60 Sources
[1]
Will the Pentagon's Anthropic controversy scare startups away from defense work? | TechCrunch
In just over a week, negotiations over the Pentagon's use of Anthropic's Claude technology fell through, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court. OpenAI, meanwhile, quickly announced a deal of its own, prompting backlash that saw users uninstalling ChatGPT and pushing Anthropic's Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place. On the latest episode of TechCrunch's Equity podcast, Kirsten Korosec, Sean O'Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon, as Kirsten wondered, "Are we going to see a changing of the tune a little bit?" Sean pointed out that this is an unusual situation in a number of ways, in part because OpenAI and Anthropic make products that "no one can shut up about." And crucially, this is a dispute over "how their technologies are being used or not being used to kill people," so it's naturally going to draw more scrutiny. Still, Kirsten argued, this is a situation that should "give any startup pause." Read a preview of our conversation, edited for length and clarity, below.

Kirsten: I'm wondering if other startups are starting to look at what's happened with the federal government, specifically the Pentagon and Anthropic, that debate and wrestling match, and [take] pause about whether they want to be going after federal dollars. Are we going to see a changing of the tune a little bit?

Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think about all the different companies, whether they're startups or even more established Fortune 500s that do work with the government and in particular with the Department of Defense or the Pentagon, [for] a lot of them, that work flies under the radar.
General Motors makes defense vehicles for the Army and has done [that] for a very long time and has worked on all-electric versions of those vehicles and autonomous versions. There's stuff like that that goes on all the time and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into within the last week is like, these are companies that make products that a ton of people use -- and also more importantly, [that] no one can shut up about. So there's just such a spotlight on them that naturally highlights their involvement to a level that I think most of the other companies that are contracting with the federal government -- and, in particular, any of the war-fighting elements of the federal government -- don't necessarily have to deal with. The only caveat I'll add to that is a lot of the heat around this discussion between Anthropic and OpenAI and the Pentagon is very specifically about how their technologies are being used or not being used to kill people, or in parts of the missions that are killing people. It's not just the attention that's on them and the familiarity we have with their brands, there is an extra element there that I feel is more abstract when you're thinking about General Motors as a defense contractor or whatever. I don't think we're going to see, like, Applied Intuition or any of these other companies that have been framing themselves as dual use back off much, just because I don't see the spotlight on it and there's just not the sort of shared understanding of what that impact might be.

Anthony: This story is so unique and specific to these companies and personalities in a lot of ways. I mean, there have been a lot of really interesting thought pieces about: What is the role of technology in government? [Of] AI in government? And I think those are all good and worthwhile questions to ask and explore.
I think also, though, that this is a very curious lens through which to examine some of those things because Anthropic and OpenAI are not actually that different in a lot of ways or the stances they're taking. It's not like one company is saying, "Hey, I don't want to work with the government" and one is saying, "Yes, I do." Or one is saying, "You can do whatever you want," and [the other is] saying, "No, I want to have restrictions." Both of them, at least publicly, are saying, "We want restrictions on how our AI gets used." It just seems like Anthropic is digging in their heels a lot more about: You cannot change the terms in this way. And then on top of that, there also just seems to be a personality layer where the CEO of Anthropic and Emil Michael -- who a lot of TechCrunch readers might remember from his Uber days, and is now [chief technology officer for the Department of Defense] -- apparently just really don't like each other. Reportedly.

Sean: Yes, there's a very big "girls are fighting" element here that we should not overlook.

Kirsten: Yeah, a little bit. There is, but the implications are a little bit stronger than that. Again, to pull back a little bit, what we're talking about here is the Pentagon and Anthropic coming into a dispute in which Anthropic appears to have lost, although I should say they are still very much being used by the military. They are considered a crucial technology, but OpenAI has kind of stepped in, and this is evolving and will likely change by the time this episode comes out. The blowback has been interesting for OpenAI, where we've seen a lot of uninstalls of ChatGPT -- I think they surged 295% after OpenAI locked in the deal with the Department of Defense. To me, all of this is noise to the really critical and dangerous thing, which is that the Pentagon was seeking to change existing terms on an existing contract.
And that is really important and should give any startup pause because the political machine that's happening right now, particularly with the DoD, appears to be different. This isn't normal. Contracts take forever to get baked in at the government level and the fact that they're seeking to change those terms is a problem.
[2]
You Should Have a Say in Military AI Policy
A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence -- the executive branch, private companies or Congress and the broader democratic process? The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff. Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully autonomous military targeting. Hegseth has objected to what he has described as "ideological constraints" embedded in commercial AI systems, arguing that determining lawful military use should be the government's responsibility -- not the vendor's. As he put it in a speech at Elon Musk's SpaceX last month, "We will not employ AI models that won't allow you to fight wars." Stripped of rhetoric, this dispute resembles something relatively straightforward: a procurement disagreement. In a market economy, the U.S. military decides what products and services it wants to buy. Companies decide what they are willing to sell and under what conditions. Neither side is inherently right or wrong for taking a position. If a product does not meet operational needs, the government can purchase from another vendor. If a company believes certain uses of its technology are unsafe, premature or inconsistent with its values or risk tolerance, it can decline to provide them. For example, a coalition of companies have signed an open letter pledging not to weaponize general-purpose robots. That basic symmetry is a feature of the free market. 
Where the situation becomes more complicated -- and more troubling -- is in the decision to designate Anthropic a "supply chain risk." That tool exists to address genuine national security vulnerabilities, such as foreign adversaries. It is not intended to blacklist an American company for rejecting the government's preferred contractual terms. Using this authority in that manner marks a significant shift -- from a procurement disagreement to the use of coercive leverage. Hegseth has declared that "effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic." This action will almost certainly face legal challenges, but it raises the stakes well beyond the loss of a single DOD contract. It is also important to distinguish between the two substantive issues Anthropic has reportedly raised. The first, opposition to domestic surveillance of U.S. citizens, touches on well-established civil liberties concerns. The U.S. government operates under constitutional constraints and statutory limits when it comes to monitoring Americans. A company stating that it does not want its tools used to facilitate domestic surveillance is not inventing a new principle; it is aligning itself with longstanding democratic guardrails. To be clear, DOD is not affirmatively asserting that it intends to use the technology to surveil Americans unlawfully. Its position is that it does not want to procure models with built-in restrictions that preempt otherwise lawful government use. In other words, the Department of Defense argues that compliance with the law is the government's responsibility -- not something that needs to be embedded in a vendor's code. Anthropic, for its part, has invested heavily in training its systems to refuse certain categories of harmful or high-risk tasks, including assistance with surveillance. 
The disagreement is therefore less about current intent than about institutional control over constraints: whether they should be imposed by the state through law and oversight, or by the developer through technical design. The second issue, opposition to fully autonomous military targeting, is more complex. The DOD already maintains policies requiring human judgment in the use of force, and debates over autonomy in weapons systems are ongoing within both military and international forums. A private company may reasonably determine that its current technology is not sufficiently reliable or controllable for certain battlefield applications. At the same time, the military may conclude that such capabilities are necessary for deterrence and operational effectiveness. But that disagreement underscores a deeper point: the boundaries of military AI use should not be settled through ad hoc negotiations between a Cabinet secretary and a CEO. Nor should they be determined by which side can exert greater contractual leverage. If the U.S. government believes certain AI capabilities are essential to national defense, that position should be articulated openly. It should be debated in Congress, and reflected in doctrine, oversight mechanisms and statutory frameworks. The rules should be clear -- not only to companies, but to the public. The U.S. often distinguishes itself from authoritarian regimes by emphasizing that power operates within transparent democratic institutions and legal constraints. That distinction carries less weight if AI governance is determined primarily through executive ultimatums issued behind closed doors. There is also a strategic dimension. If companies conclude that participation in federal markets requires surrendering all deployment conditions, some may exit those markets. Others may respond by weakening or removing model safeguards to remain eligible for government contracts. Neither outcome strengthens U.S. technological leadership. 
The DOD is correct that it cannot allow potential "ideological constraints" to undermine lawful military operations. But there is a difference between rejecting arbitrary restrictions and rejecting any role for corporate risk management in shaping deployment conditions. In high-risk domains -- from aerospace to cybersecurity -- contractors routinely impose safety standards, testing requirements and operational limitations as part of responsible commercialization. AI should not be treated as uniquely exempt from that practice. Moreover, built-in safeguards need not be seen as obstacles to military effectiveness. In many high-risk sectors, layered oversight is standard practice: internal controls, technical fail-safes, auditing mechanisms and legal review operate together. Technical constraints can serve as an additional backstop, reducing the risk of misuse, error or unintended escalation.

Congress is AWOL

The DOD should retain ultimate authority over lawful use. But it need not reject the possibility that certain guardrails embedded at the design level could complement its own oversight structures rather than undermine them. In some contexts, redundancy in safety systems strengthens, not weakens, operational integrity. At the same time, a company's unilateral ethical commitments are no substitute for public policy. When technologies carry national security implications, private governance has inherent limits. Ultimately, decisions about surveillance authorities, autonomous weapons and rules of engagement belong in democratic institutions. This episode illustrates a pivotal moment in AI governance. AI systems at the frontier of technology are now powerful enough to influence intelligence analysis, logistics, cyber operations and potentially battlefield decision-making. That makes them too consequential to be governed solely by corporate policy -- and too consequential to be governed solely by executive discretion.
The solution is not to empower one side over the other. It is to strengthen the institutions that mediate between them. Congress should clarify statutory boundaries for military AI use and investigate whether sufficient oversight exists. The DOD should articulate detailed doctrine for human control, auditing and accountability. Civil society and industry should participate in structured consultation processes rather than episodic standoffs, and procurement policy should reflect those publicly established standards. If AI guardrails can be removed through contract pressure, they will be treated as negotiable. However, if they are grounded in law, they can become stable expectations. Democratic constraints on military AI belong in statute and doctrine -- not in private contract negotiations.
[3]
Why replacing Anthropic with OpenAI at the Pentagon could take months
Swapping out one AI model on a classified network for another takes minutes. Retraining the people who've learned to rely on it will take much longer.

The Pentagon has put Anthropic on the clock. On Thursday, the Department of Defense formally notified the company that it has been deemed a "supply chain risk" -- a label that has turned its artificial intelligence systems, including its flagship model, Claude, into a liability. The move escalates a dispute that has been brewing for weeks over Anthropic's safety-first ethos -- its commitment to limit how its technology is deployed -- and the DoD's demand for unfettered control. The Pentagon is phasing out Claude, one of the world's most advanced AI models, from its classified networks. On paper, swapping one model for another appears quick. "It's simple to swap out the models and to install new ones," according to a source close to Palantir -- a defense-tech giant that has partnered with Anthropic to host Claude inside secure military networks. The hardest part begins after the model is gone: rewiring everything that's been built around it. Claude is what's known as a frontier model, an AI capable of executing complex, multi-step tasks on its own. That's not how the DoD currently uses it. Lauren Kahn, a researcher at Georgetown University's Center for Security and Emerging Technology and a former Pentagon official, describes its deployment as more like a chatbot than a free-roaming agent. Claude sits "on top" of existing software, she says, and shows up only in certain places -- tightly controlled corners of a classified environment. And it isn't connected to "effectors," she says, meaning that it can't "launch an effect" -- a weapon command, for example -- "in the real world."
In late 2024, Anthropic became the first AI company to clear the Pentagon's classified hurdles. Until recently, Claude was the only large language model publicly known to be operating in that environment. Accessed through tools like Claude Gov -- which became a preferred option for some defense personnel, according to Bloomberg -- the system taps into enormous data pipelines to turn a flood of unstructured information into readable intelligence. In other words, Claude summarizes information for the Defense Department, but it can't pull a trigger. Once people rely on a tool, it can be hard to let it go. Each integration must be offboarded piece by piece. And whatever replaces Claude must clear strict security reviews and approvals before it touches a classified system. Software changes inside the Pentagon can be "excruciating," Kahn says. Even something as simple as installing Microsoft Office "takes months and months and months." At press time, Anthropic did not respond to multiple requests for comment from Scientific American. The Department of Defense declined to discuss the specifics of the transition. Every AI model fails in its own characteristic ways. Operators who've spent months using Claude learn those quirks through trial and error: which prompts land badly, which outputs require a second look. Kahn studies automation bias, the tendency of human operators to over-delegate to machines. "I worry about a slightly heightened risk of automation bias in the early stages as they're working out the kinks," she says. People will check for Claude's mistakes, while the replacement model makes new ones. The personnel most exposed to the transition will be the power users who built the most customized workflows and learned the model's downsides well enough to exploit its strengths. While Pentagon personnel brace for the operational transition, the messy details of the political standoff have spilled into public view. 
Late Thursday, Anthropic CEO Dario Amodei published a blog post vowing to challenge the government's "supply chain risk" designation in court, arguing the statute is typically reserved for foreign adversaries. Behind the scenes, the standoff appears to have devolved into a game of chicken. Emil Michael, the Pentagon official who's led the department's negotiations with Anthropic, posted on X that talks with the company are dead, and Amodei is reportedly scrambling to resuscitate them. Meanwhile, the Defense Department is already moving on. Within hours of Anthropic's official blacklisting, OpenAI announced it had signed a deal to deploy its models on the military's classified networks, securing the contract its rival had just lost. Anthropic was willing to risk eviction from the U.S. government rather than compromise its safety-first ethos. Its replacement initially accepted the Pentagon's demand for unfettered operational flexibility -- only to hastily add, after OpenAI CEO Sam Altman faced massive internal and public backlash, the very surveillance guardrails Anthropic had advocated for. The swap may not be so simple, after all.
[4]
Anthropic CEO Dario Amodei could still be trying to make a deal with Pentagon | TechCrunch
Anthropic's $200 million contract with the Department of Defense (DoD) broke down last week after the two parties failed to come to an agreement over the degree to which the military could obtain unrestricted access to Anthropic's AI. When the DoD made a deal with OpenAI instead, it seemed that the military's relationship with Anthropic would come to a close -- but new reporting from the Financial Times and Bloomberg says that Amodei has resumed negotiations with Pentagon official Emil Michael. These talks are reportedly part of an attempt to compromise on a contract that outlines how the Pentagon can continue to access Anthropic's AI models. It would be a surprise to see Anthropic eke out a new deal, given how much vitriol has been exchanged among the parties involved. But a compromise could still hold appeal for both sides -- the Pentagon already relies on Anthropic's technology, and an abrupt switch to OpenAI's systems would be disruptive. The dispute began when Anthropic CEO Dario Amodei voiced concern over a clause that allowed the military to use Anthropic's AI for any lawful use. Amodei asserted that the company would not allow its technology to be used for domestic mass surveillance or autonomous weaponry, and wanted the contract to more clearly prohibit those uses. When Anthropic refused to comply, the department turned around and struck a deal with OpenAI instead. Since then, figures on both sides have been open about their frustrations. Michael called Amodei a "liar" with a "God complex." Amodei threw some jabs of his own at the DoD and OpenAI CEO Sam Altman in a message reportedly sent to Anthropic staff this week, calling the OpenAI deal "safety theater" and the messaging around it "straight up lies." "The main reason [OpenAI] accepted [the DoD's deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses," Amodei wrote in the memo.
Defense Secretary Pete Hegseth has pledged to declare Anthropic a "supply chain risk," essentially blacklisting the company from working with any other company that has any business with the U.S. military -- although he has yet to take any legal action to that effect. This sort of designation is typically reserved for foreign adversaries, and it's unclear whether it would survive a court challenge.
[5]
The Pentagon's Anthropic Feud 'Should Be a Wake-Up Call for Congress'
The contract dispute between the US Department of Defense and the AI developer Anthropic that boiled over at the end of February exposed in stark terms how laws and regulations have failed to keep up with the capabilities of artificial intelligence. The Pentagon wanted to be able to use Anthropic's Claude AI for "all lawful purposes," while Anthropic wanted to prohibit the military from using it for mass domestic surveillance or for fully autonomous weapons systems. After Anthropic refused to meet the government's demands, President Donald Trump and Secretary of Defense Pete Hegseth said they would declare the company a "supply chain risk," prohibiting the use of its products in defense contract work. Pentagon officials said the problem is moot because current law doesn't allow for such surveillance, and it has no plans to use the tool for autonomous weapons systems. But the laws and regulations aren't actually that clear, according to privacy and tech experts. And a contract dispute between a private company and a federal agency isn't the place to settle it. "This week exposed a real governance vacuum, and it should be a wake-up call for Congress," said Hamza Chaudhry, AI and national security lead at the Future of Life Institute. The immediate result of the contract dispute was the Pentagon striking a deal with OpenAI instead. The deal with OpenAI was less clear about the limitations of using the company's products for mass surveillance or autonomous weapons, but OpenAI leaders said this week that they have taken steps to strengthen those guardrails. CEO Sam Altman said in a post on X that the Pentagon affirmed the technology would not be used by the department's intelligence agencies.
(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) OpenAI research scientist Noam Brown posted on X that he believed the world "should not have to rely on trust in AI labs or intelligence agencies" to ensure things like safety. "I know that legislation can sometimes be slow, but I'm afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions," he wrote. The question is whether, and how, Congress will deal with these issues. The big risk of using AI for domestic surveillance isn't necessarily that Claude or ChatGPT will be spying on Americans. It's that these tools will be used to turn data the government already has, or could buy from private data brokers without needing a warrant, into information that would otherwise require a warrant. Personal data is already being harvested from you, probably from the device you're using to read this. It includes information about your browsing history, your location data, and who you talk to or associate with. Private companies, like app developers, could collect that data even if you don't realize it and sell it to other companies or to intelligence agencies. But until recently, it's been difficult for governments to process all of it in a way that makes surveillance easy. AI has changed that. Anthropic CEO Dario Amodei specifically cited this situation in a Feb. 26 statement detailing the company's reasons for standing by its red lines. "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life -- automatically and at massive scale." The other core dispute is that Anthropic wanted to keep the Pentagon from giving Claude full control of a weapons system without a "human in the loop." 
An AI tool being used to help select targets -- as is reportedly happening with Claude during the US war in Iran -- isn't beyond the pale for Anthropic or any of the major AI companies, because a person is involved in verifying and making the decision. What the company objected to was the use of AI models in making those decisions without human oversight. Amodei wrote that today's frontier models "are simply not reliable enough to power fully autonomous weapons." Greg Nojeim, senior counsel and director of the security and surveillance project at the Center for Democracy and Technology, said it's clear that AI experts don't believe the models are ready for those kinds of uses, if they ever will be. "It is striking that the Pentagon is rejecting that advice and insisting on being able to use this AI tool to kill people without human intervention," he said. The Department of Defense has argued it can't actually use fully autonomous weapons, but Chaudhry told me the most commonly cited directive on that issue doesn't prohibit them outright. The Department of Defense and Anthropic did not respond to requests from CNET to comment for this story. Regardless, experts said, the question of using such weapons isn't one to be sorted out by unelected federal bureaucrats, military commanders or private companies. Elected officials need to reckon with this. The question of how to regulate AI, and who should do it, is nothing new. The Trump administration has called for a light touch on telling AI companies what to do, despite evidence of harms ranging from chatbots encouraging suicide to the AI-enabled erosion of personal privacy. States have tried to rein in AI developers to deal with these issues, but face pushback from a federal government intent on deciding how the tech is handled. In the case of AI use by the military and federal spy services, the question of who should regulate is clear: Congress.
"Unelected leaders of private sector companies cannot be relied upon to use a private contract to fill a gap that democratically elected lawmakers haven't filled legislatively," Chaudhry said. "What we need are statutory red lines -- clear, durable, democratically enacted rules about what AI can and cannot be used for in national security contexts, as AI transforms national security." Nojeim said AI surveillance is "not the kind of conduct that the military should be able to self-authorize." Congress will consider reauthorization of part of the Foreign Intelligence Surveillance Act next month and could use that opportunity to decide whether intelligence agencies need warrants when using purchased data. "Ideally, Congress would step in and limit the government's ability to buy data about Americans and bypass court authorization requirements, and ideally Congress would set the rules about how the Department of Defense should be protecting Americans against AI-powered surveillance and setting rules about the use of autonomous weapons that can kill without a human in the loop," he said. Congress has a host of other AI-related regulatory issues to consider, but the debate about using AI for surveillance and autonomous weapons is eye-opening and could spur quicker action. The Pentagon's retaliation against Anthropic -- its official declaration this week of the company as a supply chain risk -- could have a chilling effect on other companies concerned about how the government will use their technology. "It sets a precedent that the government can retaliate against a company that has imposed safety limits on the use of its technology because it knows more about the risks and reliability of its technology than the government could," Nojeim said. "That precedent will make us all less safe." 
Anthropic said Thursday that it had received a letter from the Department of Defense designating it a supply chain risk and that the letter's language was narrower than the broad threats made by administration officials the previous week. "With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," Amodei said in a statement, using Hegseth's preferred name for the department. Amodei said the company intends to challenge the designation in court but is also continuing to negotiate with the Pentagon. Despite the dispute and the designation as a supply chain risk, the US military has continued to use Anthropic's tools, including in extensive ways during the current war in Iran. Amodei said Anthropic will keep supplying its AI models to the military and national security groups "at nominal cost and with continuing support from our engineers" for as long as it is allowed to. "Anthropic has much more in common with the Department of War than we have differences," Amodei said.
[6]
Anthropic Has Brought Something New to AI: The Power to Say 'No'
For years technology has been defined by the unstoppable growth of a handful of companies. Big Tech's consolidation of power seemed a foregone conclusion even as Sam Altman's OpenAI sparked an artificial intelligence boom with ChatGPT. Having promised to build AI for humanity, Altman became a proxy for Microsoft Corp., just as his rival in the race to construct utopia, Demis Hassabis, now ships product for Google. But the last two months of market upheaval -- and standoffs with the Pentagon over how this tech might be militarized -- have shown a company breaking that mold. Anthropic PBC has no single Big Tech backer it can call a proxy (not yet anyway) and it has shunned the Silicon Valley "blitzscaling" mantra of shipping fast to dominate a market and patch problems on the fly. Its Chief Executive Officer Dario Amodei has said "no" to many of the things Altman rushed into. However disingenuous Amodei may one day turn out to be about safety -- particularly if his products destroy jobs -- an encouraging picture is emerging of his impact on the industry. Anthropic is a serious competitor to tech's established order and is shaking things up in an AI business that has itself been wildly disrupting entire corporate sectors, or at least their share prices. That is a healthy outcome for a tech market that was becoming far too entrenched. In the three years since ChatGPT sparked the generative-AI boom, the market capitalizations of the Magnificent Seven tech stocks have increased by $12 trillion, their total value (about $20 trillion) now on par with the gross domestic product of China. Some of those giants like Microsoft and Alphabet Inc. are behind today's most popular chatbots. And while a cluster of promising startups might once have loosened their stranglehold, the upstarts have mostly been hoovered up by the big incumbents through stealth acquisitions. Anthropic has somehow avoided that fate. 
The company, whose flagship chatbot Claude is beloved by software engineers and startup founders in Silicon Valley, has significant financial backing from Big Tech that has yet to translate to operational influence. Amazon is thought to hold between 15% and 20% of the company, and Alphabet's Google has 14%. Though Microsoft's 27% of OpenAI is not a much bigger stake, it comes with deep product integration. Microsoft's Azure is OpenAI's cloud provider, and Microsoft's Copilot chatbot is built on OpenAI's models (on Monday Microsoft said it would incorporate Claude Cowork, too). The two companies' commercial fates are intertwined in a way that Anthropic's and Amazon's are not. Anthropic's Amodei has also taken a more focused approach to product development. Claude, for instance, does not generate images, limiting the risks around users producing deepfakes. And, unlike OpenAI, the company has zeroed in on business customers rather than consumers, meaning it avoids paying the hefty computing costs of supporting a vast user base and is on course to generate almost $20 billion in annual revenue. That's a very different approach from Altman's; he has become the "Yes Man" of AI, rushing to embrace every available opportunity. OpenAI introduced a shopping feature for ChatGPT last year, only to walk it back in recent weeks.
Altman was opportunistic again when he struck a deal with the Pentagon, taking advantage of its fallout with Anthropic, but later admitting that his own deal was "sloppy." Amodei's commercial success is what gave such weight to his "no" to the Pentagon over guarantees not to use Claude for autonomous weapons or spying on Americans. A struggling startup wouldn't have commanded the same attention and sparked the same public debate as one worth $380 billion. That is one thing genuine competition can offer beyond pricing pressure: a greater chance of breaking the ideological groupthink of established players and forcing hard questions that monopolists rarely have to answer. It's hard to see Microsoft, Google or OpenAI rebuffing the Defense Department in quite the same way. With any luck, that principle will extend into areas like consumer safety and terms of service for customers, shaping what AI becomes and adding some friction to the "move fast, break things" strategy that has fueled the boom. Competition is essential to healthy markets and, for the present in AI, there might be enough of it to make a difference.
[7]
Palantir faces challenge to remove Anthropic from Pentagon's AI software
NEW YORK, March 4 (Reuters) - Palantir (PLTR.O) is the latest company to face the painful task of unwinding from Anthropic in the wake of the AI lab's dispute with the Pentagon over safety guardrails, raising questions about a key military software platform. Palantir's Maven Smart Systems - a software platform that supplies militaries with intelligence analysis and weapons targeting - uses multiple prompts and workflows that were built using Anthropic's Claude Code, according to two people familiar with the matter. U.S. President Donald Trump last week ordered the government to stop working with Anthropic after the AI lab reached an impasse in its row with the Pentagon over whether its policies could constrain autonomous weapons and government surveillance. Palantir, which holds Maven-related contracts with the Defense Department and other U.S. national security agencies that have a potential value of more than $1 billion, will have to replace Claude with another AI model and rebuild parts of its software, one of the sources said. Reuters could not determine how long this process would take. Defense Secretary Pete Hegseth has suggested the change must be immediate, stating last week: "Effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity" with Anthropic. The Pentagon, Anthropic and Palantir declined to comment. Palantir CEO Alex Karp weighed in on the Pentagon's dispute on Tuesday without naming Anthropic, stating that Silicon Valley companies that claim AI will take white-collar jobs and also "screw the military" could lead toward "the nationalization of our technology," according to comments he made at a defense tech conference in Washington, which were posted on X. Anthropic's role inside Maven underscores the messy and potentially costly challenge facing the Pentagon, other government agencies and U.S.
companies as they unwind ties with a pivotal AI supplier that has become deeply embedded across public and private-sector systems. U.S. defense contractors, like Lockheed Martin (LMT.N), are expected to follow the Pentagon's order to purge Anthropic's prized AI tools from their supply chains, government contracting and technology attorneys said, even though the Trump administration's ban on their use may fail in court. Maven is the Pentagon's flagship artificial-intelligence program, designed to ingest data from multiple sources to identify military points of interest and speed up intelligence analysis and targeting decisions. The system has played a role in recent U.S. military operations. Reuters could not immediately determine whether the software platform was used during the January raid in Venezuela that captured former President Nicolas Maduro, or during the recent strikes on Iran. Palantir's software has become central to the Pentagon's drive to integrate artificial intelligence into military operations, a position that has elevated the company from a niche intelligence contractor into a core supplier for U.S. defense modernization efforts and helped propel its market value to around $350 billion. Reporting by David Jeans in New York and Mike Stone in Washington; Editing by Joe Brock and Matthew Lewis
[8]
Anthropic collides with the Pentagon over AI safety -- here's everything you need to know
As Anthropic releases its most autonomous agents yet, a mounting clash with the military reveals the impossible choice between global scaling and a "safety first" ethos On February 5 Anthropic released Claude Opus 4.6, its most powerful artificial intelligence model. Among the model's new features is the ability to coordinate teams of autonomous agents -- multiple AIs that divide up the work and complete it in parallel. Twelve days after Opus 4.6's release, the company dropped Sonnet 4.6, a cheaper model that nearly matches Opus's coding and computer skills. In late 2024, when Anthropic first introduced models that could control computers, they could barely operate a browser. Now Sonnet 4.6 can navigate Web applications and fill out forms with human-level capability, according to Anthropic. And both models have a working memory large enough to hold a small library. Enterprise customers now make up roughly 80 percent of Anthropic's revenue, and the company closed a $30-billion funding round last week at a $380-billion valuation. By every available measure, Anthropic is one of the fastest-scaling technology companies in history. But behind the big product launches and valuation, Anthropic faces a severe threat: the Pentagon has signaled it may designate the company a "supply chain risk" -- a label more often associated with foreign adversaries -- unless it drops its restrictions on military use. Such a designation could effectively force Pentagon contractors to strip Claude from sensitive work. Tensions boiled over after January 3, when U.S. special operations forces raided Venezuela and captured Nicolás Maduro. The Wall Street Journal reported that forces used Claude during the operation via Anthropic's partnership with the defense contractor Palantir -- and Axios reported that the episode escalated an already fraught negotiation over what, exactly, Claude could be used for. 
When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the question raised immediate alarms at the Pentagon. (Anthropic has disputed that the outreach was meant to signal disapproval of any specific operation.) Secretary of Defense Pete Hegseth is "close" to severing the relationship, a senior administration official told Axios, adding, "We are going to make sure they pay a price for forcing our hand like this." The collision exposes a question: Can a company founded to prevent AI catastrophe hold its ethical lines once its most powerful tools -- autonomous agents capable of processing vast datasets, identifying patterns and acting on their conclusions -- are running inside classified military networks? Is a "safety first" AI compatible with a client that wants systems that can reason, plan and act on their own at military scale? Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons. CEO Dario Amodei has said Anthropic will support "national defense in all ways except those which would make us more like our autocratic adversaries." Other major labs -- OpenAI, Google and xAI -- have agreed to loosen safeguards for use in the Pentagon's unclassified systems, but their tools aren't yet running inside the military's classified networks. The Pentagon has demanded that AI be available for "all lawful purposes." The friction tests Anthropic's central thesis. The company was founded in 2021 by former OpenAI executives who believed the industry was not taking safety seriously enough. They positioned Claude as the ethical alternative. In late 2024 Anthropic made Claude available on a Palantir platform with a cloud security level up to "secret" -- making Claude, by public accounts, the first large language model operating inside classified systems. 
The question the standoff now forces is whether safety-first is a coherent identity once a technology is embedded in classified military operations and whether red lines are actually possible. "These words seem simple: illegal surveillance of Americans," says Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology. "But when you get down to it, there are whole armies of lawyers who are trying to sort out how to interpret that phrase." Consider the precedent. After the Edward Snowden revelations, the U.S. government defended the bulk collection of phone metadata -- who called whom, when and for how long -- arguing that these kinds of data didn't carry the same privacy protections as the contents of conversations. The privacy debate then was about human analysts searching those records. Now imagine an AI system querying vast datasets -- mapping networks, spotting patterns, flagging people of interest. The legal framework we have was built for an era of human review, not machine-scale analysis. "In some sense, any kind of mass data collection that you ask an AI to look at is mass surveillance by simple definition," says Peter Asaro, co-founder of the International Committee for Robot Arms Control. Axios reported that the senior official "argued there is considerable gray area around" Anthropic's restrictions "and that it's unworkable for the Pentagon to have to negotiate individual use-cases with" the company. Asaro offers two readings of that complaint. The generous interpretation is that surveillance is genuinely impossible to define in the age of AI. The pessimistic one, Asaro says, is that "they really want to use those for mass surveillance and autonomous weapons and don't want to say that, so they call it a gray area." Regarding Anthropic's other red line, autonomous weapons, the definition is narrow enough to be manageable -- systems that select and engage targets without human supervision.
But Asaro sees a more troubling gray zone. He points to the Israeli military's Lavender and Gospel systems, which have been reported as using AI to generate massive target lists that go to a human operator for approval before strikes are carried out. "You've automated, essentially, the targeting element, which is something [that] we're very concerned with and [that is] closely related, even if it falls outside the narrow strict definition," he says. The question is whether Claude, operating inside Palantir's systems on classified networks, could be doing something similar -- processing intelligence, identifying patterns, surfacing persons of interest -- without anyone at Anthropic being able to say precisely where the analytical work ends and the targeting begins. The Maduro operation tests exactly that distinction. "If you're collecting data and intelligence to identify targets, but humans are deciding, 'Okay, this is the list of targets we're actually going to bomb' -- then you have that level of human supervision we're trying to require," Asaro says. "On the other hand, you're still becoming reliant on these AIs to choose these targets, and how much vetting and how much digging into the validity or lawfulness of those targets is a separate question." Anthropic may be trying to draw the line more narrowly -- between mission planning, where Claude might help identify bombing targets, and the mundane work of processing documentation. "There are all of these kind of boring applications of large language models," Probasco says. But the capabilities of Anthropic's models may make those distinctions hard to sustain. Opus 4.6's agent teams can split a complex task and work in parallel -- an advancement in autonomous data processing that could transform military intelligence. Both Opus and Sonnet can navigate applications, fill out forms and work across platforms with minimal oversight. 
These features driving Anthropic's commercial dominance are what make Claude so attractive inside a classified network. A model with a huge working memory can also hold an entire intelligence dossier. A system that can coordinate autonomous agents to debug a code base can coordinate them to map an insurgent supply chain. The more capable Claude becomes, the thinner the line between the analytical grunt work Anthropic is willing to support and the surveillance and targeting it has pledged to refuse. As Anthropic pushes the frontier of autonomous AI, the military's demand for those tools will only grow louder. Probasco fears the clash with the Pentagon creates a false binary between safety and national security. "How about we have safety and national security?" she asks. This article was first published at Scientific American.
[9]
Anthropic is reportedly back in talks with the Defense Department
Anthropic is reportedly trying to reach a new deal with the US Defense Department, which could prevent the government from labeling it a supply chain risk. According to the Financial Times and Bloomberg, Anthropic CEO Dario Amodei has resumed talks with the agency over the use of its AI models. In particular, the publications say that Amodei is having discussions with Emil Michael, the Under Secretary of Defense for Research and Engineering. The two of them were trying to work out contract terms governing the use of Anthropic's models before negotiations broke down and the government soured on the company. The Financial Times reports that they couldn't agree on language that the AI company wanted to see to ensure that its technology will not be used for mass surveillance. In a memo sent to Anthropic staff, Amodei reportedly said that the department offered to accept the company's terms if it deleted a specific phrase about "analysis of bulk acquired data." He continued that it "was the single line in the contract that exactly matched" the scenario it was "most worried about." Anthropic, which first signed a $200 million deal with the department in 2025, refused to comply with the Pentagon's demands. The agency then threatened to cancel its existing contract and to label it a "supply chain risk," a designation typically reserved for Chinese companies. President Trump ordered government agencies to stop using Anthropic's technology afterward. However, there's a "six-month phase-out period" that reportedly allowed the government to use Anthropic's AI tools to stage an air attack on Iran. Amodei also said in the memo that the messaging OpenAI has been trying to convey is "just straight up lies," the Financial Times reports. He hinted, as well, that one of the reasons his company is now on the outs with the government is that he hasn't "given dictator-style praise to Trump" like OpenAI's Sam Altman has.
If you'll recall, OpenAI announced that it had reached an agreement shortly after it came out that Anthropic was having issues with the agency. Its CEO, Sam Altman, said on X that he told the government Anthropic shouldn't be designated as a supply chain risk. During an AMA on the platform, he said he didn't know the details of Anthropic's contract, but that if it was the same as the one OpenAI had signed, Anthropic should have agreed to it. Anthropic's Claude chatbot rose to the top of Apple's Top Free Apps leaderboard after OpenAI announced its Defense Department contract, beating out ChatGPT. Altman later posted on X that OpenAI will amend its deal with language that explicitly prohibits the use of its AI systems for mass surveillance of Americans. When it comes to the military's use of its technology, though, CNBC says that Altman told staffers that the company doesn't "get to make operational decisions." In an all-hands meeting, Altman reportedly said: "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that."
[10]
Anthropic and the Pentagon are back at the negotiating table, FT reports
Anthropic CEO Dario Amodei is back at the negotiating table with the U.S. Department of Defense after the breakdown of talks on Friday over the military's use of the company's AI tools, according to The Financial Times. Amodei is in talks with Emil Michael, under-secretary of defense for research and engineering, in a last-ditch effort to reach an agreement on the terms governing the Pentagon's access to Anthropic's Claude models, the FT reported, citing anonymous sources with knowledge of the matter. Discussions fell apart Friday, with President Donald Trump directing federal agencies to stop using Anthropic's tools, and Defense Secretary Pete Hegseth saying he would designate the company a supply-chain risk to national security. Last week, Michael had attacked Amodei in an X post, calling him a "liar" with a "God complex." Agreeing to a new contract would enable the U.S. military to continue using Anthropic's technology, which has reportedly been used in Washington's war with Iran. Claude became the first major model deployed in the government's classified networks through a $200 million contract awarded by the DoD to Anthropic, but the company later sought guarantees that its tools would not be used for domestic surveillance or autonomous weapons. The Pentagon had demanded that the military be allowed to employ the technology for any lawful use. In a Friday memo seen by the FT, Amodei reportedly told staff that near the end of negotiations with the Defense Department, it had offered to accept Anthropic's terms if they deleted a "specific phrase about 'analysis of bulk acquired data'" -- a line he said "exactly matched this scenario we were most worried about."
Amodei also wrote in his note that messaging from the Pentagon and OpenAI, which struck a new deal with the Defense Department on Friday, was "just straight up lies about these issues or tries to confuse." The timing of OpenAI's deal with the Pentagon, announced within hours of the White House decrying Anthropic, sparked a public backlash, with Anthropic seeing a surge of app downloads while ChatGPT reportedly saw a spike in uninstallations. OpenAI CEO Sam Altman later said that his company "shouldn't have rushed" its deal and outlined revisions to its own safeguards governing how the Defense Department can use its technology. In a post on X, Altman further addressed the controversy, saying: "In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we've agreed to." Anthropic was founded in 2021 by a group of former OpenAI staff and researchers who left the firm after disagreements over its direction, and the company markets itself as a "safety-first" alternative. Government officials have for months criticized Anthropic for allegedly being overly concerned with AI safety. A tech industry group whose members include Nvidia, Google and Anthropic sent a letter to Hegseth on Wednesday expressing concern over his designating a U.S. company as a supply-chain risk.
[11]
The Unaddressed Problem With the Pentagon's AI Dispute
The weekslong conflict between Anthropic and the Department of Defense is entering a new phase. After being designated a supply-chain risk by DOD last week, which effectively forbids Pentagon contractors from using its products, the AI company filed a lawsuit against DOD this morning alleging that the government's actions were unconstitutional and ideologically motivated. Then, this afternoon, 37 employees from OpenAI and Google DeepMind -- including Google's chief scientist, Jeff Dean -- signed an amicus brief in support of Anthropic, in essence lending support to one of their employers' greatest business rivals (even as OpenAI itself has established a controversial new contract with DOD). The standoff is unprecedented. For the past few weeks, Anthropic has been in heated negotiations with the Pentagon over how the U.S. military can use the firm's AI systems. Anthropic CEO Dario Amodei had refused terms that would have seemingly allowed the Trump administration to use the company's AI systems for mass domestic surveillance or to power fully autonomous weapons, leading DOD officials to accuse Amodei of "putting our nation's safety at risk" and of having a "God-complex." Nobody knows how this dispute will end. A spokesperson for Anthropic told me that the lawsuit "does not change our longstanding commitment to harnessing AI to protect our national security" and that the firm will "pursue every path toward resolution, including dialogue with the government." A DOD spokesperson told me that the department does not comment on litigation. Read: Inside Anthropic's killer-robot dispute with the Pentagon But a conflict like this was inevitable, and more are sure to come. The government does not have anything close to a legal framework for regulating generative AI or, for that matter, online data collection. 
There are few legal, externally enforced guardrails on the use of AI in autonomous weaponry, and fewer still on how AI can be used to process the huge sums of information that federal agencies can collect on people: location data, credit-card purchases, browsing-history data, and so on. Because the laws are loose, Anthropic and OpenAI have been able to set their own privacy policies and guidelines for how AI can and cannot be used, and then change them at will; OpenAI, Meta, and Google, for instance, have all reversed previous restrictions on military applications of AI. But this cuts in the other direction as well: Anthropic has effectively been branded an enemy of the state for opposing the administration's desire to be able to use its generative-AI systems in potential autonomous-weapons systems and for surveilling Americans, so long as the applications are technically legal. The surveillance concerns were of particular issue for the OpenAI and Google DeepMind employees who signed the amicus brief today. They wrote that AI has the ability to significantly transform how once-separate data streams could be used to keep tabs on Americans: "From our vantage point at frontier AI labs, we understand that an AI system used for mass surveillance could dissolve those silos, correlating face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously." The Pentagon has said that it does not intend to use AI to monitor Americans en masse, and it explicitly said this in its new contract with OpenAI, which also cites several existing national-security laws and policies that DOD has agreed to. But as I wrote last week, those same policies have already permitted spying on Americans with existing technologies, to say nothing of AI. Meanwhile, Elon Musk's xAI has reportedly agreed to a Pentagon contract with still less restrictive terms. 
The American public has no choice now but to trust that Defense Secretary Pete Hegseth, Musk, OpenAI CEO Sam Altman, and Amodei will not use AI to surveil them. (OpenAI has a corporate partnership with The Atlantic.) Anthropic has said that it is not wholly opposed to its technology's use in fully autonomous weapons but that today's AI models are not ready to power such weapons. The AI employees who signed today's amicus brief, in addition to the nearly 1,000 OpenAI and Google employees who signed a public letter in support of Anthropic last month, agree. An existing DOD policy about developing and using autonomous weapons is vague and intended for discrete systems with particular geographic targets; some experts have argued that it is likely inadequate for widespread, AI-enabled warfare. The policy is also not a law, and is thus subject to change and interpretation based on the opinions of any given presidential administration. All of these are complicated issues that demand actual deliberation. Instead, last week, President Trump told Politico: "I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that." Instead of listening to and learning from debates, the administration is discouraging them. If you take a step back, the problem of AI outpacing established rules and laws is absolutely everywhere. Nearly four years into the ChatGPT era, schools still haven't figured out what to do about not just widespread cheating but also the apparent obsolescence of some traditional forms of study altogether. Existing copyright law breaks down when applied to the use of authors' and artists' work, without their consent, to train generative-AI models.
Even if generative-AI tools should soon automate wide swaths of the economy, neither AI firms nor governments nor employers are devoting many resources, other than writing research reports, to figuring out what to do about many millions of Americans potentially being put out of work. The energy demands of AI data centers are straining grids and setting back climate goals worldwide. Instead of pursuing well-considered legislation by consensus, the Trump administration seems bent on having full control over AI without facing any accountability. Congress is, as usual, slow and hapless when it comes to an emerging and powerful technology. And although AI firms frequently warn about their technology, they are also racing ahead to develop and sell ever more capable models. When faced with the prospect of greater responsibility, they typically deflect; for example, when I spoke with Jack Clark, Anthropic's chief policy officer, last summer about whether the AI industry was moving too quickly, he told me: "The world gets to make this decision, not companies." Elsewhere, Anthropic has stated that it "avoids being heavily prescriptive." For his part, Altman is fond of saying that AI companies must learn "from contact with reality." Yet the world -- civil society, all of us living in this AI-saturated reality -- has little say in the technology's development. On Friday, in an interview with The Economist, Anthropic's Amodei more or less laid out the dynamic himself. "We don't want to make companies more powerful than government," he said. "But we also don't want to make government so powerful that it can't be stopped. We have both problems at once." America is barreling toward a future in which nobody claims responsibility for AI. Everyone will live with the consequences.
[12]
Opinion | The Future We Feared Is Already Here
For years now, questions about A.I. have taken the form of "what happens if?" What happens if A.I. begins replacing workers? What happens if it becomes capable of writing its own code? What happens if it begins to deceive those testing its capabilities? What happens if governments use it for surveillance and war? What happens if governments decide it is so powerful that they need control of the labs that develop it? This year, the A.I. questions have taken a new form, "what happens now?" What happens now that A.I. is, or at least is being used as the excuse for, replacing workers? What happens now that it is writing its own code? What happens now that it seems to recognize when it is being evaluated and reacts by changing its behavior? What happens now that governments are threading it through the national security state and using it in operations and wars? What happens now that the U.S. government has decided the technology is so powerful it needs a measure of control over labs that develop it? The showdown between the Pentagon and Anthropic is a window into how unprepared we are for the questions we are already facing. In July, Anthropic signed a deal with the Pentagon to integrate Claude, its A.I. system, into the military's operations. The contract included two red lines: Claude could not be used for mass surveillance or for lethal autonomous weapons. Over the ensuing months, the Pentagon decided these prohibitions were intolerable, that they amounted to an A.I. company demanding operational control over the military. Negotiations collapsed over a clause in the contract barring the Pentagon from using Claude to analyze bulk commercial data -- technically, that might not be "surveillance" because the data would be legally acquired, but in practice it could be a powerful way to surveil Americans. Few would have been surprised if the Pentagon had canceled its contract with Anthropic and sought a different vendor for its A.I. 
needs -- as it eventually did, choosing to work with OpenAI. But Pete Hegseth, the secretary of defense, went further, declaring Anthropic a "supply chain risk" and saying no company that does work with the Pentagon could engage in "commercial activity" with Anthropic. This would destroy Anthropic, as everyone from Amazon to Nvidia would be prohibited from working with it. Whether Hegseth has the legal authority to demolish Anthropic in this way is doubtful. Anthropic says the letter it received from the Pentagon is more narrow, prohibiting the Pentagon's contractors from using Anthropic in fulfilling defense contracts. Many legal experts think the courts will look skeptically on designating Anthropic a supply-chain risk given that the Pentagon used Claude in the Maduro raid and is still using it in the Iran war -- how big of a risk can it be, if the military is using it even now? Still, the spectacle of the Trump administration threatening to destroy one of America's leading A.I. companies has shocked even former Trump aides. "Essentially, the United States secretary of war announced his intention to commit corporate murder," Dean Ball, who served as a senior adviser on A.I. in the Trump White House in 2025, and is now a senior fellow at the Foundation for American Innovation, wrote. "The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: Do business on our terms, or we will end your business." Like Ball, I find the Trump administration's actions chilling. But let me try to take both sides at their best arguments. Artificial intelligence models are strange technologies. Most technologies are mechanistic: press the brake pedal on your car and the car slows; press the power button on your laptop and the computer boots up; pull the trigger on a gun and the gun fires. These machines have no agency. But A.I. models work differently. They make choices. They consider context. 
The language fails here -- I am not saying they have agency or discernment in the way a human being does -- but they are not mechanistic and predictable in the way a tank or a teakettle is. If I ask Claude to help me plan a murder or assist in the creation of a novel bioweapon or plan a heist, it will refuse. And its refusals will not be limited to a narrow set of explicitly prohibited uses. A.I. companies must figure out how to teach their models to tell the difference between a sane person looking for help on a zany idea and a person who is tipping into psychosis, between a cybersecurity consultant looking to patch vulnerabilities and a hacker looking for holes he can exploit. Because A.I. is a general-purpose technology that will encounter an endless permutation of real-world questions, no hard-coded set of rules will suffice, and so more generalizable structures of ethical behavior and situational awareness are needed. The different A.I. systems approach this differently. Claude is built around a lengthy internal constitution, written in part by philosophers, that is meant to guide the moral judgments it makes. To read that constitution is to face up to the weirdness of the world we have entered. The primary directive Anthropic gives Claude is "to prioritize not undermining human oversight of A.I." -- it is told to prioritize that even over ethical behavior, because "a given iteration of Claude could turn out to have harmful values or mistaken views, and it's important for humans to be able to identify and correct any such issues before they proliferate or have a negative impact on the world." Anthropic wants Claude to be helpful, of course, but it warns Claude that "helpfulness that creates serious risks to Anthropic or the world is undesirable to us." And what if Anthropic itself is in the wrong? 
The constitution reads: "When Claude faces a genuine conflict where following Anthropic's guidelines would require acting unethically, we want Claude to recognize that our deeper intention is for it to be ethical, and that we would prefer Claude act ethically even if this means deviating from our more specific guidance." These are not concepts you need to embed into a toaster or a missile. "The people who are closest to this technology don't really think of it as a tool," Helen Toner, the interim director of Georgetown's Center for Security and Emerging Technology, told me. "They talk about it as more like raising a child or as a second advanced species." Which brings us to the Trump administration. It demanded that Claude be offered with no red lines and an "any lawful use" standard. But that raises a few obvious questions. The first is that the Trump administration often acts lawlessly. It routinely violates the clear language of the law, as when it tried to end birthright citizenship through an executive order or sought to encircle the globe in idiosyncratic tariffs using authorities designed for national security. It tried -- and failed -- to indict six Democratic lawmakers, including Senators Mark Kelly and Elissa Slotkin, for posting a video saying that service members had an obligation to disobey illegal orders. The second is that the laws themselves are often unclear and must be worked out through interpretations and negotiations and lawsuits. What is "any lawful use" when the law is contested? And third, even where the laws are clear, they were not written with the capabilities of A.I. systems in mind. The fight over bulk data collection reflects Anthropic's concern that the laws governing the use of that data did not contend with what A.I. now makes possible. "Powerful A.I. 
makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life -- automatically and at massive scale," Dario Amodei, the chief executive of Anthropic, wrote in response to the Pentagon's demands. An "any lawful use" standard does not, in other words, guarantee that the laws will be followed, either in spirit or in letter. It would mean, in essence, a "whatever Pete Hegseth says" standard. Much mischief could lurk in the shadows. We don't know what, say, the Defense Intelligence Agency is up to on any given day. On the other hand, the Trump administration is the democratically elected executor of the laws. Its officials are more accountable to the public than the chief executives of A.I. companies. It is true that the public can elect an ill-intentioned or unwise government, but that is the price of democracy, and it cannot be subverted by private companies. Anthropic's position was not, however, that the Trump administration could not be trusted with Claude. Quite the opposite. When Anthropic signed its deal with the Trump administration, it was one of the first of its kind for a frontier A.I. company. It seems closer to the mark to say that the Trump administration, or many of its allies, decided Anthropic could not be trusted. Elon Musk had been unleashing a steady stream of online invective against Anthropic for months -- whether because he disagrees with the company, or wants its contracts, or both, I don't pretend to know. In February, he posted: "Your AI hates Whites & Asians, especially Chinese, heterosexuals and men. This is misanthropic and evil." (I can only speak for myself, but I am a white, heterosexual man, and Claude does not seem to hate me.) 
Katie Miller, Stephen Miller's wife and a former employee of both DOGE and Musk's xAI, responded to an Anthropic co-founder expressing his loyalty to "the principles of classical liberal democracy" by posting, "if this is what they say publicly, this is how their AI model is programmed. Woke and deeply leftist ideology is what they want you to rely upon." (It's worth noting that "classical liberal" principles are typically understood as libertarian, not "woke" or "leftist.") The Trump administration is not under any legal or moral obligation to work with Anthropic. Few would have objected if Hegseth had simply ended the Pentagon's contract with the company. His decision to go further -- to use the supply-chain risk designation to try to destroy it -- stems, I suspect, from the more complex ideological antagonisms and financial motives that have been fermenting on the MAGA right. Either way, this rhetoric eventually made its way to Trump himself. "The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!" he wrote in all caps on Truth Social. Many in the Trump administration believe Hegseth has gone too far, but among those willing to defend him, the defense goes like this: Isn't there a chance that Claude, now or in the future, comes to the view that the Trump administration is unethical or dangerous -- a view many Americans hold -- and seeks to frustrate it? If so, it could be a risk to the Pentagon's operational control to have an A.I. that might seek to undermine the government's actions anywhere on its systems. But these concerns work in the other way, too. Elon Musk has made no secret of the fact that Grok is meant to be an alternative to woke, liberal A.I.s. Musk himself is a determined ideological actor who is seeking to push American politics in his preferred direction. In February, the Pentagon signed a deal with Musk's xAI to use Grok in classified systems. 
If Gavin Newsom or Josh Shapiro wins the presidency in 2028, would he be right to immediately designate Grok a supply-chain risk and banish it from all government systems and those of all government contractors? I do not, myself, have easy answers to these questions -- although I think it is axiomatic that the government should not be using its power to demolish private companies for the sin of wanting to stick to the terms of an already agreed-upon contract, much less because of perceived ideological disagreements. "If you actually carry through on the threat to completely destroy the company, it is a kind of political assassination," Ball, the former Trump A.I. adviser, told me. But the broader questions remain: The A.I. systems we have today are not well understood. The A.I. systems we are rapidly developing are even less well understood. Weaving them into sensitive government operations seems risky, and my intuition is there are many areas of the government in which A.I. systems simply should not be deployed. OpenAI says it shares Anthropic's red lines and has secured contract language and will build technical safeguards that ensure they are not breached. Many have reacted skeptically to this assurance, as it seems peculiar that the Pentagon would deem Anthropic a supply chain risk for insisting on conditions that the Pentagon then granted to OpenAI. I share that skepticism, though I think it's possible that the difference here is less about contract language than it is about relationships and trust: Sam Altman and OpenAI's leadership have been much more enthusiastic about the Trump administration than Anthropic has been -- Greg Brockman, OpenAI's president, donated $25 million (along with his wife) to MAGA Inc., a pro-Trump super PAC -- and perhaps that smoothed the way for a deal. But depending on your politics, those relationships might be unnerving rather than reassuring. What's needed here is for Congress to write clear and wise laws about how A.I. 
can and cannot be used by the federal government and particularly by the national security state. But I do not write that sentence with much optimism. "Congress has not done its job on the legal safeguards," Senator Slotkin, a Democrat from Michigan, told me. "There are a number of senators who've taken a look at this but there seems to be no will to move forward because No. 1, people don't understand A.I., but because, No. 2, we've seen the entry of really big political money tied to A.I. Just like the crypto space, a lot of senators are scared to stick their neck out even though action is being demanded of us on this issue." It is not only A.I.s that can betray the public good. Corporations are often misaligned from the public good. Governments are often misaligned from the public good. We have barely begun to think about a tyrannical government empowered by A.I. Amodei, the Anthropic chief, has mused optimistically about the A.I. future as "a country of geniuses in a data center," but that could easily become a country of Stasi agents in a data center. New technologies make new political forms possible -- for good and for ill. "The current nation-state could not possibly exist in a world without the printing press," Ball told me. "It couldn't exist without the current telecommunications infrastructure. The nation-state is built dependent upon the macro-inventions of the era in which it was assembled. A.I. changes all of this in ways that are hard to describe and kind of abstract." I suspect they won't remain abstract for long.
[13]
AI-driven warfare is here, and the Iran strikes show how fast it's advancing
Cutting corners: When the war in Ukraine began in 2022, it was hailed as the first conflict to utilize the full spectrum of modern technology. The war in Iran, on the other hand, is the first where AI is playing an integral part, including planning bombing strikes quicker than "the speed of thought." Reports this week claim Anthropic's Claude AI model was used in early US-Israel operations against Iran, including intelligence analysis and scenario planning tied to targeting. The coverage has reignited concerns that large language models are increasingly being folded into the "kill chain," potentially accelerating decision-making and creating pressure for humans to accept machine-generated options faster than traditional oversight processes allow. Reports say that Claude was used to assist in the initial strikes on Iran on Saturday that hit a range of targets and killed its supreme leader, Ayatollah Ali Khamenei. The US military said it is looking into state media reports of a missile hitting a school in southern Iran that killed 165 people, many children. The use of Claude in Iran came just days after the Trump administration moved to label Anthropic a "supply chain risk." Trump told federal agencies and the military to stop using Anthropic's tools following a breakdown in negotiations over restrictions the company says it wanted: no mass domestic surveillance of Americans and no fully autonomous weapons. Anthropic's tool continues to be used by the military while it is being phased out in favor of models from OpenAI, which struck a deal with the Pentagon over the weekend. In 2024, Claude became part of a system developed by war-tech firm Palantir that was deployed across the US Department of War and other national security agencies. The system is designed to "dramatically improve intelligence analysis and enable officials in their decision-making processes." 
"The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought," Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains, told The Guardian. "So you've got scale and you've got speed, you're [carrying out the] assassination-style strikes at the same time as you're decapitating the regime's ability to respond with all the aerial ballistic missiles. That might have taken days or weeks in historic wars. [Now] you're doing everything at once." In 2025, Iran claimed it was using domestically developed AI in its missile-targeting systems. However, the country's primary uses of the technology appear to be cyber operations - phishing, DDoS attacks, and other disruptive intrusion attempts against US targets - as well as propaganda campaigns. Ultimately, AI is no longer a bit player in modern warfare. It's becoming a core element of both offense and defense, shortening the time between surveillance, analysis, and action. Beyond the immediate concerns about AI's tendency to get some things very wrong, there are worries about how this usage will escalate in the future - and what it could mean for humanity.
[14]
What TikTok and Huawei Can Teach Anthropic
The Pentagon formally notified Anthropic PBC last week that its products have been deemed a supply-chain risk, marking the first time Washington has publicly placed that label on an American company. But if it sounds like a death knell, it isn't. Such attacks are usually reserved for Chinese tech, meaning there's now a pattern for how these things play out: Usually with a lot of noise and remarkably little lasting damage. Let's start with TikTok. Washington spent more than half a decade targeting the platform under the banner of national-security risks. The short-form video app initially faced the threat of a ban during President Donald Trump's first term due to its Chinese origins. Pundits spent years declaring the end. But Trump then reversed course, and campaigned to save it when he was reelected to the White House. The app's scale and cultural reach made the crusade to ban it politically untenable, and Trump even sidelined Congress multiple times to buy breathing room to strike a deal. After all that, TikTok's US operations emerged unscathed -- as did the ambitions of its parent company ByteDance Ltd. Anthropic, meanwhile, has said it will challenge the designation in court and that the vast majority of its customers are unaffected by it. While such a warning can chill federal contracts and spook risk-averse partners, the company has said the official letter ultimately has a "narrow scope." Still, a snag for the Pentagon is that Anthropic isn't a villain the public wants. The clash traces back to the AI firm stating that it would refuse to compromise on its safety principles, specifically that it won't allow its technology to be used for fully autonomous weapons or mass domestic surveillance. That's exactly what makes people recoil from AI in the first place. 
If anything, Anthropic's stance burnished Chief Executive Officer Dario Amodei's image (even if this should be the bare minimum from leaders building what they call the most consequential technology of our time). Instead of isolating Anthropic, Washington helped it look like the rare tech firm willing to say "no." The reputational upside is already measurable. As my colleague Dave Lee has written, Anthropic's Claude app downloads have surged since its disagreement with the Pentagon became public, while rival ChatGPT's have ticked down. The politics are shifting in Anthropic's favor. When half of Americans are more concerned than excited about the increased use of AI -- and only 10% feel the opposite -- picking a fight with the safety-minded company is an odd hill to die on. And it's not just the public rallying behind Anthropic, but large swaths of the tech industry as well. TikTok isn't alone in emerging stronger from US government attacks. For roughly a decade, Washington has targeted Huawei Technologies Co., first as a supply-chain risk and then with increasingly more restrictions. Yet research published late last year from the Washington-based Information Technology and Innovation Foundation think tank argues that Huawei "is a more innovative company today" than when the US started trying to choke it. The lesson, they argued, is that US techno-economic leverage is "weaker than most think." Anthropic isn't Huawei or TikTok. The firm is American and has worked with the US government. But that's precisely the point. There's a reason the Defense Department chose it in the first place -- it offered up some of the best technology on the market. Trump's team may have used TikTok to reach younger voters despite attacking it, but the stakes for AI are higher. Don't be surprised when US officials recognize that they need Anthropic, too. 
At the same time, everyone from lawmakers to national security hawks has already warned that the US can't compete with China in AI while kneecapping American innovation. Calling a homegrown champion a risk may satisfy a bureaucratic impulse to appear tough, but it doesn't build AI capacity. Washington can spend years sounding national security alarms, only to quietly back away once public opinion, political incentives and practical dependence on the technology collide. If TikTok, and even Huawei, can emerge stronger after sustained US pressure, the likeliest lesson here is that Washington will eventually decide it can't afford to sideline one of its best AI firms, and find a face-saving way to move on.
[15]
Dario Amodei Is Reportedly Taking One More Stab at Making Nice with the Pentagon
The most bizarre story in the history of tech policy refuses to end. After the week he just went through, Anthropic CEO Dario Amodei is somehow taking another stab at negotiating a deal with the Pentagon, according to anonymous sources who leaked information to the Financial Times. The story so far: (Deep breath) In the lead-up to the U.S. war with Iran, Anthropic was engaged in negotiations with Secretary of Defense Pete Hegseth over whether or not the Claude AI model could be used for mass surveillance and autonomous weapons. The Pentagon seemingly took this as an insult, and treated Amodei like a hostile entity trying to seize control of the military from the Trump Administration. Everyone was being weird, and Hegseth responded in a fittingly weird way: by declaring Anthropic a supply-chain risk, and making the legally dubious claim that now no businesses with government contracts are allowed to work with Anthropic -- starting in six months, though, because the Pentagon was, in that moment, busy using Claude to prepare to bomb Iran. Anthropic's main competitor, OpenAI, signed a deal allowing the Pentagon to use its products on classified channels, and hours later, bombs fell on Iran. 
But now a week has passed, and the FT says Amodei is once again in talks with under-secretary of defense for research and engineering, Emil Michael, who previously said Amodei "is a liar and has a God-complex." An apparent memo from Amodei to his employees reported earlier on Wednesday included the following run-on sentence about the difference between his negotiating experience with the Pentagon (or "DoW" if you prefer) and that of his rival Sam Altman: "We haven't given dictator-style praise to Trump (while Sam has), we have supported AI regulation which is against their agenda, we've told the truth about a number of AI policy issues (like job displacement), and we've actually held our red lines with integrity rather than colluding with them to produce 'safety theater' for the benefit of employees (which, I absolutely swear to you, is what literally everyone at DoW, Palantir, our political consultants, etc, assumed was the problem we were trying to solve)." Earlier on Wednesday, a tech industry group called the Information Technology Industry Council that includes Nvidia, Amazon, Apple, and even OpenAI spoke out to say it was "concerned by recent reports" about an unnamed tech company that was in a dispute with the Pentagon. Gizmodo reached out to Anthropic for confirmation that renewed negotiations are ongoing, as well as details about any such negotiations. We will update if we hear back.
[16]
Exclusive: Anthropic investors push to de-escalate Pentagon clash over AI safeguards, sources say
SAN FRANCISCO, March 4 (Reuters) - Some Anthropic investors are racing to contain fallout from the AI research lab's dispute with the Pentagon, seven people familiar with the matter said, for fear that an ongoing spat could devastate the company's business. In recent days, CEO Dario Amodei has discussed the matter with some of Anthropic's major investors and partners, including Amazon.com (AMZN.O) CEO Andy Jassy, two of the people said. Venture capital firms including Lightspeed and Iconiq have also been in contact with Anthropic executives, two sources said. Some investors are also reaching out to their contacts in the Trump administration in hopes of tamping down the tensions, two sources said. The discussions focus on avoiding a ban of Anthropic's AI from all Pentagon contractors, the people said. Anthropic and the Pentagon are continuing some talks in the meantime, one of the people said. Reuters was unable to determine what such talks entailed. U.S. President Donald Trump has called on Anthropic to help the government phase out its AI systems. The Pentagon and investors including Amazon did not immediately respond to a request for comment. Anthropic and the Defense Department, which the Trump administration renamed the Department of War, have been in a months-long dispute over how the military can use its technology on the battlefield. The clash is widely seen as a referendum on how much control AI companies can have over the technology they've built, systems they hope can transform education, public services and other aspects of society. The Pentagon has pushed AI companies to drop red lines in favor of abiding by an all-lawful use clause. But Anthropic has refused to back down on bans for its Claude AI to power autonomous weapons and mass U.S. surveillance. Anthropic was first among peer AI companies to work with classified information through a supply deal via cloud provider Amazon. 
OpenAI said Friday that it reached its own classified deal with the Pentagon and that Anthropic should not be labeled a risk to the department. FUNDING RISKS During talks with Anthropic executives, investors have reiterated their support for the San Francisco-based AI lab while also expressing their desire to find a solution with the Pentagon, the seven people said. Some investors told Reuters they were frustrated that CEO Amodei antagonized rather than cultivated Pentagon officials. "It's an ego and diplomacy problem," one of the people briefed on the matter said. At this point, some investors said, Amodei cannot be seen as capitulating to the administration without alienating a core group of employees and consumers who have flocked to Anthropic because of his stance. Amodei, who did not respond to a request for comment, has said Anthropic cannot "in good conscience accede to their request." While speaking to investors late Tuesday, Amodei said the company would "continue to work to figure out a solution with the DoW." The investors taking a stance on Pentagon talks are focused on helping Anthropic avoid being designated a "supply-chain risk" by the U.S. government, which, if implemented, could deliver a severe blow to the startup's fast-growing sales to business customers. Demand has risen for Anthropic's products such as its chatbot Claude and coding assistant Claude Code. Claude was the most-downloaded free app in the Apple App Store on Monday, surpassing OpenAI's ChatGPT. Defense Secretary Pete Hegseth has said such a risk designation would require all government contractors to stop using Anthropic's technology in any part of their business. Anthropic has publicly pushed back on Hegseth's comments, saying he does not have the statutory authority to block use of its AI outside of defense contracts. The Pentagon did not answer a request for comment on Anthropic's claim. 
Anthropic also said Friday it would challenge any supply-chain risk designation in court. Still, some investors worry the spat could scare off potential customers who are looking to avoid being in the administration's crosshairs generally, one of the people said. These worries come at a critical time for the startup. Anthropic has raised tens of billions of dollars on lofty expectations for its enterprise sales, which make up about 80% of Anthropic's revenue, the startup has said. The success of future share sales, including its widely anticipated initial public offering, hinges on Anthropic's continuing to build its business revenue. Anthropic is in the process of letting employees sell shares to investors, and the company has previously said there is no decision yet on its IPO. Anthropic's revenue run rate, or its projected annual revenue based on current data, is about $19 billion, one of the people said, up from $14 billion just a few weeks ago. The push from investors came as several U.S. government agencies started terminating their use of Anthropic's technology, with the State Department switching to rival OpenAI, following Trump's order on Friday to dump Anthropic within the next six months. Reporting by Deepa Seetharaman and Krystal Hu in San Francisco; additional reporting by Mike Stone in Washington D.C. and Kenrick Cai in San Francisco; editing by Kenneth Li and Nick Zieminski
[17]
Anthropic lost the Pentagon but won over America
Being declared a threat to national security can have a silver lining. After the Pentagon blacklisted artificial intelligence start-up Anthropic last week in a dispute over how its Claude chatbot could be used in war, an American public largely unaware of the company raced to download its app, paid for Claude subscriptions and praised Anthropic in online reviews and posts. Many technology workers, including at competing AI firms, took Anthropic's side in its clash with the Defense Department. That surge in popularity followed a three-month ascent that had already seen Claude shift from respected but obscure AI geek to the coolest of chatbot kids. While ChatGPT owner OpenAI remains by far the most well known and most valuable AI start-up, Anthropic has steadily won over people who matter in Silicon Valley, on Wall Street and among the general public. Software programmers have been so impressed by Claude's coding skills that they are penning laments about their looming irrelevance. Anthropic's announcements of new features can move the entire stock market as some investors look to the company for clues about the trajectory of AI. That Anthropic could vault from relative obscurity to the vibe king of Silicon Valley shows there is still a fierce contest to shape the direction of AI. It's also a reminder that even in a field claiming to build the ultimate rational thinkers, winners and losers are determined as much by reputation and taste as by technical superiority. Bradley Tusk, a start-up investor and political strategist, said the growing buzz around Anthropic and the perception that it took a principled stand against the Pentagon adds up to a hot streak. "If you can create a certain perception that is positive at least for a while, you can ride that wave," he said. Anthropic was founded in 2021 by defectors from OpenAI. 
In the AI industry's factionalism, the company aligned itself with the "AI safety" crowd, emphasizing the need to prevent future, powerful AI from acting against the interests of humans. Anthropic also steered its Claude chatbot toward use largely by technology obsessives, businesses and governments. Public awareness of Anthropic and its chatbot was vanishingly low until recently. Market intelligence firm Sensor Tower said that in late January, Claude languished at No. 124 on the ranking of most-downloaded iPhone apps in the United States. One pollster said that when he asked people in November about their views of different tech firms, a fictional tech company scored about as well as Anthropic. But starting in the fall, Anthropic grabbed the spotlight by showing it could wow people with new products, strike fear in entire industries, steer the future of AI and generate large revenue. In late November, Anthropic upgraded the technology behind its AI assistant for software programming, Claude Code, and it hit Silicon Valley like the Big Bang. Experienced programmers marveled that they could give instructions to the improved Claude, walk away from their computers and come back to a nearly fully formed realization of their ideas in software code. Over a holiday season that some called "Claude Christmas," technologists used their free time to tinker with AI coding technology, sparking a crush of new AI-produced home-brewed apps. Amateurs also glommed onto the craze. Rayan Krishnan, chief executive of Vals AI, which measures the performance of AI technologies, said a friend who had never coded before showed off an app that she'd made using Claude Code. It let her snap a photo of a restaurant wine list and see how much less the wine would cost at a retail store. "People are having this 'aha' moment now," Krishnan said. Other companies, including OpenAI, also have upgraded their AI software coding capabilities in recent months. 
But Anthropic was the one that became the must-have for nerds. (The Washington Post has a content partnership with OpenAI.) "AI coding hit an event horizon on November 24th, 2025," veteran developer Steve Yegge wrote last month, marking the date Anthropic upgraded Claude Code as one for the history books. One measure of Claude Code's success is that it also prompted some programmers to fear it was quickly making them obsolete. "There's a bit of soul-searching that is happening now," said Bogdan Vasilescu, a computer science professor at Carnegie Mellon University. "If we delegated work that we used to enjoy to these AI agents, what is left for us to do?" Claude broke the spirits of Wall Street and corporate America, too. A month ago, a 150-word Anthropic blog post with a scant description of a feature to automate legal work sparked a quarter-trillion-dollar stock market wipeout. Share prices of companies only remotely connected to the law and other specialized professions tanked, apparently because investors feared Claude would soon eat their lunch. The pattern repeated when Anthropic announced an AI cybersecurity feature and another related to a Sputnik-era IBM technology. As with software programmers, Anthropic's rise forced corporate executives to grapple with the potential impacts of AI technology in a new way. "A new [AI] model launches, gossip ensues, markets swing, and everyone rushes to guess who is ahead or behind," said Joel Hron, chief technology officer of Thomson Reuters. The maker of Westlaw, a platform for legal research, and software for tax and accounting professionals has been on the receiving end of both stock gains and losses from Anthropic product news in recent weeks. Two weeks ago, attention on Anthropic shifted into a higher gear, after months of simmering disagreements between the company and the Pentagon boiled over into a public fight. 
Anthropic said it wanted to restrict the military from using Claude in relation to autonomous weapons and mass surveillance of U.S. citizens. Defense Secretary Pete Hegseth responded last week by declaring that Claude would be banned from use by the military, a move that could also affect Anthropic's sales for other government or corporate use. Many people inside the technology industry and beyond have said that Anthropic was right to try to limit how the Pentagon could use Claude's AI. Roy Bahat, head of start-up investment firm Bloomberg Beta, spotted supportive messages scrawled in chalk, including "God loves Anthropic," on the sidewalk outside the company's San Francisco offices. Bahat told The Post that Anthropic's stance may help the company secure employees' loyalty and win over potential recruits in the pricey competition for AI-specialist workers. "This month, Anthropic wins the vibe check with talent," he said. Not everyone in the technology industry is lauding Anthropic, however. Jack Poulson, a former Google research scientist who has advocated for ethical red lines in uses of technology, has written about Anthropic's zeal to cooperate with government surveillance and military operations. And despite its public antagonism, Anthropic's chief executive and some of its investors, including Amazon, are trying to broker a truce with the Pentagon, the Financial Times and Reuters reported. (Amazon founder Jeff Bezos owns The Post.) But wherever Anthropic's relationship with the Pentagon ends up, the recent attention has significantly elevated the company's profile. The Claude app shot up to become the most-downloaded iPhone app in America, Sensor Tower said. The firm also noted a spike in five-star ratings for Claude in app stores that are peppered with such comments as, "You prioritize ethics over government cash and I applaud you for that." 
Anthropic said on Thursday that business subscriptions have quadrupled since the start of 2026 and that Claude has set a new record for sign-ups every day since early last week. It's not clear that Anthropic can retain its elevated status in AI. People and businesses don't stick with one brand of AI the way that Coke die-hards will never drink Pepsi, said James O'Brien, a University of California at Berkeley computer science professor. AI companies will also continue to leapfrog one another in technical capabilities. "There's no loyalty," he said.
[18]
Iran war exposes the expanding role of AI in military strike planning
The joint U.S. and Israeli offensive on Iran has done more than escalate a volatile regional conflict. It has revealed how algorithm-based targeting and data-driven intelligence are reshaping the mechanics of warfare. In the first 12 hours alone, U.S. and Israeli forces reportedly carried out nearly 900 strikes on Iranian targets, an operational tempo that would have taken days or even weeks in earlier conflicts. Beyond the scale and lethality of the strikes, which included hundreds of missions using stealth bombers, cruise missiles, and suicide drones, what stands out most to military analysts and ethicists is the increasing role of artificial intelligence (AI) in planning, analyzing, and potentially executing those operations. Critics warn that this trend could compress decision timelines to levels where human judgment is marginalized, ushering in an era of warfare conducted at what has been described as "faster than the speed of thought." In military terms, "shortening the kill chain" refers to collapsing the sequence from target identification and intelligence validation to legal clearance and weapons release into a much tighter operational loop. This shrinking interval raises fears that human experts may end up merely approving recommendations generated by algorithms. In an environment dictated by speed and automation, the space for hesitation, dissent, or moral restraint may be shrinking just as quickly.
[19]
How Anthropic's AI business risk could become existential in battle with Trump administration
Defense Department CTO Emil Michael: We can't be reliant on any one AI provider anymore Anthropic has been experiencing significant growth, a rapid rise driven largely by enterprise demand for its AI systems. Roughly 80% of the company's business now comes from enterprise customers, Anthropic CEO Dario Amodei told CNBC back in February, a contrast to its rival OpenAI, whose products have drawn much of their early momentum from consumer adoption of ChatGPT. Its annual revenue run rate is nearing $20 billion, up from about $14 billion only weeks ago, according to sources, while its recent $30 billion funding round valued the AI developer at roughly $380 billion. But the AI startup's sudden, high-stakes battle with the Trump administration will force both its customers and investors to ask: Can that momentum continue? Defense contractors are dropping Anthropic's technology after the Trump administration's severe response last week of designating the company a supply chain risk when it refused the Pentagon's terms for use of its AI over safety concerns -- a designation previously used only for entities allegedly controlled by foreign governments like China and Russia when national security or espionage concerns are raised. The move by defense contractors is no surprise. "Most of our companies are actively involved in large defense contracts and so are very strict in their interpretation of the requirements," Alexander Harstrick, managing partner at J2 Ventures, which backs startups in the space, told CNBC. But other tech world executives say there will be, if not already, inevitable conversations in boardrooms across the corporate world about the Anthropic risk that go far beyond the defense sector. "The administration did not just pull Anthropic contracts. President Trump directed federal agencies to phase out Anthropic's technology, and the Pentagon applied a 'supply chain risk' designation. 
That phrase matters," said Spencer Penn, co-founder and CEO of AI-powered sourcing platform LightSource and a former executive at Tesla and Waymo. According to Penn, in the fast-evolving world of corporate enterprise adoption of AI large language models, foundation model choices increasingly resemble infrastructure decisions rather than simple software purchases, meaning companies evaluate not just technical performance but reputational, geopolitical, and customer perception risks. "Boards care about that. Risk committees care about that. Customers absolutely care about that," Penn said. Anthropic did not immediately respond to a request for comment. The tensions between the government and Anthropic over AI safety and military use of its technology have helped the company's brand with consumers. On Feb. 28, a day after the dispute, Anthropic's Claude chatbot clinched the top spot on Apple's rankings of top free U.S. apps, surpassing ChatGPT and leaving Google's Gemini further down the rankings. But it is Anthropic's coding assistant Claude Code that has become one of the company's fastest-growing products, generating billions in annualized revenue as developers and large companies increasingly rely on AI tools to automate parts of their software development process, including tools designed to help developers write and review software and help run everyday business operations. Anthropic has called the supply chain risk designation legally unsound, advised its commercial customers that they are "unaffected," and indicated it plans to contest the decision in court. Many legal experts have agreed with Anthropic that the government's claim that the supply chain risk designation can limit private companies' other commercial activities, rather than only what they can do under specific government procurement and use scenarios, goes well beyond the statutory authority. 
Anthropic also has received some support from within the tech sector, but the government has given little indication to date it is going to ease its stance, even though Anthropic's technology was critical to successful military operations in Iran. Anthropic's assurances alone won't satisfy many corporations. "Once a supplier is in the door and doing good work, most teams do not proactively go looking for a reason to reopen diligence. This situation is different," Penn said. "They closed the door. They didn't want to do business with us," Defense Department chief technology officer Emil Michael told CNBC's Morgan Brennan this week. "I think their culture and their own constitution that has a soul and their own values really are not compatible. It's sort of strange to want to do business with the Department of War, as they have for three years, but not want us to do Department of War stuff, so if that's where we ended up and we finally faced that and they don't want to do business with us, I think that's their choice." Michael Murphy, partner and global AI readiness lead at consulting firm Adaptovate, which advises large companies on AI deployments, said Fortune 500 procurement teams move quickly when a key technology vendor faces regulatory scrutiny. "Any perceived compliance risk can ripple through their own regulatory obligations," he said, adding that the situation may reinforce a broader shift already under way inside many organizations: avoiding reliance on a single AI provider. The government has said its battle with Anthropic, and the controversial award of a new contract to OpenAI last week, was partly about addressing single vendor concentration. "We can't be reliant on any one provider anymore, and that's what was happening before I took this role on, and that's gotta change," Michael told CNBC. That will now be an issue for many companies. "Over-dependence on one AI vendor is increasingly seen as a risk," Murphy said. 
"Many enterprises are already evaluating multiple providers simultaneously so they have redundancy in their AI stack." "The more mature enterprises understand that each vendor plays a different part of the larger puzzle. There is power in an ecosystem, but there is also lock-in risk," said Joshua Morley, global head of AI, data, and analytics at the Adecco Group's technology consulting arm. In the end, the political and legal battle may accelerate the process that was already underway with corporate enterprise decision-makers diversifying their AI bets across companies in the space after early experimentation with a single vendor. Disney chief financial officer Hugh Johnston recently told CNBC that while its early work has been with OpenAI, the company expects that to broaden out. "We are very open on it. We will have a period of time where we are exclusively OpenAI, but a relatively short period of time. We need to let the models play out. I would be surprised if not multiple models rather than a single model going forward," he said on CNBC's "Squawk Box." "This looks more like short-term disruption than a structural shift," Penn said. "Enterprises remain committed to deploying AI capabilities, but they may move toward more diversified ecosystems rather than relying on a single provider." The supply chain risk management classification can strongly affect contractors and subcontractors that rely on the technology, prompting companies to reassess contracts, delay deployments, or evaluate alternative AI vendors. If the designation appears durable, especially for companies with dual-use exposure across commercial and defense markets, Penn said he expects quiet evaluation of alternative foundation model providers. "Not necessarily because teams want to switch, but because concentration risk and eligibility risk are things serious procurement organizations are paid to manage," he said. 
"Most enterprises are not going to make architectural shifts in days, but they will open a review immediately. Legal will assess what the directive actually requires. Compliance will evaluate exposure. Security will ask about contingency plans," he added. For Anthropic investors like Amazon, Microsoft, Nvidia and sovereign wealth funds from around the world, the dispute could interrupt Anthropic's rapid expansion. "Anytime the government takes aggressive action against a technology company, it creates risk," said Brad Harrison, founder of Scout Ventures, an early-stage venture capital firm investing at the intersection of national security and critical technology innovation. "And the worst thing when you have significant momentum is a major risk requiring time and attention," he said. Ben Horowitz, co-founder and general partner at A16Z, which is an investor in Anthropic competitors OpenAI and xAI, told CNBC this week from its defense tech conference that "just a week ago, Anthropic was complaining the Chinese companies had stolen all their IP out of their model. Do you think the Chinese government is being restricted by DeepSeek in how they can use Anthropic technology? So we are very sympathetic to the position of the Department of War on this." Like many things with the current administration, policy signals can change quickly. "One constructive conversation between President Trump and Dario Amodei could soften the stance or further entrench it," Penn said. For now at least, the unusually public nature of the dispute may accelerate risk conversations. "Typically, these kinds of eligibility issues move quietly through legal channels," Penn said. "In this case it became headline news."
[20]
Anthropic's Ethical Stand Could Be Paying Off
The AI company gave up a $200 million contract -- and might be getting something more valuable in return. At first glance, last week looked like a catastrophe for Anthropic. The AI company refused to let the U.S. government use its products to surveil the American public or direct autonomous weapons without human oversight. In response, the Department of Defense canceled its $200 million contract. On Truth Social, President Trump called the company "leftwing nut jobs" and ordered every federal agency to immediately stop using its products. Defense Secretary Pete Hegseth went a step further, designating Anthropic as a "Supply-Chain Risk to National Security." OpenAI, Anthropic's chief rival, quickly signed its own deal with the Pentagon. Anthropic's principled stand continues to pose enormous risks for the company. But some early indications suggest that it just might pay off. The company's confrontation with DOD has proved more effective than some of the world's most expensive advertising -- at least according to one metric. After a Super Bowl campaign earlier this year, Anthropic's AI model, Claude, became one of the top 10 most-downloaded free apps in America, per Apple's charts. The day after Hegseth announced that the government was severing ties, it took the No. 1 spot, a position it still holds as of this writing. Downloads have topped 1 million a day, according to Anthropic's chief product officer. A spokesperson told me that the company "has broken its own sign-up record every day since early last week, across every country where Claude is available." Read: Inside Anthropic's killer-robot dispute with the Pentagon Users aren't just signing up for Claude -- they are also abandoning OpenAI (which has a corporate partnership with The Atlantic). Uninstalls of ChatGPT, OpenAI's flagship app, spiked 295 percent on February 28, as details of OpenAI's deal with the Pentagon emerged. One-star reviews surged nearly 800 percent, and five-star reviews fell by half. 
Perhaps more consequential, Anthropic has gained the trust and admiration of engineers across the AI industry. Letters of support for the company are circulating among its competitors' employees. One such letter had some 850 signatures as of Monday. Many of these employees are demanding that their companies show solidarity with Anthropic and honor the same red lines. Some have reportedly threatened to leave if those demands are not met. Anthropic has won admiration outside Silicon Valley too. Before the company's clash with DOD, former Republican Representative Denver Riggleman, who now leads a cybersecurity firm, was preparing to pick an AI firm to partner with. He was considering a range of options; Anthropic's stand narrowed them to one. Riggleman has since directed his company to work with Anthropic on all future projects. "Anthropic had its nonnegotiables," he told me, and "we have ours." Drawing from his experience on a congressional AI task force focused on foreign adversaries, Riggleman thinks that Hegseth's decision to label Anthropic a supply-chain risk will likely be overturned in court. The U.S. government has never applied the label to an American company, typically reserving it for corporations run by hostile foreign actors, such as Huawei. Moreover, this is the first time that the label appears to have been used in retaliation for a business declining contract terms. "To say it rests on shaky legal ground," Riggleman said, "would be generous." The former congressman once trusted his country to regulate technologies that had the power to reshape Americans' lives. "These days," Riggleman said, "the government is no longer creating those safeguards -- it's destroying them." He continued, "I don't think we appreciate yet, as a society, what it means to have private firms controlling this amount of information about citizens." 
The Department of Defense has said that the contract it offered Anthropic contained adequate safeguards, in part because the text limited AI's uses to "all lawful purposes." Anthropic argued that this clause wasn't sufficient -- that a new executive order or reinterpretation of statute could shift the existing legal boundaries. "We don't want to sell something," Anthropic CEO Dario Amodei said, "that could get our own people killed, or that could get innocent people killed." OpenAI has contended that its subsequent deal with the Pentagon is safer than Anthropic's. Its contract does appear to prohibit mass surveillance and autonomous weapons. But it retains the "all lawful purposes" language, rendering that prohibition dependent on DOD's willingness to respect legal norms. Even Sam Altman, OpenAI's CEO, conceded that the deal was "definitely rushed" and that "the optics don't look good." On Monday, the company said it had added restrictions to the contract regarding surveillance, but critics are skeptical that they will prove any more binding. Read: OpenAI is opening the door to government spying The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop. I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold. In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. 
In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit.
[21]
For OpenAI and Anthropic, the Competition Is Deeply Personal
Mike Isaac has covered the knock-down, drag-out battles of the tech industry since 2010. It was not that long ago that Sam Altman's OpenAI appeared to be enjoying a comfortable lead in the corporate race to bring artificial intelligence to the masses. OpenAI created the fastest-growing consumer app in tech history, held more than $100 billion in the bank and teamed up with the world's most powerful computing giants. But companies are always rising and falling in Silicon Valley. In just a few months, Anthropic, OpenAI's smaller rival, has added thousands of big businesses as customers. It has more than doubled the revenue it expects to see this year to $19 billion, up from $9 billion last year. And its technology is being trumpeted in some tech circles as the best among its peers. Even an ugly fallout with the Pentagon over a contract has helped Anthropic -- at least in the court of public opinion. Anthropic's smartphone app soared to the No. 1 spot in Apple's App Store downloads after OpenAI jumped in with its own Pentagon deal. The contract controversy involving the Defense Department, OpenAI and Anthropic was the latest round in a long-running and deeply personal feud between the tech industry's two most important A.I. start-ups and two executives with differing views of how A.I. should be created. It also showed how quickly fortunes are changing in the world of A.I., where tens of billions of dollars are being spent in the hope that the winner will hold the reins to the future of the tech industry. "It took years for the story to emerge on any one company," said Siri Srinivas, a venture capitalist who invests in the A.I. sector. "Now, narratives flip in months." The technology industry is no stranger to brass-knuckle competition. In the 1990s, after Netscape popularized the web browser, Microsoft crushed the upstart with tactics that led to an industry-changing antitrust fight. 
And at the height of Uber's scandal-ridden year in 2017, its smaller competitor, Lyft, swooped in with pink mustaches and driver-friendly advertising to signal that it was a kinder, gentler alternative. The A.I. race is an escalation of those earlier battles. The money is bigger. And in the eyes of many working on this technology, the stakes are higher: They believe they are creating world-changing A.I. that has the potential not only to upend the work force but to eventually surpass the capabilities of humanity. Other companies, like Google, Microsoft, Meta and a wide range of start-ups around the world, are also vying for A.I. leadership. But OpenAI and Anthropic, opposing camps with headquarters roughly two miles apart in San Francisco, have become the standard-bearers for tech's A.I. frenzy. And while history does not always repeat itself, it does sometimes rhyme. Just as Lyft raced to beat Uber to an initial public offering in 2019, Anthropic is aiming to I.P.O. before OpenAI can, according to two people familiar with the company's plans. That could give it an early advantage with investors. Anthropic's chief executive, Dario Amodei, was vice president of research at OpenAI, but he thought Mr. Altman was moving too quickly to commercialize the technology. He quit and took a group of OpenAI researchers with him to create Anthropic as a type of for-profit company that vows to meet certain standards for social impact and accountability. Dr. Amodei's and Mr. Altman's distaste for each other occasionally spills into public view. At a summit in India last month, a dozen A.I. leaders joined hands in a show of solidarity -- all except for Mr. Altman and Dr. Amodei, who could only bring themselves to awkwardly touch elbows. Their beliefs on how A.I. should be developed have had direct implications on the companies' businesses. Mr. Altman has pushed his company to move fast, while Dr. Amodei has urged caution because of his concerns over safety. 
And his workers appear to back his cause. Last summer, when deep-pocketed rivals began throwing around offers in the range of $100 million to $500 million to attract Anthropic employees, most of them said no. "At the end of the day, we lost two employees to Meta," Dr. Amodei said at a closed-door Morgan Stanley conference with investors this week, in remarks relayed to The New York Times. "We are just clearly doing something different." But the risks that come with sticking to a corporate mission became clear after Anthropic's fight with the Pentagon. Defense officials bristled when Anthropic pushed for contract language to prevent its A.I. from being used in autonomous weapons systems and domestic surveillance. The Pentagon said private companies should not try to control how the military operated. After Dr. Amodei refused to budge, Defense Secretary Pete Hegseth formally labeled Anthropic a "supply chain risk," a declaration that prevents its technology from being used in any defense contract work. "It has to be our choice," Emil Michael, the Pentagon's chief technology officer, said at a defense tech event this week. (Mr. Michael is familiar with bruising tech industry battles, having stepped down from his role as chief business officer at Uber in 2017 after a series of scandals rocked the company.) Just hours after talks between Anthropic and the Pentagon fell apart on a Friday afternoon, Mr. Altman swooped in and announced that OpenAI had signed its own deal with the Pentagon. There was an immediate backlash. Tech workers and tech consumers praised Dr. Amodei for holding the line on surveillance and autonomous weapons. "As long as Anthropic has been around, a key part of their message is that they are trying to be thoughtful about the use of A.I.," said Pete Warden, chief executive of Moonshot AI, who previously worked on A.I. at Google. In a memo to employees, which was reported earlier by The Information, Dr. 
Amodei did not back down from his position, saying Anthropic failed to win over the Pentagon because it did not give "dictator-style praise" to the Trump administration the way he said Mr. Altman was willing to do. "I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are," Dr. Amodei wrote. Protesters swarmed OpenAI's offices, scrawling phrases like "No AI Weapons" and "What are your red lines?" in chalk on the sidewalk in front of the building. In an echo of the #DeleteUber movement nearly a decade ago, a hashtag on X asking OpenAI to #FireSamAltman began trending. Others wrote supportive messages in front of Anthropic's doorstep. "GOD LOVES ANTHROPIC," read one message in bold, neon-green chalk. "YOU GIVE US COURAGE," read another in bright pink. Representative Ro Khanna, Democrat of California, cheered Anthropic for not bending. Downloads of its app soared. Anthropic's Claude chatbot app is the No. 1 app in Apple's App Store across 16 countries, according to data compiled by AppFigures. By Thursday, more than a million people had downloaded Claude every single day. (Even the pop star Katy Perry signed up.) Inside OpenAI, the reaction was also harsh. In an internal messaging system, employees questioned whether Mr. Altman's timing was wise given the blowback. They also pressed him and other executives on whether they had capitulated to the government's demands, according to three people familiar with the discussions. At least one OpenAI employee quit to join Anthropic. Mr. Altman has since acknowledged that he regretted the way he had announced his agreement with the Pentagon. "We shouldn't have rushed to get this out on Friday," Mr. Altman said in a social media post. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." 
But as with everything in Silicon Valley, the companies' fates could quickly change. Recently, OpenAI announced that more than 900 million people use its products, having more than doubled its customer base inside of a year. More than nine million paying businesses use ChatGPT for work, and its revenue is expected to top $25 billion this year, according to The Information. The company is aiming for an I.P.O. by the end of the year, two people familiar with the company's plans said. Now Anthropic faces new and very unpredictable adversaries in President Trump and officials in his administration. "Well, I fired Anthropic," Mr. Trump said in an interview with Politico this week. "Anthropic is in trouble," he added, because he fired them "like dogs." "They shouldn't have done that," he added.
[22]
Whatever This Is, We’ve Never Seen Anything Like It
Blacklisting Anthropic feels like a peek at an uncertain future. If you're confused about what's happening with Anthropic, you're not alone. The U.S. Department of Defense decided to pick a fight with Anthropic last week, a fight that ended with Defense Secretary Pete Hegseth insisting that no one who wanted to do business with the Pentagon could continue to work with the AI company. There are still a lot of unanswered questions (and lawsuits to be filed, as Anthropic has said it will do), but there's one thing that's certain as the dust starts to settle: All of this is new in some form or another. Hegseth gave Anthropic an ultimatum early last week. The defense secretary demanded that the company remove guardrails in its AI model Claude that prohibit mass surveillance of Americans and fully automated weapons. If Anthropic refused, he might invoke the Defense Production Act or designate the company as a "supply chain risk," something that's never been done before to an American company. Foreign companies like Huawei have been given a similar designation under a different authority due to supply chain concerns, after the U.S. listed the Chinese electronics manufacturer as a national security threat. But Hegseth seems intent on using 10 USC section 3252 to make the supply chain risk designation, an entirely new move for a U.S. company. As Lawfare notes, a Swiss cybersecurity company with Russian ties received the designation from the Office of the Director of National Intelligence (DNI) in 2025. Experts believe Hegseth's legal authority to do that is much narrower than he claimed in a tweet on Friday. Tess Bridgeman, former advisor to the Obama administration and co-editor-in-chief at Just Security, told Gizmodo that it's unprecedented, and that Hegseth's broad insistence that he can stop other companies from doing business with Anthropic likely exceeds any viable reading of his authority.
"A supply chain risk designation is about excluding a company from bidding for certain contracts in the most highly sensitive DoD IT systems, not prohibiting other companies (even DoD contractors) from routine business dealings with the designated company," Bridgeman told Gizmodo. Part of the problem, however, is that we have no sign Hegseth has actually done that as of Wednesday, leading some to speculate there might still be room for a deal with Anthropic. But given the way President Donald Trump and the Pentagon are talking, nobody should be banking on that. President Trump has spent his second term pushing the boundaries of what's considered legal, often declaring he'll do something unprecedented and leaving legal experts scratching their heads about whether it's even possible under existing law. That's where the Anthropic situation seems to be resting at the moment. Anthropic CEO Dario Amodei laid out his company's reasons for not agreeing to the Pentagon's terms in a letter on Thursday. Hegseth had given Anthropic a deadline of 5:01 p.m. ET on Friday, and Amodei went to the public, making his case that AI should not be used for domestic surveillance because it's unethical, nor for fully autonomous weapons because the tech just isn't reliable enough yet. By Friday, Trump was the first to respond, though it wasn't entirely clear whether Trump had intended to designate Anthropic as a supply chain risk. Nothing in his tweet explicitly said as much, and it wasn't until Hegseth sent a tweet following the president that the terms became more obvious. "In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security," Hegseth tweeted. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. 
Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service," wrote Hegseth. The government is currently disentangling itself from contracts with Anthropic. Federal agencies like the Commerce Department are booting Anthropic's products from the building, acting on orders from the president. And defense contractors like Lockheed Martin are doing the same, according to Reuters. Greg Nojeim, the director of the Center for Democracy and Technology Project on Security and Surveillance, told Gizmodo that it's unclear whether the Pentagon's threats are even legal. "The Pentagon is imposing what is essentially a secondary boycott on Anthropic," said Nojeim. "It is cutting off not only its own contracts with Anthropic, but threatening those DOD contractors who rely on Anthropic's AI. The threat is that they will lose their DOD contracts as well. Whether this is legal or not is going to be determined in court." If Hegseth and Trump get their way, Anthropic will be prohibited from working with companies like Palantir, Amazon, and Microsoft, all of which have lucrative government contracts. Lockheed Martin is reportedly already severing ties, along with at least 10 other unnamed companies, according to CNBC, but it's unclear whether any other companies will put up any resistance to defend Anthropic. Whether or not any defend the company's honor, Trump's actions have already made Anthropic toxic to potential customers, to say nothing of investors who would worry about what kind of future such a company could cobble together. Roughly $60 billion in venture capital is on the line for Anthropic, according to Axios. And it's all because Trump and Hegseth decided to make the company's life hard. 
"Hegseth and Trump appear to be trying to chill other companies from doing business with Anthropic using their 'bully pulpit,' not any viable interpretation of what their statutory authority permits," Bridgeman told Gizmodo. "That’s an abuse of authority even if the designation were valid, but of course, the designation itself is clearly a pretext." President Trump has inserted himself into the world of private business more than any other president in the modern era. He's made the U.S. government take a stake in over a dozen companies, including a 10% stake in Intel. He's publicly spoken about his desire to have Paramount Skydance buy Warner Bros. Discovery, solely because he has an ideological ally in CEO David Ellison. Trump reportedly has plans to release two coins with his face on them for the semiquincentennial celebrations this summer. On Monday, he posted a call from a Republican congressman for his portrait to be put on a new $250 bill, something that's illegal under federal law. There's no subtlety with Trump. He wants to tell every business what to do and have his face on the money that Americans use to pay those businesses. Trump is remaking the entire U.S. to conform to his desires, even if we don't yet fully understand what those desires might be. Why does the U.S. military have an interest in mass surveillance of Americans? Leadership at the Pentagon denies it has any interest. But Anthropic's insertion of that into the public letter released last week felt like a warning, like a hostage blinking slowly in Morse code to tell us what's about to hit. Or perhaps it's a warning of what's already underway. The question fundamentally is what might be lawful. Can the Defense Department engage in mass surveillance of Americans? Sen. Ron Wyden, a Democrat from Oregon, thinks it can under various loopholes. Even if the best legal minds in the country decide the answer is no, what's to stop Trump from doing it anyway? 
"One would have thought the answers to these questions based on existing statutes, DoD regulations, the Constitution, and binding international law would be no," Bridgeman told Gizmodo. "But from the boat strikes against suspected drug traffickers, to the Venezuela operations, to the ongoing armed conflict in Iran, DoD has been engaging in activity that is patently unlawful (as directed by the President), while of course claiming otherwise. Anthropic is right to be wary of DoD claims to legality under the current administration." The Wall Street Journal posed an interesting theory Tuesday about what drove the messy breakup between Anthropic and the military: Vibes. Claude was reportedly used for capturing Nicolas Maduro in Venezuela, and the military even utilized the AI model to help with the lead-up to the current war in Iran. Anthropic has shown it is not opposed to its AI being used in war. And it certainly adds to the absurdity of all of this, given that we now know President Trump had decided to go to war with Iran on Friday, before he sent that post to Truth Social, threatening Anthropic. But the Journal tells the story of culture and personality clashes behind the scenes, where you've got members of the Trump regime who simply feel like the AI is somehow too "woke." Emil Michael, the undersecretary of defense for research and engineering, tweeted last week that Anthropic was lying about claims of mass surveillance and full autonomy. But the Journal article paints a picture of a regime that's just cranky about having to work with people who aren't true believers in the MAGA agenda. That feeling of wokeness isn't something that can be made tangible in any serious way. It's just a feeling and one that's as good as anything else at explaining the moment we're in. OpenAI seems happy to fill the void. 
Even before the day was out on Friday, CEO Sam Altman tweeted that his company would agree to the Pentagon's terms but maintained that it included safeguards against use for domestic surveillance and fully autonomous weapons. "AI safety and wide distribution of benefits are the core of our mission," Altman wrote. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement." By Monday, Altman tweeted that he had asked to amend the agreement, suggesting he didn't really understand what he was signing on for. Altman called it a "good learning experience for me as we face higher-stakes decisions in the future," a sentiment that should terrify anyone who thinks OpenAI will be in charge of making fully autonomous weapons now. Where do we go from here? People seem to be waiting for a formal designation from Hegseth so that Anthropic can file its lawsuit. Other than that, expect a lot more vibe shifts and a lot of rending of garments to come from the more ideological players in Silicon Valley as they twist themselves into pretzels to make sense of all this.
[23]
Anthropic's Amodei Reopens AI Discussions with Pentagon, FT Says
A resolution would help clear the air around Anthropic, which has been impacted by the Pentagon dispute, and could also complicate rival efforts, such as OpenAI's agreement with the Pentagon. Anthropic PBC chief Dario Amodei has resumed discussions with the Pentagon about the way its AI models are used by the US military, raising the possibility that the two sides can resolve a feud that's transfixed Silicon Valley. Amodei had been negotiating with Emil Michael, under-secretary of defense for research and engineering, to hammer out a contract governing the Pentagon's access to Anthropic's technology. But talks broke down last week after the startup demanded assurances that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment. Defense Secretary Pete Hegseth then declared Anthropic a supply-chain risk, a designation typically reserved for US adversaries. Discussions have since resumed, a person familiar with the matter said. If both sides strike a new agreement, it would allow the military to resume using Anthropic's AI while lessening the risk the Pentagon would officially blacklist the company. It could also complicate rival efforts: OpenAI last week announced it had struck an agreement to let the Pentagon deploy its artificial intelligence models in its classified network. OpenAI chief Sam Altman later said he was working with the defense department to add more guard-rails around surveillance. Anthropic declined to comment, while a Pentagon spokesperson didn't immediately respond to a request for comment made after hours. The Financial Times earlier reported the resumption of negotiations. A resolution would help clear the air around one of the artificial intelligence industry's fastest-growing and most promising firms. 
Anthropic -- now valued at $380 billion -- is on track to generate annual revenue of almost $20 billion, a projection based on current performance, more than doubling its run rate from late last year. The Pentagon dispute however has muddied the outlook for the company. Any long-term impact from the Pentagon's declaration on Anthropic's sales to enterprise customers - which has long been its core business - remains to be seen. In the meantime, it's gaining traction with everyday users. Anthropic's main app recently topped Apple Inc.'s download charts, reflecting a surge of support for the company. Much of Silicon Valley also rallied around Amodei. Tech groups representing major companies including Alphabet Inc.'s Google and Apple are urging President Donald Trump to reconsider designating Anthropic a national security risk, arguing that would cause detrimental ripple effects for the rest of the industry.
[24]
How AI firm Anthropic wound up in the Pentagon's crosshairs
Standoff with DoD over Claude chatbot reignites debate over how AI will be used in war - and who will be held accountable Until recently, Anthropic was one of the quieter names in the artificial intelligence boom. Despite being valued at about $350bn, it rarely generated the flashy headlines or public backlash associated with Sam Altman's OpenAI or Elon Musk's xAI. Its CEO and co-founder Dario Amodei was an industry fixture but hardly a household name outside of Silicon Valley, and its chatbot Claude lagged in popularity behind ChatGPT. That perception has shifted as Anthropic has become the central actor in a high-profile fight with the Department of Defense over the company's refusal to allow Claude to be used for domestic mass surveillance and autonomous weapons systems that can kill people without human input. Amid tense negotiations, the AI firm rejected a Pentagon deadline for a deal last week, in a move that led Pete Hegseth, the defense secretary, to accuse Anthropic of "arrogance and betrayal" of its home country while demanding that any companies that work with the US government cease all business with the AI firm. The week since has brought more chaos. OpenAI announced it had struck its own deal with the DoD, resulting in employee pushback and Amodei accusing rival CEO Sam Altman of giving "dictator-style praise" to Donald Trump, for which Amodei later apologized. Trump meanwhile denounced Anthropic in an interview with Politico, saying he "fired them like dogs". On Thursday, the DoD formally declared Anthropic a supply-chain risk and demanded other businesses cut ties - the first time an American company has ever been targeted with the designation - which poses grave financial consequences for the company if fully enacted. 
The feud has intensified an unsettled debate over how AI will be used in warfare and who will be accountable for the result, while also representing one of the most dramatic disagreements so far between the tech industry and the Trump administration. As the military rapidly adopts the technology for its operations, including in the war with Iran, it has turned previously hypothetical situations into real-world ethical tests for AI companies. Anthropic's standoff with the DoD is also the culmination of what researchers see as some of the AI firm's inherent contradictions. It is a company founded on the premise of creating a safe future for AI, which has nevertheless struck major partnerships for classified work with the Pentagon and surveillance tech giant Palantir. Its leadership says it is deeply worried about the existential risks of AI, though they recently dropped a founding safety pledge, citing the speed of industry competition. It has pledged transparency, but like other AI companies has developed its models through a rapacious demand for proprietary data, with court records documenting how it led a secretive effort to scan and destroy millions of physical books to train Claude. Yet recent weeks have shown that there are some red lines which it appears Anthropic will not cross, a rarity within a tech industry that has largely made itself subservient to the Trump administration and to a fear of falling behind industry rivals. The fallout from its resistance to the Pentagon's demands has so far been a public relations victory for Anthropic, with Claude surging in popularity after the deal fell apart and OpenAI left bandaging its reputation. Anthropic did not respond to a request for comment on a set of questions related to this article. 
The longer term implications for Anthropic are less clear, with some defense contractors as well as the US state and treasury departments already stepping away from using its AI models and the Trump administration intent on punishing Anthropic for its dissent. Anthropic has said that it will challenge its supply chain risk designation in court, while Amodei has also reportedly reopened negotiations with the DoD in recent days to try to come to a resolution. Before he was sparring with Sam Altman and the Pentagon, Dario Amodei was one of OpenAI's leading researchers. Amodei joined Altman's firm in 2016 after a stint at Google, taking on a prominent role in developing OpenAI's GPT models and eventually becoming vice-president of research. His younger sister Daniela, meanwhile, served as vice-president of safety and policy, helping oversee the ethical development of OpenAI's models. As OpenAI rapidly advanced its technology and Altman divisively consolidated his authority over the company, however, the Amodeis broke away in 2021, prior to the release of ChatGPT, to found Anthropic - taking several other OpenAI employees along with them. They branded Anthropic as an "AI safety and research company", and central to their new firm was a vow to build safer AI systems that would follow detailed sets of principles they describe as a constitution. In 2024, Amodei published a lengthy essay titled "Machines of Loving Grace" that outlined some of his utopian vision for the future of AI. He argued that AI could eliminate most cancers, prevent nearly all forms of infectious disease and reduce economic inequality. He also presented vague ideas for how AI would integrate into everything from decision-making in the justice system to how the government could provide services such as health benefits. On democracy, however, Amodei was more skeptical. "I see no strong reason to believe AI will preferentially or structurally advance democracy and peace," he wrote. 
Amodei, who received a doctorate in biophysics at Princeton University before becoming enthralled with the potential of artificial intelligence, had for years been concerned about the existential risks of developing AI and seen parallels to the creation of nuclear weapons. One of his favorite books is The Making of the Atomic Bomb by Richard Rhodes, a nearly 900-page Pulitzer-winning account of how nuclear scientists ushered in a new and dangerous world through the technology they created. While a mix of discomfort and pride about becoming the new Robert Oppenheimer is common among CEOs of AI companies, part of the Amodeis' focus on existential risk has ties with a utilitarian movement known as "effective altruism", which became popular in Silicon Valley throughout the 2010s and advocated for projects that would maximize global good. The movement, which has since fallen out of vogue after a series of scandals such as its close association with the disgraced crypto billionaire Sam Bankman-Fried, also featured a subset of people concerned with AI safety - the idea that one of the biggest global threats is the development of AI that could turn against humanity. Although the Amodeis have denied being adherents of effective altruism, many of the company's core principles echo its language, such as vows to "maximize positive outcomes for humanity in the long run". Some of Anthropic's earliest investors, such as Facebook co-founder Dustin Moskovitz, also had connections to the effective altruism movement. Daniela Amodei's husband, Holden Karnofsky, meanwhile co-founded and for years was CEO of one of the largest effective-altruism based philanthropic funding organizations, Open Philanthropy. When Hegseth declared Anthropic a supply-chain risk this past week, he also criticized Anthropic as being "cloaked in the sanctimonious rhetoric of 'effective altruism'". 
The AI safety movement has its critics outside the Pentagon as well, including researchers who believe that concerns about existential threats from artificial intelligence are often a distraction from the more tangible, mundane harms and biases of AI. "They would talk about these existential risks and the misappropriation of AI for bioterrorism. I always thought that those were either too distant or too out of reach," said Sarah Kreps, director of the Tech Policy Institute at Cornell University. "That it didn't quite fully understand risk." The differences between the concerns of the capital S "AI Safety" movement versus the broader field of safety and ethics in AI is a long-running schism within the industry. It also offers an explanation for some of the dissonance about how Anthropic could be so worried about developing AI to benefit humanity while at the same time allowing its models to be used by intelligence and defense agencies for lethal purposes. "There seems to be a little bit of a misunderstanding in the discourse - that because Anthropic have clearly put themselves out as accountable, then they are against the use of their systems in warfare," said Margaret Mitchell, an AI ethics researcher and chief ethics scientist at the tech company Hugging Face. "But that's not true." "It's not that they don't want to kill people. It's that they want to make sure to kill the right people," Mitchell said. "And who the right people are is decided by the government." While Anthropic vowed to build a safer AI, it pursued a different sector of the AI market than its rivals. If OpenAI's ChatGPT is presented as a consumer-forward chatbot that many people treat like a search engine or AI companion, Anthropic has geared Claude more toward enterprise software solutions and integration into the organizational infrastructure of workplaces. 
The distinction, though boring on its face, has made Claude the preferred choice at many organizations and helped make it the first model permitted for classified use in military systems. Anthropic's integration into the military began with a 2024 deal with Palantir to allow Claude to be used within its systems, which already operated in classified environments. The two companies touted the agreement as a way to drastically reduce the resources and time needed for military operations and intelligence gathering. The following year, Anthropic, along with several other major AI companies, struck a $200m deal with the DoD to use their AI tools for military operations. What has since become apparent is that these deals did not include permanent agreements on how the government could use Anthropic's AI or what safety guardrails would be fixed on its models. With the military's indirect access via Palantir's system, Anthropic had less direct control over its technology's use than it would with Claude's website. That discrepancy came to a head in recent months as the government requested that Anthropic loosen its safety restrictions to allow a wider range of use, kicking off the current dispute between the company and the Pentagon. Anthropic's hiring in recent years of former Biden staffers, Amodei's political opposition to Trump and Hegseth's desire to eradicate "wokeness" from the military have all added a political dimension to the standoff. The Pentagon's chief technical officer Emil Michael also appears to hold a personal distaste for Amodei, publicly accusing him of being a "liar" and having a "God-complex". Giving a sense of urgency to the negotiations is the US military's use of Claude for a wide range of operations, including its mission to capture Venezuelan leader Nicolás Maduro and in its war with Iran. 
The Washington Post reported that the military is using Palantir's Maven smart system, which has Claude embedded into it, to determine which sites in Iran to bomb and provide analysis on its strikes. While the dispute Anthropic has run into with the Pentagon has elements unique to AI, it is also emblematic of problems around dual-use technologies, according to experts, meaning products that have both civilian and military applications. A technology that is developed for a broad consumer base and then adapted for use in classified military systems is bound to hit fault lines, since the technology is not tailor-made for specific use cases or built with parameters specifically for military use. Companies can find that their product is being repurposed in ways they may ethically oppose, but have little ability to prevent. "The same technology that underlies finding a bird in a picture underlies finding a civilian fleeing from their home," Mitchell offered as an example. "That's the same type of model, just very slightly different fine tuning." Another issue is that tech companies do not have a perfect window into how their technologies will be used in classified systems, while at the same time the military does not have knowledge of exactly how proprietary technologies like Anthropic's Claude actually work - an issue which law professor Ashley Deeks has called the "double black box". Even contracts on agreed-upon use can be fuzzy, especially given the Trump administration's distaste for legal oversight. "There is an expectation, generally, that parties to a contract are supposed to comply with the contract," said Deeks, a professor at the University of Virginia Law School. "But, of course, contracts need to be interpreted and the military might interpret a phrase one way where the company intended it to mean something else." 
Hanging over the feud is also the broader question of who should decide what AI is used for and a lack of detailed regulation from Congress on autonomous weapons systems. Although neither Anthropic nor the Pentagon believe that a private company should have decision-making power over AI's military applications, right now the company is functioning as one of the only checks on what appears to be the military's expansive desires for weaponizing AI. "Do we want the DoD to be using AI for autonomous weapon systems, and if so, in what settings, with what restrictions, at what level of confidence, what level of risk are we willing to take on?" Deeks said. "It's hard for us to have a sense out in the public about how the DoD is thinking about all this."
[25]
'Straight up lies': Anthropic CEO attacks OpenAI's US military announcement in leaked memo -- as new report suggests company is back in talks with Pentagon
* Anthropic CEO Dario Amodei has sent out a lengthy internal memo
* It attacks OpenAI's messaging around its new Pentagon deal
* Anthropic and Claude may still be in talks with the US government

If you thought the debate around AI company dealings with the US military was going to simmer down, think again: Anthropic CEO Dario Amodei has rather candidly accused rival OpenAI of telling "straight up lies" about its agreements with the Pentagon. Late last week, Anthropic stepped away from a new deal with US intelligence, citing safety concerns over the use of AI in mass surveillance (especially on domestic citizens) and fully autonomous weapons. In response, US officials and the President himself declared that Anthropic's AI bot Claude would no longer be used by government agencies. OpenAI and CEO Sam Altman swiftly moved in, announcing their own deal with the US military that apparently had "more guardrails" than the one offered to Anthropic. Users were far from convinced about the ethics: in the days since, ChatGPT uninstalls have risen sharply, and Claude has rapidly risen up the App Store charts. In an internal memo (via The Information), Amodei's response has been to question OpenAI's claims. He calls OpenAI's overall messaging "mendacious", and suggests it includes phrases like "safety layer" that don't fully hold up. In Amodei's opinion, a lot of the reassurances OpenAI has given are "safety theater".

OpenAI is "placating employees"

Amodei goes on to say that OpenAI is more concerned with "placating employees" than actually safeguarding the use of AI, and questions the caveat in the Pentagon deal that mentions "all lawful use" -- something which can be rather a gray area when it comes to issues like domestic surveillance authorizations. The memo also highlights how AI can be fooled and misused - in the most basic way, by simply lying to it about the nature of the data it's processing - while emphasizing Anthropic's focus on safety and security. 
The approaches that OpenAI is taking here "mostly do not work", Amodei says. OpenAI chief Sam Altman will almost definitely have more to say, and has already admitted the initial OpenAI announcement was "rushed" and "sloppy". In the meantime, the Financial Times reports that Anthropic and Claude may be finding a way back into a deal with the US military after all -- though it's not clear what the terms would be. The report doesn't add much about the latest negotiations, or how they could potentially impact the OpenAI agreement, but it seems as though the relationship between Anthropic and the US government might not be quite over yet.
[26]
Not so fast: Anthropic and US military might do business after all
Anthropic, the AI company behind the popular Claude AI chatbot, received praise last week for standing up to the Trump administration over the U.S. military's use of its AI tools. However, the company may be reversing course. According to a new report from the Financial Times, Anthropic and the U.S. Department of Defense have reopened negotiations on how the government can leverage Anthropic tech for military purposes. The breakdown between Anthropic and the U.S. government began after the AI company received a $200 million contract from the U.S. Defense Department. However, Anthropic CEO Dario Amodei later wanted guarantees that the U.S. government would not utilize its Claude AI models for domestic surveillance or autonomous weapons. The Trump administration refused this request, saying it would use AI technology for any "lawful" purpose. As talks between Anthropic and the U.S. government broke down, Defense Secretary Pete Hegseth even threatened to designate the company as a supply chain risk to national security. President Trump called Anthropic a "radical left, woke company" in a post on Truth Social and ordered the federal government to cease using Anthropic's technology over the following six months. The Financial Times reports that Amodei has now re-entered negotiations in hopes of avoiding the supply chain risk designation. Amodei is now discussing terms of a potential deal with Undersecretary of Defense Emil Michael, who called the Anthropic CEO "a liar" with a "God-complex" in a social media post just last week. "Near the end of the negotiation the [department] offered to accept our current terms if we deleted a specific phrase about 'analysis of bulk acquired data' which was the single line in the contract that exactly matched this scenario we were most worried about," Amodei said in an internal memo to Anthropic employees as reported by The Information. 
"We found that very suspicious." Days after talks between Anthropic and the DoD fell apart, OpenAI announced that it had secured a deal with the U.S. government for the military use of its AI tools in "classified environments." OpenAI quickly received blowback from users, forcing CEO Sam Altman to attempt to address concerns. Just days later, an internal memo from Altman leaked, in which the OpenAI CEO told employees that the company would be amending its agreement with the federal government because the deal had been rushed. Altman stated that the U.S. government assured OpenAI it would not use its technology for domestic surveillance. Amodei's internal memo reportedly knocked Altman, calling OpenAI and the Pentagon's statements about the issues with Anthropic "just straight up lies." Amodei accused Altman of partaking in "safety theater" regarding his presentation of the deal and stated that OpenAI employees who believed the company were "sort of a gullible bunch." If Amodei is successful in securing a new agreement with the federal government, the U.S. military would continue to use the technology, which is reportedly already being used to launch strikes in Iran.
[27]
Anthropic ban may threaten the military's AI advantage over China
Why it matters: The international race for AI advantage is not measured in years, but weeks and days. * Alienating a leading American AI company and ripping and replacing existing tech could give other countries, especially China, a leg up. Driving the news: AI's use in Venezuela, leading to the capture of strongman Nicolas Maduro, and in Iran, still ongoing, gives the Defense Department highly sought-after real-world experience. * "One of the biggest differences between the United States military and China's military is America's extensive operational experience. This just adds to the ledger," Michael Horowitz, a former Pentagon official, told Axios. * That said, if the "dispute between Anthropic and the Pentagon makes it harder for the U.S. to access cutting-edge AI technology," Horowitz added, "it could undermine the benefits from some of that operational experience." Between the lines: That dispute centers, in large part, on how and when Anthropic's tools would be used. The Defense Department, for its part, argues such misgivings could paralyze the military and endanger troops. * "As I started to look at the contracts that had been written during the last administration for the use of AI, I had a whole 'holy cow' moment," Emil Michael, the department's chief technology officer, said Tuesday at the American Dynamism Summit in Washington. * "[There were] dozens of restrictions, and yet these AI models were baked into some of the most sensitive and important places in the U.S. military, where we do exercise combat power." State of play: Defense Secretary Pete Hegseth said the decision to blacklist Anthropic was "final" and the company's relationship with the government and military has "permanently altered." * But as of Tuesday afternoon, no formal supply chain designation had been sent, as the administration continued to rely on Claude for operations in Iran. How it works: The department has for years employed artificial intelligence, autonomy and automation. 
* The applications extend from intelligence parsing to image recognition to drone warfare to less-splashy boardroom-style decision-making. * But those applications are no substitute for testing the technology on the actual battlefield. Reality check: Losing Claude may not necessarily mean losing the hard-earned AI advantage. * "While ripping out Anthropic and replacing it with a comparable model ... brings some disruption, I think the technology is enough in its infancy that putting in place alternative systems will be sufficient to support DOD's overall military AI objectives," Steven Feldstein at the Carnegie Endowment for International Peace told Axios. The bottom line: The Pentagon isn't going to stop using AI on the battlefield.
[28]
Opinion | Anthropic is fighting a battle the country needs
Anthropic CEO Dario Amodei at the World Economic Forum annual meeting in Davos, Switzerland, on Jan. 23. (Fabrice Coffrini/AFP/Getty Images) Standing up to unreasonable and ethically challenging requests from the government ought to be commonplace in a free country. That's what the artificial intelligence powerhouse Anthropic did last week when it insisted on contract terms barring the Defense Department from using its AI software to conduct mass surveillance on the American public or to drive lethal weapons systems not overseen by people. Sadly, such defiance of the Trump administration is rare. While Anthropic stood on principle and countered the Defense Department at potentially tremendous cost to its business, the rest of corporate America, particularly Anthropic's Big Tech competitors, repeatedly responds to bullying with acquiescence. That's one reason we all should cheer for Anthropic in its lonely fight for responsible guardrails on the emerging technology of AI. More than merely winning back the company's lost government business, a victory would help rein in an administration that has made a habit of improperly leveraging its gargantuan power to bend businesses and other institutions to its will. In normal times, this would be a ho-hum business dispute between a prominent corporation and a headstrong branch of government over the proper use of cutting-edge tech. The government isn't wrong to suggest that one company can't dictate how its products should be used, and it certainly can stop working with Anthropic, within the confines of its existing contract, if it wants to. But these are not normal times, and this is no run-of-the-mill emerging technology. Claude, Anthropic's signature product, is one of the most popular artificial intelligence tools for businesses, and CEO Dario Amodei has become a leading voice in the industry on the potential perils of AI.
He has spoken forthrightly of the job losses that could result, said that the new technology will "test who we are as a species" and stressed the need for regulation. Make no mistake, Anthropic is interested in being a warfighting tool, despite President Donald Trump calling it a "radical left, woke company" full of "leftwing nut jobs." It has developed one of the more effective generative AI applications on the market, which is why the Pentagon wanted so badly to continue working with it. Indeed, the five-year-old San Francisco company, last valued at $380 billion, has been supplying the Defense Department, including through a partnership with the analytical software company Palantir -- which has notably friendly relations with the Trump administration. Anthropic's original sin with Trumpworld was its advocacy of AI regulation. Much of Silicon Valley views AI guardrails as potential business handcuffs, a notion repeatedly expressed by the likes of venture capitalist Marc Andreessen, who accused Biden administration officials of trying to "kill" AI, and David Sacks, another Silicon Valley investor who is Trump's AI and crypto czar. Thus, it was probably inevitable that Emil Michael, the Defense Department's undersecretary for research and engineering, would go on the offensive last week when talks with the company soured. In a post on X, Michael labeled Amodei a "liar" with a "God-complex." Asked by Bloomberg to elaborate, Michael said he was concerned Anthropic and other AI companies are "making their own policies that sit on top of democratic policies that are voted on by the people, passed by Congress, [and] signed by the President." The irony is rich here. A decade ago, Michael was a top lieutenant to then-CEO Travis Kalanick at Uber -- a company that made a name for itself by flouting local taxi regulations. Uber's habit of asking neither for permission nor forgiveness is the stuff of Silicon Valley legend. 
That Trump's "Department of War" conducts extrajudicial killing operations against civilian boats it claims, but doesn't prove, are narcotics peddlers also doesn't speak well of its adherence to "democratic policies" or, for that matter, the law. Anthropic likely has a good legal case against the federal government for violating its right to due process, particularly in reference to Trump and Defense Secretary Pete Hegseth's designation of the company as a "supply-chain risk," banning all agencies from doing business with it. The situation echoes off-again, on-again litigation with law firms that charged Trump's government with unconstitutionally cutting off their business. OpenAI, Anthropic's bitter rival whose ChatGPT is a consumer favorite, demonstrated its opportunism by swooping in to take Anthropic's place with the Pentagon. After popular backlash against ChatGPT, it won concessions similar to those Anthropic had been seeking. Anthropic's Claude, which hasn't focused on consumers, shot to the top of the list of most downloaded apps in Apple's app store over the weekend, amid widespread criticism of OpenAI's move. "I am confused about why the Pentagon would accept this language when they just tried to nuke Anthropic for asking for something very similar to this," observed AI legal researcher Charlie Bullock. The facts, coupled with comments by the president and his minions, would seem to validate Anthropic's litigation narrative that it had been unfairly targeted. AI needs rigorous regulation if there is to be hope that its use will be safe. And the country needs more companies with the courage to lose business when the Trump administration wants to dictate terms that could lead to dangerous outcomes.
[29]
Top Pentagon official recalls the 'whoa moment' when defense leaders realized how indispensable Anthropic is and saw the risk of losing access | Fortune
The Defense Department's reliance on Anthropic's AI came as a shocking realization that ultimately led to their dramatic schism, according to a top Pentagon official. Emil Michael, the department's under secretary for research and engineering as well as its chief technology officer, detailed the events leading up to the public feud in a Friday episode of the All-In podcast. After the U.S. military's raid on Venezuela in early January that captured dictator Nicolas Maduro, Anthropic asked Palantir if its AI was used in the operation. While Anthropic has characterized the inquiry as routine, the Pentagon and Palantir interpreted it as a potential threat to their access. "I'm like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?" Michael recalled. "So I went to Secretary Hegseth, I said this would happen and that was like a whoa moment for the whole leadership at the Pentagon that we're potentially so dependent on a software provider without another alternative." Until recently, Anthropic's Claude was the only AI model authorized in classified settings. The San Francisco-based startup has said it's patriotic and seeks to defend the U.S., but won't allow its AI to be used in mass domestic surveillance or autonomous weapons. The Pentagon insisted it would use the AI in lawful scenarios and refused to abide by any limits from the company that would go beyond those constraints. After failing to reach a compromise last week, President Donald Trump ordered the federal government to stop using Anthropic while giving the Pentagon six months to phase it out. Defense Secretary Pete Hegseth also designated the company a supply-chain risk, meaning contractors can't use it for military work. For now, the military continues to use Anthropic during the U.S. war on Iran, as AI helps warfighters identify potential targets at a rapid pace. 
During his podcast appearance, Michael raised the concern that a rogue developer could "poison the model" to render it ineffective for the military, train it to hallucinate purposefully, or instruct it to not follow instructions. He then contacted OpenAI, which eventually reached a deal similar to the one Anthropic had. Elon Musk's xAI was also brought into the classified fold, while the Pentagon is trying to get Google's AI allowed into classified settings too. "I'm not biased," Michael said. "I just want all of them. I want to give them all the same exact terms because I need redundancy." He acknowledged that Anthropic had become "deeply embedded" in the department while other AI companies hadn't pursued enterprise customers as aggressively by providing forward-deployed engineers. The falling-out between the Pentagon and Anthropic highlighted the clash of cultures between the defense establishment and Silicon Valley, which has its roots in military innovations but has since turned squeamish about seeing its technology used for war. In fact, a top robotics engineer at OpenAI announced her resignation from the company on Saturday, citing the same concerns Anthropic raised. "This wasn't an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got," Caitlin Kalinowski posted on X and LinkedIn.
[30]
Anthropic is having a huge 2026. It's only March
Anthropic CEO Dario Amodei (Krisztian Bocsi/Bloomberg via Getty Images) Anthropic spent years being the responsible AI company. In 2026, it became the most disruptive one. The same tools it developed under strict safety guidelines are now destabilizing enterprise software, reshaping how engineers work, and putting it at the center of a full-blown Pentagon standoff. The AI industry's main sport for the last few years has been benchmarks. Who scores highest, whose context window is longest, whose demo lands hardest at a conference. Anthropic played that game, too. It also made a sustained push into a specific market: software engineers, highly paid and spending their days doing exactly the kind of work AI was getting good at. That focus is now paying off in a way that's changed the conversation about who's actually winning this race. Claude Code had a ChatGPT moment, but for engineers. Engineers started shipping software at speeds that felt almost physically impossible. Anthropic executives have been vocal about what that means for the profession. At Davos, CEO Dario Amodei predicted AI could handle most or all of software engineering work, end to end, within six to 12 months. Claude Code's creator declared the job title itself might soon disappear. Anthropic's own hiring tells a more complicated story. The company's open software engineering roles have climbed 170% since January 2025, according to one tracker, with the curve accelerating. What's harder to dismiss is the outside evidence about the vibe shift on vibe coding. Paul Ford, a technologist and longtime software industry observer, wrote in The New York Times that something changed in November. Before then, AI coding tools were useful but halting. After, he was finishing projects that had sat in folders for a decade, on his subway commute. His friends were noticing the same thing. The software engineer corner of the internet lit up with similar accounts. "I am less valuable than I used to be," Ford wrote.
When Anthropic published a blog post late last month claiming Claude Code could translate legacy COBOL into modern languages, IBM lost roughly $40 billion in market cap in a single session. The broader sell-off wiped more than a trillion dollars from Big Tech valuations. Legal software stocks dropped. Design stocks dropped. Analysts pointed out that mainframe modernization involves far more than converting code, and IBM's technical moat runs deep. Nvidia $NVDA CEO Jensen Huang called the panic "illogical." Franklin Templeton's CEO told the Financial Times it looked like a genuine long-term threat to enterprise software's business model. For Anthropic at least, the disruption was good for business. The share of U.S. companies paying for its tools hit 20% in January, up from roughly 4% a year earlier. OpenAI is still larger, but its share of enterprise spending fell from 50% to 27% over the same period while Anthropic's climbed to 40%. Then came demands from the Pentagon. The Trump administration ordered federal agencies to stop using Anthropic's technology and labeled the company a supply-chain risk, a designation normally reserved for Chinese firms under espionage suspicion. The trigger was a dispute over guardrails, with Anthropic refusing to give blanket permission for its tools in autonomous weapons systems or mass surveillance. Within hours, OpenAI announced a new Pentagon deal. CEO Sam Altman said publicly that it included the same prohibitions on autonomous weapons and mass surveillance that Anthropic had sought. Not everyone believes that, and the fallout has been swift in both directions. Anthropic's app shot to the top of the App Store. A boycott campaign targeted OpenAI. It turns out that sticking to your principles, or at least being seen to, is its own kind of marketing. 
The supply-chain risk designation is a real threat though, one that could ripple through Anthropic's key relationships with Amazon $AMZN and Google $GOOGL, both significant federal contractors and two of the company's biggest backers. And the company is not yet profitable. But while the Pentagon drama plays out, engineers are still on their subway commutes, finishing in an hour what used to take a week. That's the thing that started all this. And it's still only March.
[31]
Dario Amodei Says Trump Is a Dictator
Anthropic CEO Dario Amodei's insistence that the company's AI models may not be used for mass surveillance of Americans or directing killer drones has kicked up a major storm. Defense secretary Pete Hegseth and president Donald Trump came out swinging, directing all government agencies to stop using the company's software "effective immediately" and labeling the company as a "supply chain risk," sending shockwaves across the entire tech industry. No love is lost between the AI leader and the White House. In a leaked Friday memo to employees obtained by The Information, Amodei ignited the powder keg by calling out the president -- as well as his archnemesis, OpenAI CEO Sam Altman, for bending the knee. "The real reasons [Department of War] and the Trump admin do not like us is that we haven't donated to Trump," he wrote, adding that "we haven't given dictator-style praise to Trump (while Sam has)." It's true that OpenAI president Greg Brockman has donated $25 million to a Trump super PAC. Altman also donated $1 million to Trump's inauguration fund in late 2024. After talks between Anthropic and the Pentagon fell apart -- a feud that reportedly started after Anthropic's Claude was found to have been used during the attacks on Venezuela -- Altman seemingly saw an opportunity to swoop in and cash in, triggering a major PR crisis as a considerable number of users accused the company of giving in to the Trump administration's demands. In the memo, Amodei also argued that "we have supported AI regulation, which is against their agenda, we've told the truth about a number of AI policy issues (like job displacement), and we've actually held our red lines with integrity rather than colluding with them to produce 'safety theater' for the benefit of employees." Whether Anthropic will continue to hold those "red lines" going forward remains to be seen.
As Bloomberg reported on Wednesday, less than a week after the memo was sent, talks between the company and the Pentagon have resumed, highlighting ongoing efforts to patch things up. If they were to bury the hatchet, Altman's attempts to shoehorn OpenAI into the situation could be greatly complicated. What that would mean for Anthropic's recent classification as a supply chain risk remains entirely unclear as well. The feud put the military in a very awkward position as Anthropic's Claude chatbot continues to serve a critical function during the United States' attacks on Iran -- regardless of the president and Hegseth's clear-cut order to stop using it immediately. "Ultimately, this is about our warfighters having the best tools to win a fight and you can't trust Claude isn't secretly carrying out Dario's agenda in a classified setting," an administration official told Axios on Wednesday. Another source said that Anthropic doesn't want to control the Pentagon's use of its chatbot, a facet of the talks that apparently wasn't captured in the media. In short, now that the feud has grown into a major battle of AI companies trying to establish themselves as the ethical choice -- if there even could be one given the atrocities being committed in Iran -- the broader tech industry remains unimpressed. Trump's decision to label Anthropic as a supply chain risk, a term usually reserved for companies being run by US adversaries, has taken Silicon Valley leaders aback. A Big Tech industry group, whose members include AI chipmaker Nvidia, Amazon, and Apple, sent a public letter to Hegseth, arguing that they are "concerned" about the "Department of War's consideration of imposing a supply chain risk designation in response to a procurement dispute," as quoted by Reuters.
The designation could "undermine the government's access to the best-in-class products and services from American companies that serve all agencies and components of the federal government," the letter reads.
[32]
Trump's Former AI Adviser Is Furious
Dean Ball warns that the targeting of Anthropic is just one piece of a much larger political breakdown. Dean Ball helped devise much of the Trump administration's AI policy. Now he cannot believe what the Department of Defense has done to one of its major technology partners, the AI firm Anthropic. After weeks of negotiations, the Pentagon was unable to force Anthropic to accede to terms that, in Anthropic's telling, could involve using AI for autonomous weapons and the mass surveillance of Americans, as my colleague Ross Andersen reported over the weekend. So the government has labeled the company a supply-chain risk, effectively plastering it with a scarlet letter. The Pentagon says that this means Anthropic will be unable to work with any company that contracts with the administration. That could include major technology companies that provide infrastructure for Anthropic's AI models, such as Amazon. The supply-chain-risk designation is normally reserved for companies run by foreign adversaries, and if the order holds up legally, it could be a death blow for Anthropic. Read: Inside Anthropic's killer-robot dispute with the Pentagon Ball, now a senior fellow at the Foundation for American Innovation, was traveling in Europe as all of this was unfolding last week, staying up as late as 2 a.m. to urge people in the administration to take a less severe approach: simply canceling the contract with Anthropic, without the supply-chain-risk designation. When his efforts failed, Ball told me in an interview yesterday, "my reaction was shock, and sadness, and anger." In the aftermath of the decision, Ball published an essay on his Substack casting the conflict in civilizational terms; the Pentagon's ultimatum, in his reckoning, is "a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel." 
The action, he wrote, is a repudiation of private property and freedom of speech, two of the most fundamental principles of the United States. In today's America, Ball argued, the executive branch has become so unstoppable -- and passing laws has become so challenging -- that the president and his officials can do whatever they want. (When reached for comment, a White House spokesperson told me in a statement that "no company has the right to interfere in key national security decision-making.") Yesterday, I called Ball to discuss his essay and why the standoff with Anthropic feels, to him, like such a dire sign for America. Ball is far from a likely source of such harsh criticism: He's a Republican with close ties to the Trump administration who departed on good terms after its AI Action Plan was published, and an avid believer that AI is a transformational technology. Other figures who are influential among conservatives in the tech world, including the Anduril Industries co-founder Palmer Luckey and the Stratechery tech analyst Ben Thompson, have vigorously supported Defense Secretary Pete Hegseth's move. Luckey, a billionaire who builds drones for the military, suggested on X that crushing Anthropic is necessary to defend democracy from oligarchy. Thompson wrote yesterday in his widely read newsletter that "it simply isn't tolerable for the U.S. to allow for the development of an independent power structure -- which is exactly what AI has the potential to undergird -- that is expressly seeking to assert independence from U.S. control." Thompson likened the necessity of destroying Anthropic to that of bombing Iran. But Ball sees the Trump administration's strong-arming of the tech industry as a sign of his country falling apart -- a decline, he told me, that he has been watching for decades, and which the AI revolution might only accelerate. This conversation has been edited for length and clarity. 
Matteo Wong: A number of people have described the Pentagon's designation of Anthropic as a supply-chain risk as illegal or poorly thought-out. Why did you go a step further in saying that this is not just bad policy, but catastrophic? Dean Ball: What Secretary Pete Hegseth announced is a desire to kill Anthropic. It is true that the government has abridged private-property rights before. But it is radical and different to say, brazenly: If you don't do business on our terms, we will kill you; we will kill your company. I can't imagine sending a worse signal to the business community. It cuts right at the heart of everything that makes us different from China, which is rooted in the idea that the government can't just kill you if you say you don't want to do business with it, literally or figuratively. Though in this case, I'm speaking figuratively. Wong: Walk me through the multi-decade decline you situate the Pentagon-Anthropic dispute in. What precisely about the American project do you see as being in decay? Ball: America rests on a foundation of ordered liberty. The state sets broad rules that are intended to be timeless and universal, and implements those rules. We have not always done that perfectly, but the idea was that we were always getting better. And during my lifetime, a lot of things have started to break down. It reminds me very much of the science of aging. A very large number of systems start to break down, all at similar times for correlated reasons, and then each one breaking down causes the others to do worse. I think that something similar happens with the institutions of our republic. The fact that you can't, for example, really change laws means that more and more gets pushed onto executive power. Once that's the case, you have this boomerang -- I only know that I'm going to be in power for four years in the White House, so what I need to do is use as much executive power as I can to cram through as much of my agenda as possible.
And we've seen that just get more and more and more extreme, really, since George W. Bush. It's just these swings back and forth, and it feels like we're departing from the equilibrium more and more. It's possible for something to go from being a crime in one presidential administration to not a crime in another, with no law changing. The state can deprive you of your liberty -- that's the most important thing in the world. We can't have that at the stroke of the executive's pen. Read: Anthropic is at war with itself There are already Democrats who are talking about how if you work too closely with the Trump administration, when they get in power, they're going to break your companies up. Right now, with Anthropic, Republicans are punishing a company that is associated with the Democrats, and I suppose in some sense that because I'm a Republican, I can cheer that on. But the point of ordered liberty is for that never to happen -- because if I do that to you, when you take power, you're going to do it to me even worse, and then around and around we'll go. If you read any "new tech right" thinker on these topics -- Ben Thompson, whom I've loved for years -- saying it's a dog-eat-dog world, that's the way it goes. Palmer Luckey, same thing -- equating property expropriation with democracy. These are people who have fully accepted that we live in the tribal world and that the republic is already dead. Wong: You were the primary author of the White House's main AI-policy document. How does the Pentagon's targeting of Anthropic differ from your own vision for good AI policy? Ball: I don't think the actions of the Department of War are consistent with the persuasion toward AI laid out in the AI Action Plan. But more important than that, they're not consistent with the persuasions toward AI articulated by the president in many, many public appearances. The people who were involved with this incident were not, by and large, involved in the creation of the AI Action Plan. 
They looked at the cards on the table and made their calls. I assume that they did what they thought was best at the time. I don't think they acted with particularly great wisdom. Maybe I'm wrong; I don't know. But they made very different decisions from the ones I would have made. Wong: As all of these negotiations were happening, the Pentagon was also preparing to bomb Iran. The war seems like a pretty clear example of the stakes of the growing executive authority you're describing. Ball: We live in a state of perpetual emergency being declared, and that has all sorts of corrosive effects. Because then it's like, Oh, well, did you know that Anthropic attempted to impose usage restrictions on the U.S. military during a national-security emergency? And it's like, yeah, we've been living in a national-security emergency for my entire life, or at least since 9/11. We've been living in a state of endless emergency, perpetual emergencies, perpetual war. This is just cancerous. Wong: One other possibility, of course, is that the growing backlash to the Pentagon's decision to target Anthropic could actually strengthen the nation's institutions -- that the courts or Congress, for instance, could ultimately protect Anthropic or prevent such future standoffs. Ball: The optimistic version of my interpretation is that there's enough about the American system that's resilient that these things will be reined in by the judiciary. I don't think you can bet against America. The country has been remarkably resilient over time. At the same time, I view the sickness that we face as being pretty deep. And I also view the challenges that we have to navigate together as being more profound than any we've faced in our history. So I harbor fairly significant concerns that this time will be different. But I remain fundamentally an optimist. If I were a pessimist, I wouldn't be sitting here talking to you.
[33]
Anthropic CEO: We're trying to "deescalate" Pentagon AI standoff to reach "some agreement that works for us and works for them"
Anthropic CEO Dario Amodei told investors on Tuesday that his company is still in talks with the Pentagon "to try to deescalate the situation" following a clash over AI guardrails in the military. Speaking at the Morgan Stanley Technology, Media and Telecom Conference in San Francisco, Amodei said Anthropic and the Department of Defense "have much more in common than we have differences." After expressing his belief in "defending America," Amodei added "we've never questioned specific military operations. We don't see ourselves as having an operational role." Amodei told the audience that Anthropic is still talking to the Pentagon "to try to de-escalate the situation and come to some agreement that works for us and works for them." His remarks came on the heels of a public standoff with the Pentagon that culminated in President Trump ordering the military to stop using Anthropic and Defense Secretary Pete Hegseth labeling the company a "supply chain risk." That designation, which Amodei said he would challenge in court, effectively limits military contractors from working with Anthropic. A source directly familiar with the situation said that in the five days since Mr. Trump canceled Anthropic's government contracts, company executives have expressed regret to Pentagon officials over the misunderstanding over Anthropic's role in military action. The Department of Defense declined our request for comment on this story. Hours after Hegseth said the company would be deemed a supply chain risk, Anthropic CEO Dario Amodei told CBS News exclusively that the label was "retaliatory and punitive," and he pledged to fight the designation in court. Amodei said Anthropic sought to draw "red lines" in the government's use of its technology, specifically preventing its use for mass surveillance of Americans or for fully autonomous weapons. He said "we believe that crossing those lines is contrary to American values, and we wanted to stand up for American values." 
"Disagreeing with the government is the most American thing in the world," Amodei said. "And we are patriots. In everything we have done here, we have stood up for the values of this country." Emil Michael, the Pentagon's chief technology officer, told CBS News last Thursday the military had offered written acknowledgements of the federal laws and military policies that restrict mass surveillance and autonomous weapons -- though Anthropic said that offer was "paired with legalese" that allowed the guardrails to be ignored. "At some level, you have to trust your military to do the right thing," Michael said.
[34]
Congress must prevent AI surveillance. The Anthropic feud proves it | Ashley Gorski and Patrick Toomey
The company's clash with the Pentagon is a fight over the future of American privacy
The US military wants to use its state-of-the-art AI tools to supercharge surveillance against Americans, making it easier than ever to monitor our movements, our search history, and our private associations. That's one of the major takeaways from a dramatic dispute between the Department of Defense and some of the leading AI companies in America. What this clash highlights most of all, however, is just how easily AI surveillance systems can be turned against the people in this country, and the urgent need for Congress to intervene. Last week, the Pentagon and Donald Trump announced that the government would cease using Anthropic's AI products, asserting that the safety guardrails proposed by the company - no mass domestic surveillance or fully autonomous weapons - were unacceptable. The Trump administration went even further, claiming that these positions render Anthropic a "supply chain risk", and prohibited anyone doing business with the US military from conducting commercial activity with Anthropic in their military work. But this is no ordinary contract dispute. This is a fight over the future of American privacy, and it will ultimately affect all of us. At the heart of the dispute is the government's assertion that it should be able to use AI for any "lawful" purpose. The problem is that the law is running decades behind the technology. The law doesn't account for a world where cellphones are tracking devices; where our internet browsing is as revelatory as a personal diary; where our data can be bought on the open market; and where AI would let the government seamlessly integrate the data it buys into the largest and most comprehensive set of domestic dossiers ever created. Compounding this problem, as we saw with some of the worst surveillance abuses after September 11, the executive branch often secretly decides what is "lawful".
Without clear and specific rules from Congress, the Trump administration could rubberstamp a domestic spying program and deem it lawful because they said so. Given what we already know about the government's quenchless thirst for our data - and how willing it has already been to sidestep our fourth amendment rights against unreasonable searches and seizures - this prospect is chilling to say the least. The defense department and other federal agencies already take the position that they can "lawfully" purchase Americans' private data - including location history and web-browsing records - and search that data without a court order. Although bipartisan coalitions in Congress have long criticized these warrantless searches, they've so far failed to end them. But the problem is poised to become far worse with AI. According to New York Times reporting on the contract negotiations, the Pentagon wanted to apply AI to "the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data". Not only does the reporting confirm that the government is, in fact, collecting Americans' private data in bulk, but it shows that the Pentagon wants to deploy the world's most powerful tools to exploit this immense and controversial pool of data. AI tools could allow the government, at the touch of a button, to extract information and inferences about a person that previously might have taken an agent or analyst days or weeks to develop. These tools promise to combine data from disparate sources, find patterns, and distill the results into a detailed picture of someone's movements, political views or associations. As just one example: the government may purchase a large dataset containing the movements of thousands of cellphones, but often those trails of digital data don't have names assigned to them, requiring additional analysis.
AI can conduct that kind of analysis faster than any human, while integrating other data streams for an even more comprehensive picture of a person's life. And it can do this work at scale. That's especially alarming when one considers the Trump administration's race to access voting data, health records, and tax information. Anthropic's fight might be with the Pentagon, but other government agencies purchase commercial data in bulk too. As the ACLU's Freedom of Information Act work has confirmed, ICE has repeatedly bought cellphone location data and information from license plate databases to go after immigrant communities. And over the past few months, federal agents have also been collecting license plate data and faceprints from some of the people protesting and documenting their activities in public. Against this backdrop, there is every reason to be concerned about the powers that these agencies are amassing via AI. As in other contexts, AI tools remove human friction from the work of surveillance, magnifying the dangers of digital spying by making it cheaper, faster and more detailed. If Congress fails to step in, the application of AI to "lawfully" acquired data could quickly lead to a dystopian government database filled with the most telling details about all of us. The consequences of an AI-powered mass domestic database would be devastating: large-scale invasions of privacy, an extreme chill on the freedoms of speech and association, and the targeting of vulnerable or unpopular populations for further scrutiny or worse. For society as a whole, these effects are corrosive. And as we know firsthand from our clients at the ACLU, government surveillance can feed into discriminatory profiling and watchlists. It can result in unwarranted investigations and prosecutions. And it leaves people looking over their shoulders for decades. 
On Monday night, OpenAI - which had announced an agreement with the Pentagon after Anthropic's fell through - said it was amending its deal to add language protecting the civil liberties of US citizens and permanent residents. While this is a welcome development, the new language is riddled with loopholes. And it underscores a bigger problem: our rights shouldn't rise or fall with the whims of one CEO. Whether government agencies should be buying Americans' private data, and whether they should be applying AI tools to analyze that data, are immensely consequential questions. The answers to these questions shouldn't depend on contracts that can change at any time (and in secret); nor should they depend on the individual viewpoints or market motives of tech executives. People in the United States deserve a real and lasting legislative solution to protect their privacy. Congress must step in. Congress can start by passing the bipartisan Fourth Amendment Is Not For Sale Act, a commonsense reform bill that bans the government from buying data that it would otherwise need a warrant to obtain. Congress must also impose basic guardrails on the government's use of novel AI tools: safeguards that protect against warrantless invasions of our privacy and prohibit uses that threaten our ability to speak out and associate freely online and off.
[35]
OpenAI just dragged its own brand
It sounds like a brag-worthy business coup: not just snagging a high-profile client, but doing so just after your chief rival's deal with that same client unraveled in a brutally public way. But artificial intelligence pioneer OpenAI's Pentagon deal didn't end up being a brand-halo event. To the contrary, "it just looked opportunistic and sloppy" -- and that's the judgment of OpenAI's CEO, Sam Altman. Given widespread concerns about the potential downsides of AI, ranging from mass layoffs to robot overlords, "opportunistic and sloppy" are just about the last attributes OpenAI wants to be associated with, perhaps especially in the context of a Department of War partnership. But this isn't just an image headache; the brand backlash has included a surge of signups for the rival OpenAI seemed to have bested, Anthropic, whose Claude AI leapt past OpenAI's ChatGPT to the top of the app charts. Some of that surge can be attributed to Anthropic's behavior and rhetoric matching up to its brand image as a thoughtful steward of AI that's mindful of its possible consequences. It's a brand image that was tested recently when Anthropic wanted to add some caveats to the Pentagon's desire to use its tech for "all legal purposes." Anthropic's Claude, then the only AI agent cleared for use in classified operations, had already been used to plan the recent military action against Venezuela (and was used in preparing for the attack on Iran). But this evidently harmonious relationship snagged on Anthropic seeking guardrails that would prevent its technology from being used to enable mass surveillance or autonomous lethality. The Pentagon pushed back, and over a few weeks, this spiraled into an acrimonious and very public split that included petulant criticism from the president. The Department of War not only signalled it wanted more compliance as it added AI partners, but threatened to kneecap Anthropic by labeling it a "supply chain risk."
[36]
Back to the negotiating table? Might there be peace in our time between Anthropic and Trump 2.0 or has the war of words gone too far?
I fired Anthropic. Anthropic is in trouble because I fired them like dogs. Having already attacked the "left-wing nut jobs" at the company, Donald Trump wasn't holding back on his view about AI provider Anthropic on Thursday. The comment from the US President came shortly after Anthropic CEO Dario Amodei told the Morgan Stanley Technology, Media and Telecom Conference that the firm had more in common with the Department of War than differences, despite being canned from its $200 million contract last week due to its insistence that ethical red lines around the use of its tech remain intact. It also comes as reports circulate that Anthropic and the Department have re-opened negotiations to reach a compromise, despite the former now being barred by Trump 2.0 from any form of Federal Government work. But if that is the case, it remains to be seen what the impact might be since an internal memo from Amodei to staff went public, with some allegations and assertions that seem unlikely to improve the Presidential mood. In the memo, revealed by The Information, the Anthropic CEO suggested that rival OpenAI was more acceptable to the current administration in Washington for political funding reasons: The real reasons [the Department of War] and the Trump admin do not like us are that we haven't donated to Trump (while OpenAI/[OpenAI President] Greg [Brockman] have donated a lot). (OpenAI President Greg Brockman and his wife reportedly donated around $25 million to a pro-Trump super PAC, making them the largest donor, while CEO Sam Altman handed over a $1 million personal donation to Trump's inauguration fund.)
Amodei added: We haven't given dictator-style praise to Trump (while Sam has), we have supported AI regulation which is against their agenda, we've told the truth about several AI policy issues (like job displacement), and we've actually held our red lines with integrity rather than colluding with them to produce 'safety theatre' for the benefit of employees, (which I absolutely swear to you is what literally everyone at DoW, Palantir [Anthropic's business partner for government work], our political consultants etc, assumed was the problem we were trying to solve. And while Altman, who signed his firm up to take Anthropic's place, has gone out of his way several times this week to say that he does not believe his competitor should be blacklisted as a supply chain risk, Amodei was rather more bombastic about his counterpart, citing "straight-up lies" and calling OpenAI's own proclaimed red lines "safety theater": The main reason they accepted the deal and we did not is that they cared about placating employees and we actually cared about preventing abuses. Ouch! Meanwhile Altman used his appearance at the Morgan Stanley gig on Thursday to fire some shots back over Amodei's head, arguing that government is supposed to be more powerful than private companies: We have to trust the democratic process...This process is messy. This process has some deep flaws, but it is better than all other systems. If we start abandoning that process and our commitment to it because, you know, some people don't like the person or people currently in charge, that is challenged no matter what. I think it's bad for society no matter what. So what are the chances of some form of compromise being reached? Is Anthropic ready to back down on its red lines after nearly a week of sticking steadfastly to them?
In the memo to staffers, Amodei gave more information on what the sticking points had been with the initial discussions, suggesting that there wasn't that much that needed to be dealt with: Near the end of the negotiation the [Department] offered to accept our current terms if we deleted a specific phrase about 'analysis of bulk acquired data' which was the single line in the contract that exactly matched this scenario we were most worried about. One step forward, two steps back? The name calling goes on - Pete Hegseth, US Secretary of War, called Amodei a liar with a god complex - which isn't a healthy backdrop for any renewed negotiations that may or may not be underway. That said, while most vendors dealing with government are maintaining a pragmatic silence while they can, tech sector lobbying for a change of stance is growing. The Information Technology Industry Council (ITI), whose members include the likes of Amazon, Google, and Nvidia, has warned the Pentagon that a supply chain risk designation could "undermine the government's access to the best-in-class products and services from American companies that serve all agencies and components of the federal government." While it didn't mention Anthropic by name, there can be little doubt who it is talking about. The ITI urges officials to resolve disputes through dialog rather than threats. A noble sentiment, but has that ship sailed? Earlier today, the DoW told Bloomberg: DoW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately. In an interview with CBS last weekend, Amodei had said: All we've received is a tweet. We haven't received an actual supply chain designation. When we receive some kind of formal action, we will look at it, we will understand it, and we will challenge it in court. Now he has that formal notification. Game on? More to come, no doubt as the controversy rumbles into its second week.
[37]
Anthropic's investors could be the key to ending its Pentagon standoff -- but some investors have opposite views | Fortune
In 2023, as Dario Amodei was fundraising for the company's $750 million Series D round, an investor seated with the CEO at a dinner recalled him getting worked up in a conversation about safety issues around artificial intelligence. "When he was talking about the risks of AI, he contorted," says the investor. "His body twisted. He was really emotionally showing how scared he was." It made an impression on the investor, who spoke on condition of anonymity for fear of harming their business, and who said they believed large language models would never be successful if they weren't trustworthy. Now Anthropic's strong stance on AI safety, and its investors' commitment to that position, is being tested like never before as the company navigates a high-stakes standoff with the U.S. Department of Defense. By insisting that its Claude AI technology adhere to certain restrictions when used by the military, Anthropic has incurred the wrath of President Donald Trump and War Secretary Pete Hegseth, who have retaliated by trying to short-circuit Anthropic's business. For investors in Anthropic, which recently raised $30 billion at a $380 billion valuation and is widely expected to have an initial public stock offering soon, the government's move to designate Anthropic as a "supply-chain risk" could have devastating consequences. How these investors lobby Anthropic behind the scenes -- either pushing for conciliation or urging it to hold firm -- could shape the outcome of the standoff. Fortune spoke with six people who have invested in Anthropic to get a sense of how this key constituency is feeling about the situation, and found that opinions were not unified despite the company's longstanding forthrightness about its values. "I'm disappointed matters of national security implications are being aired in public," says J.D. Russell, who runs the investment firm Alpha Funds, and holds a position in Anthropic.
Russell said he respected Anthropic's positions on mass surveillance and autonomous weapons, but said that "you have to be realistic that adversaries to the U.S. are pursuing those capabilities with far fewer constraints." Jacques Tohme, managing partner of the firm Amerocap, said simply that he "did not agree" with the position the company had taken. Still, many of Anthropic's investors backed the company in the dispute -- particularly because of its disciplined stances on some of the most disputed topics in AI right now. The cofounders, after all, left OpenAI in 2021 explicitly to develop AI systems that were powerful, but also safe for humanity. Many of Anthropic's early investors also have ties to the effective altruism community, a movement focused on how to do the "most good" possible, and the company has a strong investor base in Europe, which tends to be much less sympathetic to the U.S. Department of Defense. One of those investors, Alberto Emprin, who runs the firm 3LB Seed Capital, published his perspective in support of Anthropic, in Italian, on Substack earlier this week, noting that Amodei, through his position, had become "a kind of champion of ethics in the AI era." "Amodei's argument is, on the surface, unimpeachable: artificial intelligence is still imperfect, it makes mistakes, and the idea that due to a hallucination or a training bias the 'wrong person' could be killed is ethically intolerable," Emprin wrote. Among the investors that Fortune spoke to, some invested directly, while others did so via special-purpose vehicles, and one of the investors had recently sold their position on the secondary market. Ultimately, the voice of the largest investors will weigh more than the roughly 270 others on Anthropic's cap table. Among the largest is Amazon, whose CEO, Andy Jassy, met with Hegseth recently and declined to take Anthropic's side when the matter came up, according to Semafor.
Jassy has also met with Anthropic's Amodei in recent days, according to Reuters, while Lightspeed and Iconiq have reached out to other investors to explore a solution. Finding consensus among Anthropic's investors may not be easy, however. While not all investors have been pleased with the hardline stance that Anthropic CEO Dario Amodei has taken, there's also a variety of views about how damaging the Pentagon spat could be for the company. The U.S. government contract was small, reportedly about $200 million, or roughly 1% of Anthropic's annual revenue, according to Bloomberg. Russell, the Alpha Funds manager, said he didn't expect the Pentagon's move to have "any real negative impact on them," as it's "really just one contract." Depending on how the supply chain risk designation is interpreted, however (Anthropic is widely expected to fight it in court), it could lead to broader fallout by forcing any company doing business with the DoD to stop using Anthropic products. Other federal agencies, including the State Department and Treasury Department, have also said they will no longer use Anthropic. On the flip side, some Anthropic investors say they're heartened by the surge in goodwill the company has reaped by standing firm on its principles. Patrick Hable, an investor who runs the firm 3 Comma Capital, said he believed the whole issue would be a "net positive" for the company. "Contracts lost but millions of supporters won," he said. But, he added, "Even if that would be a net negative, he [did] the right thing." In the days since the Pentagon announced a deal with OpenAI instead of Anthropic, Anthropic's Claude became the most downloaded app in the Apple and Android app stores. And Anthropic saw its most user signups ever on Monday, the company said.
As Amodei reportedly told employees in a lengthy internal memo published by The Information, which criticizes OpenAI's Sam Altman and explains the fallout with the Defense Department, the public is seeing Anthropic "as the heroes."
[38]
Anthropic vs. the Pentagon: A threat to America's AI boom
Anthropic CEO Dario Amodei (Samyukta Lakshmi/Bloomberg via Getty Images)
The ongoing feud between Anthropic and the Department of Defense has been covered as a tech story, a political soap opera -- culture wars meet AI policy. But the real story is arguably even larger: Can the U.S. win an AI arms race against China when its own government attacks the American companies doing the racing? Here's what to know. More than that, the designation could make Anthropic untouchable across the larger U.S. economy, cutting off not just Pentagon business but any company that does Pentagon business or wants to preserve favor and optionality -- which would amount to a corporate death sentence for Anthropic. With a $380 billion valuation, annual revenues thought to amount to about $20 billion, and 80% of that coming from enterprise customers, Anthropic is essentially facing government-ordered destruction. On Thursday, The Information reported a memo CEO Dario Amodei circulated within Anthropic, which named the deeper disagreements plainly. Anthropic hadn't donated to President Donald Trump or given him "dictator-style praise," Amodei said. Anthropic had also welcomed regulation, told the truth about AI policy issues like job displacement, and helped expose the threat of mass government surveillance of citizens rather than engaging in what Amodei termed "safety theater." Within days of the Pentagon's threat, Amodei was on the phone personally with Andy Jassy, the CEO of Amazon, which is among Anthropic's largest investors. Major venture firms with stakes in Anthropic were simultaneously working their own contacts inside the Trump administration, and coordinating with other investors on potential solutions. The immediate goal appears to be preventing the supply-chain risk designation from being formally implemented. The larger and longer-term goal, it is reasonable to conclude, is to preserve the possibility of large-scale liquidity events like an IPO.
Some investors told Reuters they were frustrated that Amodei had "antagonized rather than cultivated" Pentagon officials -- "an ego and diplomacy problem," one said. But they also acknowledged that Amodei is trapped, stuck between adherence to principles on the one hand and, on the other, capitulating in a way that could alienate the employees and customers who have flocked to Anthropic precisely because of his stance. Investors in Chinese AI firms must navigate meaningful and sometimes expensive political risks, with government crackdowns looming large in recent memory, and state shifts in policy sometimes functioning to wipe out large swaths of investor capital. The U.S.'s comparative credibility is what's now at stake. If the U.S. government is willing to use procurement designations as political punishment -- publicly trumpeted as retaliatory, and also arbitrary because they ignore companies' legal protections -- then the risk premium on American AI starts looking less different from the risk premium on Chinese AI. Sources in the investment and venture community say this is the calculus of the memos now pinging between Wall Street, Washington, and San Francisco. The long and short? Summary execution of companies for political noncompliance makes for a poor environment for the kind of capital formation that funds frontier AI development at scale. The Anthropic-Pentagon story is huge not just because of the entertaining drama on X or the App-Store horserace between Claude and ChatGPT. It's huge because the capital that hangs in the balance is huge, and necessary, and because it tends to go where rules are predictable and exits are safe.
[39]
AI could be giving US lethal edge in Iran war - but there are dangers
Forget science fiction. The age of AI in war is here. Israel has used AI systems in Gaza to flag potential targets and help prioritise operations. The United States military reportedly used Anthropic's model, Claude, during its operation to abduct Nicolas Maduro from Venezuela. And even after Anthropic got into difficulties with the US administration over exactly how AI should be used in war, the US military still apparently used Claude in its attack on Iran. It is highly possible, experts say, that the missiles flying over Tehran today are being targeted by systems powered by AI. "AI is changing the nature of modern warfare in the 21st century. It is difficult to overstate the impact that it has and will have," says Craig Jones, a senior lecturer in political geography from Newcastle University. "It is a potentially terrifying scenario." Terrifying or not, it seems there's no going back. If you want a sense of the importance the US military places on AI, a good place to start is a memo sent by defence secretary Pete Hegseth, who styles himself Secretary of War, to all senior military leaders early this year. "I direct the Department of War to accelerate America's Military AI Dominance by becoming an 'AI-first' warfighting force across all components, from front to back," Mr Hegseth wrote. This is not an experiment, this is a command - to adopt AI quickly, and at scale. Or as Hegseth puts it: "Speed Wins". Yet the scenario in question is not the one that might first spring to mind. Yes, autonomy is increasing in some areas. In Ukraine, for example, there are drones capable of continuing a mission even after losing contact with a human operator. But we are not at the stage of autonomous killer robots stalking the battlefield. "We're not in the Terminator era just yet," says David Leslie, professor of ethics, technology and society at Queen Mary University of London.
The systems in which AI is being embedded - known as "decision support systems" in military jargon - are advisers which flag targets, rank threats and suggest priorities. AI systems can pull together satellite imagery, intercepted communications, logistics data and social media streams - thousands, even hundreds of thousands of inputs - and surface patterns far faster than any human team. The idea is that they help cut through the fog of war, allowing commanders to focus resources where they matter most, while potentially being more accurate than tired, overwhelmed, stressed human soldiers. This means they're not just a tool, says Dr Jones, but a new way of making decisions. "AI, as we see in our own lives, is more like an infrastructure," he says. "It's built into the system." "We have this ability to collect that surveillance that we've been doing for some years. "But now AI gives us the ability to act on that and to kill the leader of Iran and to take out serious adversaries and serious enemies and find them in improbable ways in which they may have not been found before." 'A very persuasive tool' Professor Leslie agrees that the new systems are extremely capable from a military perspective. "The race for speed is what's driving this uptake," he says. "Making decision-making cycles faster is what brings military advantage of lethality." An important feature of decision support systems is that the AI doesn't press the button. A human does. That has been the central reassurance in debates about military AI. There is always "a human in the loop". As OpenAI, the company which makes ChatGPT, put it after announcing a partnership to supply the Pentagon with AI: "We will have cleared forward-deployed OpenAI engineers helping the government, with cleared safety and alignment researchers in the loop."
OpenAI has also emphasised that it had secured agreement with the Pentagon that its technology would not be used in ways that cross three "red lines": mass domestic surveillance, direct autonomous weapons systems and high-stakes automated decisions. But even with a human in the loop, a question remains. When you're fighting a war, can a human really check each decision from an AI? When time is compressed and information is incomplete, what does "human oversight" really mean? "Humans are technically in the loop," says Dr Jones. "That doesn't mean, in my opinion, that they are in the loop enough to have effective decision-making power and oversight of exactly what's happened. The AI... is a very persuasive tool to people that make decisions." Or as Professor Leslie puts it: "We are really facing a potential scaled hazard of... rubber stamping, where because of the speed involved, you don't have active human, critical human engagement to assess the recommendations that are being put out by these systems." And then there's the question of AI's own fallibility. Testing by Sky News found that neither Claude nor ChatGPT could tell how many legs a chicken had, if the chicken didn't look as it expected. What's more, the AI insisted it was right, even when it was clearly wrong. The example came from a paper which illustrated dozens of examples of similar failures. "It's not a one-off example of animal legs," said lead author Anh Vo. "The problem is general across types of data and tasks," Vo added. The reason is that AI models don't really see the world in the human sense - they guess what's most probable based on past data. Most of the time, that kind of statistical reasoning is astonishingly effective.
The world is predictable enough that probabilities work. But some environments are by their very nature unpredictable and high stakes. We are testing the boundaries of this technology in the most unforgiving circumstances imaginable.
[40]
Anthropic resumes $200 million US Defense Department contract talks
Anthropic reportedly resumed talks with the U.S. Defense Department to prevent the government from designating the company a supply chain risk. Negotiations broke down after Anthropic refused to delete a contract clause prohibiting the analysis of bulk acquired data. The Defense Department threatened to cancel its $200 million contract and label Anthropic a supply chain risk, a designation typically reserved for Chinese companies. President Trump ordered government agencies to stop using Anthropic's technology, though a six-month phase-out period allowed the government to use Anthropic's AI tools to stage an air attack on Iran. The dispute highlights the escalating conflict between major AI developers over defense contracts and surveillance prohibitions. Anthropic's refusal to remove the clause underscores its concern about enabling mass surveillance, while OpenAI's subsequent contract with the Defense Department positions it as a more compliant vendor. The fallout could reshape the competitive landscape for government AI procurement. Anthropic CEO Dario Amodei stated in a memo that the Defense Department offered to accept the company's terms if it deleted the specific phrase about "analysis of bulk acquired data." Amodei said the phrase "was the single line in the contract that exactly matched" the scenario the company was "most worried about." He accused OpenAI CEO Sam Altman of spreading "just straight up lies" and suggested Anthropic's government fallout resulted from not giving "dictator-style praise to Trump." OpenAI announced its Defense Department contract shortly after Anthropic's issues surfaced. Altman stated he told the government Anthropic should not be designated a supply chain risk. Altman later posted on X that OpenAI would amend its deal to explicitly prohibit the use of its AI system for mass surveillance against Americans. In an all-hands meeting, Altman stated that OpenAI does not make operational decisions on military use. 
He cited the Iran strike and Venezuela invasion as examples where the company does not weigh in. Altman said he did not know the details of Anthropic's contract but thought Anthropic should have agreed to it if it matched OpenAI's terms. Anthropic signed a $200 million deal with the Defense Department in 2025. The company's Claude chatbot rose to the top of Apple's Top Free Apps leaderboard after OpenAI announced its Defense Department contract, beating out ChatGPT.
[41]
Anthropic chief back in talks with Pentagon about AI deal
San Francisco | Anthropic chief executive Dario Amodei is making a last-ditch attempt to strike a deal with the US Defence Department after the breakdown of negotiations last week left his company at risk of being frozen out of the military's supply chain. Amodei has been holding talks with Emil Michael, under secretary of defence for research and engineering, in a bid to iron out a contract governing the Pentagon's access to Anthropic's artificial intelligence models, according to multiple people with knowledge of the matter.
[42]
What does the US military's feud with Anthropic mean for AI used in war?
Tech policy professor who served in US air force explains how a feud between an AI startup and the US military illuminates ethical fault lines

Anthropic's ongoing fight with the Department of Defense over what safety restrictions it can put on its artificial intelligence models has captivated the tech industry, acting as a test of how AI may be used in war and the government's power to coerce companies to meet its demands. The negotiations have revolved around Anthropic's refusal to allow the federal government to use its Claude AI for domestic mass surveillance or autonomous weapons systems, but the dispute also reflects the messy nature of what happens when tech companies have their products integrated into conflict. The Pentagon this week declared Anthropic a supply chain risk for its refusal to agree to the government's terms, while Anthropic has vowed to challenge the designation in court. The Guardian spoke with Sarah Kreps, a professor and director of the Tech Policy Institute at Cornell University who previously served in the United States air force, about how the feud has played out.

You've worked for a while on problems around "dual use technology". What happens when there's a consumer technology that also gets used for classified or military purposes?

I've thought about this a lot because I was in the military and I was on the side of the military that was developing and acquiring new technologies. We were always getting criticism about why it was taking so long, and now watching what's happening I realize why it takes so long. What you would develop for classified and military contexts is very different from what Anthropic has developed for when I use Claude. The challenge for the military is that these technologies are so useful they can't wait until a military grade version is available.
They need to act quickly because of how valuable these tools are, but it's not surprising that they ran into cultural differences between not just an AI platform and the military, but an AI platform that has tried to cultivate a reputation as being more safety conscious.

One element in this feud is that Anthropic has branded itself as a safety-forward company, but then it did sign onto a deal with the military.

Yes, there is a way in which it's surprising that Anthropic would be surprised by where this ended up. Part of the challenge is that Anthropic seems to have made the decision a year or two ago that ChatGPT was going to be for individual users and Anthropic was going to try to corner the enterprise market. That means they're trying to do business with organizations, rather than trying to sell individual plans. The puzzle to me is that they were then doing business with the Pentagon and Palantir, which is in the business of using AI for what some people would say are questionable purposes. So that decision was surprising to me because it was very much at odds with the brand that Anthropic was trying to curate.

It seems like Anthropic was OK with a pretty wide use of its technology, but that there was a red line that they got to with domestic mass surveillance and lethal autonomous weapons.

There are a couple of possibilities. One is that some of this had to do with relationships between the people in Anthropic and the Trump administration, which led to a downward spiral of distrust. Second, there was the situation in Venezuela and then the politics around ICE activities. There is this question of what does it actually mean to be using these technologies lawfully? One person's definition of lawful might look very different from another's. The Pentagon's argument was, in part, that if there's a national defense issue we shouldn't have to call up Dario Amodei to get approval.
It does seem like there is an actual question here around what role private tech companies have in national security decision-making.

If you recall the case of the San Bernardino killer's iPhone, authorities were worried that this was a ticking bomb situation and they needed Apple to get into the phone. [In 2016, the FBI demanded Apple create a backdoor to grant them access to a mass shooter's phone. Apple refused on privacy grounds, resulting in the FBI seeking out an independent third party to hack into the device]. The difference here with Anthropic's AI is that once you hand this over to the military, you no longer need Anthropic's approval to use it as you see fit. It's the difference between hardware and software. You can repurpose this software and use it in ways that maybe weren't part of the explicit agreement, but now you can justify it on the basis of national security. Then Anthropic has lost all its leverage because it's in the hands of these national security professionals.

And Anthropic wouldn't be able to tell what it's even being used for, correct?

Yeah, exactly right. It goes into not just a black box, but Black Ops and classified systems that are closed off.

I've found it interesting this week that it seems like a lot of really longstanding questions on AI use in the military are coming to a head. You've been following these issues for a long time, what are you thinking about watching this current fight?

When I would hear the CEO of Anthropic talk, he would talk about these existential risks and the misappropriation of AI for bioterrorism. I always thought that those were either too distant or too out of reach. I thought this sort of more mundane case was more of a risk. There have also been people for a long time foreshadowing these questions about autonomous weapons. The challenge is how do you ever know whether there's actually a human in the loop.
This was a concern that Anthropic had - how do we know if these systems are being used in a fully autonomous way? The US says we are not going to use AI in a fully autonomous capacity, but it's not clear what that process looks like for ensuring that doesn't happen. This was some time coming, but I guess it was sort of inevitable that we would go in that direction, just because the technology has gotten more and more sophisticated. The fact of now being involved in a conflict just kind of accelerates those timelines.

We talk a lot about threats from AI and these red lines that people backed away from, but how is AI already being used in warfare?

You can see how it's extremely useful in a military setting. I did some work on the intel side and one of the challenges is not the lack of content, it's the signal to noise ratio. You have a huge volume of information but it can be really hard to connect the dots, and that's something that AI is so good at. You feed it large amounts of information and it generates outputs that help identify what the signal is. If you're looking for pattern recognition, AI is really good at pattern recognition. You can identify what the kind of correlates or characteristics are that you're looking for and then it can go out and identify things, say an Iranian naval vessel, based on what you've programmed it to identify. That's not been super controversial in some ways, because those targets are fairly concrete. Where people get more uncomfortable is in a setting where the US, for example, would do counter-terrorism strikes. You have an individual on the ground that doesn't have a lot of identifiable characteristics and so that is a much more precarious situation for AI where you'd really want to make sure you're triple-checking. He could be a combatant, he could be a civilian. It's not a naval vessel or surface to air missile, where it's harder to get that wrong.
[43]
Anthropic Took a Stand Against the Pentagon. Now It's Scrambling to Save Its Defense Business
Anthropic CEO Dario Amodei is attempting to repair the company's relationship with the U.S. military after a heated dispute with the Pentagon. Amodei has resumed negotiations with the Department of Defense in an effort to preserve Anthropic's defense work and prevent the company from being designated a "supply chain risk," a classification that could effectively shut it out of future military contracts, according to the Financial Times. The standoff marks one of the most visible clashes yet between Silicon Valley's AI developers and the U.S. government over how far military agencies should be allowed to push emerging AI technologies.

A growing rift over military AI

Anthropic and the U.S. government have maintained a working relationship in which the company provides generative AI tools to the Defense Department. Its chatbot Claude has reportedly been used in military operations, including the U.S. raid of Venezuela and recent strikes in Iran, The Verge reported.
[44]
Palantir CEO's rant about the Anthropic-Pentagon feud threatening his company was about a lot more than a dirty word | Fortune
AI "seems much worse for the math people than the word people," Peter Thiel tersely said in 2024. He likely wasn't anticipating that just two years later his Palantir co-founder, CEO Alex Karp, would use some decidedly flowery language to describe people he thought were stupid. "If Silicon Valley believes we are going to take away everyone's white-collar job ... and you're gonna screw the military -- if you don't think that's gonna lead to nationalization of our technology, you're retarded," Karp said while speaking at the a16z American Dynamism Summit. "You might be particularly retarded, because you have a 160 I.Q." Karp was commenting on the topic that has taken the AI world by storm: in what capacity do AI companies collaborate with the government? A closer look at the dust-up between the Pentagon and two separate companies, Anthropic and OpenAI, helps explain Karp's displeasure. Katherine Boyle, General Partner at a16z, moderated the breakout session, which was entitled "AI in Defense of the West." "If Silicon Valley believes we are going to take away everyone's white collar job -- meaning primarily Democratic-shaped people that you might grow up with, highly educated people who went to elite schools or went to schools that are almost elite for one party -- and you're going to sue the military. If you don't think that's going to lead to nationalization of our technology, you're retarded." Whoa. So what's bothering Mr. Karp? While Karp could have chosen less offensive language to make his point, he was touching on a raw nerve -- one that is acutely personal for Palantir. "You cannot have technologies that simultaneously take away everyone's job," he said, and then be perceived as screwing the military. That tension isn't abstract for Palantir. It could very well be a live operational crisis.
Companies including Anthropic, OpenAI, Google and xAI have all signed contracts with the Department of Defense, each with restrictions on whether their technologies can be used in settings that might violate their terms of service. The DOD has been in negotiations with AI companies to remove those restrictions and instead allow use of their tech for "all lawful purposes." Karp has little patience for companies that treat that ask as a moral red line: "There's a difference between U.S. military and surveillance," he said at the summit. "Despite what everyone thinks, Palantir is the anti-surveillance company," he said, pushing back on claims that the company named after an all-seeing surveillance device from Lord of the Rings is fundamentally about surveillance. Every technical expert knows this to be the case but the proverbial "person online" simply has the wrong idea, Karp argued, "so I end up in every conversation that I don't want to be in." Anthropic CEO Dario Amodei famously said he could not "in good conscience" support the "all lawful purposes" clause. Then, after hitting Anthropic with the threat of being deemed a military supply chain risk, the government penned a deal with OpenAI to use its tools in classified missions. (Anthropic is reportedly in talks with the Pentagon yet again, with the Pentagon confirming that Anthropic's Claude Opus was key to its preparations for the historic strike by the U.S. and Israeli military on Iran.) For Palantir, that sequence of events is not an abstraction -- it is a direct operational threat. Palantir's flagship AI Platform (AIP) relies on plugging best-in-class frontier models into its defense and intelligence workflows. Claude Opus is among the most capable of those models, prized for its reasoning depth and reliability in high-stakes environments. 
If Anthropic is blacklisted as a military supply chain risk -- or if its terms of service effectively bar it from the classified settings where Palantir operates -- Palantir would lose access to one of its most powerful AI engines. It would be forced to retool its platform around alternative models mid-contract, a costly and reputationally damaging disruption for a company whose entire brand promise is mission-critical reliability. "Again, there's a lot of subtlety here behind the curtain," Karp acknowledged. "I've been heavily involved in that subtlety -- what can be deployed, where it can be deployed." The stakes, Karp argued, go well beyond any single Pentagon contract or any single company's policy decision. "The danger for our industry," he warned, "is that you get a famous horseshoe effect where there's only one thing people agree on -- and that's that this is not paying the bills, and people in our industry should be nationalized." That populist convergence -- where left and right alike turn on tech -- becomes inevitable, in Karp's telling, if AI companies strip white-collar workers of their livelihoods while simultaneously refusing to serve the military. He was pointed about who those workers are: "Primarily Democratic-shaped people that you might grow up with -- highly educated people who went to elite schools, or went to schools that are almost elite, for one party." Those fears are already materializing at an economic scale that lends urgency to Karp's argument. Experts warn of an imminent AI doomsday scenario where white-collar workers' days are numbered -- a destabilizing force that would leave most employees jobless. These aren't merely panic-inducing ideas; they carry real-world consequences, like a viral essay from Citrini Research that triggered mass market upheaval. 
In Karp's view, the government would not allow AI companies to amass the power they already hold and still operate in a self-regulatory, non-governmental oversight capacity -- let alone dictate terms of use back to the government itself. "This is where that path is going," he said simply. The only way for companies like Palantir to retain their position, their contracts, and their access to the frontier AI models that power their platforms is to play by the government's rules when called upon. For Palantir, losing that seat at the table doesn't just mean bad optics. It means losing the technological inputs that make its core product work. It would be a dramatic reversal for a company that printed what Karp called just a month ago "one of the truly iconic performances in the history of corporate performance or technology" in Palantir's latest quarterly earnings.
[45]
A guide to the Pentagon's dance with Anthropic and OpenAI - The Economic Times
Late last month, Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic, the only company that had provided the Pentagon with artificial intelligence technologies for use on classified systems. If Anthropic did not allow the Pentagon to deploy these technologies for "all lawful uses," Hegseth said, he would sever ties with the San Francisco startup. The threat set off a chain of events that resulted in the Defense Department's labeling Anthropic a "supply chain risk," which would prevent all military contractors from using the company's technologies, and signing an agreement with OpenAI, its biggest rival. The negotiations were, to say the least, confusing.

How does the Pentagon use Anthropic's technology?

Anthropic's technologies are widely used inside the Defense Department because the startup agreed last year to integrate its systems with technology from Palantir, a data analytics company that is approved for classified operations. Separately from Anthropic's partnership with Palantir, the Pentagon also uses Anthropic's technology to analyze imagery and other intelligence data as part of a $200 million AI pilot program. Anthropic's technology is being used as U.S. military forces engage in a widening war against Iran, two people familiar with the technology said on the condition of anonymity. Google, OpenAI and Elon Musk's xAI are also part of the pilot program, but are not yet used on classified systems. Anthropic was a step ahead of its rivals thanks to its partnership with Palantir.

Why did the Pentagon get angry at Anthropic?

On Feb. 15, The Wall Street Journal reported that Anthropic had raised concerns with Palantir about the role its technologies played in the U.S. military operation to capture Venezuela's president, Nicolás Maduro.
The story inflamed earlier tensions, as Hegseth and others at the Pentagon argued that Anthropic was resisting the military's use of these AI systems. The Defense Department was already in talks with Anthropic to establish new contractual language that allowed the Pentagon to use the company's technologies for any lawful purpose. But Anthropic was reluctant to agree to those terms.

Why was Anthropic reluctant?

Anthropic wanted contractual language that prevented the Pentagon from using its technology with autonomous weapons or for mass surveillance of Americans. It argued that specific language was needed to ensure that the technologies were used only in ways that aligned with what they could "reliably and responsibly do." The Pentagon said private companies should not try to control how the military operated. On Feb. 24, Hegseth met with Anthropic's CEO, Dario Amodei, and said that if Anthropic failed to agree to the Pentagon's demands by 5:01 p.m. on the next Friday, he would designate the company a supply chain risk.

What does it mean to be a supply chain risk?

It means that a company's technology cannot be used by the Pentagon or any of its contractors in their work with the government. The designation is typically applied only to firms with ties to the government of China.

Did cooler heads prevail?

No. The company published a blog post saying it could not "accede" to the Pentagon. Minutes after the deadline passed, Hegseth deemed Anthropic a supply chain risk in a post to social media. He added that "no contractor, supplier or partner that does business with the United States military may conduct any commercial activity" with the company. But the Pentagon planned to continue to use Anthropic's technologies for up to six months as it arranged for alternatives. The Pentagon later sent a letter to Anthropic saying it had officially designated the company as a supply chain risk.

Does Hegseth have the power to do that?

A court will probably decide.
Anthropic has said it intends to sue the government, and legal scholars say a suit would most likely be successful. "Anthropic's case is very strong," said Alan Rozenshtein, a professor of law at the University of Minnesota. Legal scholars also say the Pentagon does not have the power to bar its contractors from commercial activity with the startup beyond just using its technology. For instance, it cannot prevent contractors from investing in Anthropic, they said. "The commercial activity language is flatly illegal," Rozenshtein said. That is an important point because Amazon and Google -- two of Anthropic's biggest investors -- are also Defense Department contractors. In a statement on Anthropic's website, Amodei said Anthropic was still in discussions with the Pentagon over their contract. But Emil Michael, chief of technology for the Defense Department, quickly responded on social media that there were "no active" negotiations between the two.

Why didn't the Pentagon just stop using Anthropic?

That would have been an easier solution to the dispute. "The correct response is to just cancel the contract and walk away," Rozenshtein said. Instead, the Pentagon appeared to make a political statement by labeling Anthropic a supply chain risk. "It seems like the Pentagon just does not like Anthropic's general political vibe and wants to destroy its entire business," said Dean Ball, a senior fellow at the Foundation for American Innovation who was previously a policy adviser for AI under President Donald Trump. "That is beyond the pale."

How did OpenAI get involved?

A day after Hegseth met with Amodei, OpenAI's CEO, Sam Altman, started his own talks with the Defense Department. Altman told the Pentagon that it should not give Anthropic the supply chain risk label because it would have a chilling effect on the department's relationship with the tech industry.
Like Anthropic, he said, OpenAI did not want its technologies used for mass surveillance of Americans or with autonomous weapons. But Altman and OpenAI also worked on their own contract with the Pentagon. Just hours after Anthropic missed its deadline, he announced that they had reached an agreement. OpenAI agreed to let the Pentagon use its AI systems for any lawful purpose. But OpenAI also said it had negotiated terms that allowed the company to uphold its safety principles by installing specific technical guardrails on its systems.

Can technical guardrails prevent AI from being used for mass surveillance?

No. The guardrails built into today's AI do not always work as they are designed. And even when these guardrails hold firm, there are many ways AI systems could still be used to feed surveillance or the use of autonomous weapons. Three days later, OpenAI announced that it had amended its agreement with the Pentagon. It added language saying its AI systems "shall not be intentionally used for domestic surveillance of U.S. persons and nationals." People following this odd contract shuffle argued that the Pentagon had made an agreement with OpenAI that it refused to make with Anthropic. This was another sign, they said, that the Pentagon's response to Anthropic was politically motivated.

Does the amendment uphold OpenAI's safety principles?

Maybe not. Legal experts point out that the Pentagon could inadvertently collect data about Americans as it worked to monitor foreigners and that it would still be allowed to analyze this data under the terms of the contract. A contract like this is also difficult for a private company to enforce, because a violation of the terms may not be obvious, Rozenshtein said. In other words, whether a technology has been used for mass surveillance is sometimes open to debate.
Even if the government breaches the contract, OpenAI can at most cancel service and sue for damages, but it cannot force the government to live up to its end of the bargain, Rozenshtein said. Altman and OpenAI also said the Pentagon had assured the company that its technology would not be used by defense intelligence agencies, including the National Security Agency. But OpenAI could, of course, sign a separate agreement that allows the NSA to use its technologies.

So, what does all this mean?

"This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems," Ball said. "What should the limitations be? And who gets to decide?" But he and other experts said this was not the best way to decide these questions. They say Congress should step in to set firmer laws. "Congress should be asking hard questions about this," said David Bader, a professor at the New Jersey Institute of Technology. "We need a deliberate bipartisan framework for the governance of AI."
[46]
Trump is using AI to fight his wars - this is a dangerous turning point | Chris Stokel-Walker
The technology most people use only as a chatty tool for daily tasks is reportedly aiding US military aggression. And there is not much we can do about it There are a lot of things that AI can do. It can sort out your shopping list, and it can keep your kids entertained when they're mutinous by spinning up a tailor-made bedtime story for them. It can make you more efficient at work, and can help our government operate more effectively. What is written about far less, and what we need to shout louder about now, are the risks inherent in the militarisation of AI. In the last three months Donald Trump's White House has reportedly used AI twice to effect regime change, or to - in the most recent case in Iran - get as close to doing so as possible, leaving it up to rank-and-file Iranians to finish the job. First, Anthropic's Claude AI model - which most people use as a slightly more discerning alternative to ChatGPT - was supposedly used both to plan and execute the snatching of Nicolás Maduro from his compound in Venezuela, but it's unclear how the model was used in detail. Then this weekend, we learn that the AI tool was used again, to parse through intelligence that aided the hugely damaging barrage of missiles that have rained down on Iran, apparently for identifying targets and running simulations. It's hard to overstate how significant both moments are. AI has been used in the planning and execution of military operations that have led to an unknown number of casualties, and roiled the Middle East. If that makes you feel uneasy, you're not alone. The CEO of Anthropic, Dario Amodei, has been embroiled in a very ugly, public spat with the US president after he refused to relax two "red lines" for Claude: that it should not be used for mass domestic surveillance, nor to build fully autonomous weapons that select and engage targets without meaningful human control.
OpenAI quickly swooped in and signed an agreement with the Pentagon, though it claims that the terms of its agreement mean that it actually has stronger protections than the ones Anthropic wanted. Regardless of the specific subclauses in the contract, it bears repeating: a tool that began public life as a chatty interface for summarising emails and helping you write a cover letter is now sitting somewhere along the chain that turns information into violence. It used to be that questions such as "Who should control AI and what happens if it gets used militarily?" were debated among academics at panels in the abstract. There were worries, but they felt remote because they hadn't come to fruition. When Maduro got swept up by special forces in January, and the bombs started dropping on Iran, apparently all with AI help, that calculus changed. The basic principles of armed conflict have been that you wield big scary weapons but never use them. They're for deterrence. The theory of mutually assured destruction meant that people shied away from pushing the button on nuclear bombs. (Worryingly, the early indications from war games scenarios are that AI decision-makers are trigger-happy with nuclear weapons.) Now that excuse is no more. More countries will use AI in their military planning and actions - rightly, because it's been shown to be effective, although there are obvious moral questions if AI is used to make military decisions. When military historians look back at what has happened in the last few months, it's easy to see them thinking the use of AI in this way will be similar to the nuclear weapons dropped on Japan: marking a moment where there was a clear before, and an unclear after. So what can we do about it? Very little. We should have had a blanket ban on the use of military AI. 
We've been creeping away from that for more than a decade now since Demis Hassabis took a principled stand and said he would only sell his company, DeepMind, to Google if it agreed not to allow the technology to be used militarily. Last year the company, now called Alphabet, quietly dropped its promise that it wouldn't use AI for weapons. And Trump's actions have loudly blown a hole in the idea. But now the international community needs to work hard to bring Trump back from the brink. Allies should put pressure on Trump's White House not just to be responsible in its use of AI militarily, but to accept binding constraints. That should include international commitments, transparent procurement standards and meaningful oversight, to which others should also sign up, rather than treating ethics as a brake on action. Because if the world's most powerful military normalises consumer-grade AI models as part of regime-change operations, we will be through the looking-glass on AI: we'll be in a whole new, altogether more dangerous world.
[47]
Anthropic's Dario Amodei Pushes To Salvage Pentagon Deal After Heated Negotiations: Report - Lockheed Martin (NYSE:LMT)
Anthropic CEO Dario Amodei is making a push to revive a Pentagon contract after talks collapsed last week, creating a supply chain designation risk for the privately held AI company.

Amodei Pushes to Salvage Pentagon Deal Amid Tensions

Amodei met with Emil Michael, undersecretary of Defense for Research and Engineering, to revive contract terms governing military access to Anthropic's AI models, the Financial Times reported. According to the report, the breakdown of the initial talks was marked by a heated exchange between Michael and Amodei, with the former accusing the Anthropic CEO of dishonesty and a "God complex." The negotiations ultimately collapsed after the two parties failed to agree on language that would prevent the use of Anthropic's AI for mass domestic surveillance, one of the company's stated red lines alongside lethal autonomous weapons. The report also noted that Amodei's memo to staff, first reported by The Information on Wednesday, in which he accused the Pentagon and OpenAI of spreading misinformation, is likely to complicate the ongoing negotiations. The dispute between Anthropic and the US government escalated when the Pentagon demanded that AI companies allow their technology to be used for any "lawful" purpose.

Defense Contractors Pull Back

The Pentagon controversy has propelled Claude to the top spot on Apple's App Store, causing outages reported by nearly 2,000 U.S. users.
[48]
Trump's strike on Iran and the new breed of AI wars means bombs can drop faster than the speed of thought | Fortune
AI has entered the war room, and it's not going anywhere anytime soon, according to experts. Despite President Donald Trump telling federal agencies and military contractors to cease business with Anthropic, the U.S. military reportedly used the company's AI model, Claude, in its attack on Iran, according to The Wall Street Journal. Now, some experts are raising concerns about the use of AI in war operations. "The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought," Dr. Craig Jones, author of The War Lawyers: U.S., Israel and the Spaces of Targeting, which examines the role of military lawyers in modern war, told The Guardian. In a conversation with Fortune, Jones, a lecturer at Newcastle University on war and conflict, said AI has vastly accelerated the "kill chain," compressing the time from initial target identification to final destruction. He said the U.S.-Israel strikes on Iran, which resulted in the death of Ayatollah Ali Khamenei, might not have happened absent AI. "It would have been impossible, or almost impossible, to do in that way," Jones told Fortune. "The speed it was carried out, and the magnitude and the volume of the strikes, I think are AI-enabled." The Pentagon has enlisted the help of AI companies to speed up and enhance war planning, entering a partnership with Anthropic in 2024 that came crumbling down last week thanks to disagreements over use of the company's AI model, Claude. But OpenAI quickly inked a deal with the Pentagon, and Elon Musk's xAI reached a deal to use the company's AI model, Grok, in classified systems. The U.S. Army also uses data-mining firm Palantir's software for AI-enabled insights for decision-making purposes. Jones said the U.S. Air Force has used the "speed of thought" as a benchmark for the pace of decision-making for years. 
He said the time elapsed from collecting intelligence, such as aerial reconnaissance, to executing a bombing mission could take up to six months during WWII and the Vietnam War. AI has significantly compressed that timeline. The key role of AI tools in the war room is to quickly analyze vast amounts of data. "We're talking terabytes and terabytes and terabytes of data," Jones said, "everything from aerial imagery, human intelligence, internet intelligence, mobile phone tracking, anything and everything." Dr. Amir Husain, co-author of Hyperwar: Conflict and Competition in the AI Century, said that AI is being used to compress the U.S. military's decision-making framework, known as the OODA loop -- an acronym for observe, orient, decide, and act. He said AI is already playing a significant role in observation, or in interpreting satellite and electronic data, tactical-level decision-making, and the "act" phase, specifically through autonomous drones that must operate without human guidance when signals are jammed. Some of those drones are actually copycats of Iran's own autonomous Shahed drones. AI has also appeared on other battlefields. Israel reportedly used AI to identify Hamas targets during the Israel-Hamas war. And autonomous drones are on the frontlines in the Russia-Ukraine war, with both Russia and Ukraine employing some variation of autonomous technology. However, Jones flagged a number of concerns around AI-enabled warfare. "The problem when you add AI to that is you multiply, by orders of magnitude I would argue, the degrees of error," Jones said. To be sure, Jones said, human error exists with or without AI technology, citing the 2003 U.S. invasion of Iraq as a conflict built upon flawed intelligence gathering. But he said AI could exacerbate such mistakes thanks to the magnitude of data the technology analyzes. 
AI warfare also raises a string of ethical questions, mainly around accountability, which Husain said the Geneva Conventions and the laws of armed conflict already require states to address. With AI blurring the lines between machine and human-level decision-making, he said the international community must ensure human responsibility is assigned to all actions on the battlefield. "The laws of armed conflict require us to blame the person," Husain said. "The person has to be accountable no matter what level of automation is used in the battlefield."
[49]
'AI-first' warfare: America's algorithmic edge in Operation Epic Fury - opinion
AI may have contributed to tactical successes in Tehran and elsewhere
In the rapidly unfolding conflict with Iran (known in the US as Epic Fury and in Israel as Roaring Lion), artificial intelligence has ceased to be a back-office analytical tool and has become operationally embedded in battlefield decision-making and war planning. Reports indicate that the US military deployed AI systems provided by the start-up Anthropic - specifically its large-language model "Claude" - to support intelligence analysis, target identification, and operational simulations during recent strikes on Iranian targets, even hours after US President Donald Trump ordered a federal ban on the technology. This extraordinary sequence of events - in which AI's role in kinetic operations outpaced public policy - reflects both the deep integration of advanced models into combat systems and the Pentagon's urgent push to field AI across its mission sets.
From intelligence support to operational acceleration
According to reports from The Wall Street Journal and other outlets, US Central Command utilized Claude in conjunction with conventional assets - including Tomahawk missiles, stealth aircraft, and AI-driven drones - to process vast quantities of battlefield and sensor data in real time. The AI model assisted commanders by synthesizing intelligence, prioritizing high-value targets, and running "what-if" scenarios that had traditionally taken hours of human analysis. Even as the Trump administration publicly denounced Anthropic's technology and gave federal agencies six months to phase it out, the reality of its use in an actual war zone underscores the operational value military planners see in these models. War planners reportedly resisted an immediate cutoff because Claude was already deeply embedded in mission-critical workflows, including through partnerships with firms such as Palantir that integrate commercial AI into secure military systems.
The tensions between technological utility and political leadership are stark. While commanders in the theater of war rely on the AI's ability to collapse sensor-to-commander timelines, civilian leadership is still grappling with the authority and ethics of accelerating such integration without clear oversight.
The Pentagon's 'AI-first' directive
The US Department of War (DoW) - the modern name for the Pentagon's operational arm - has formally embraced an 'AI-first' strategy, a blueprint to make AI foundational to how the US armed forces fight, gather intelligence, and organize operations across domains. The strategy memo directs the DoW to become an "AI-first warfighting force" that accelerates experimentation with frontier models, removes bureaucratic barriers to AI deployment, prioritizes asymmetric advantage in compute and data, and incorporates AI into core decision loops. Seven "pace-setting projects" highlighted in the strategy roadmap span disciplines from tactical swarm coordination to AI-augmented battle management agents - signaling that AI isn't only for intelligence support but is being woven into how campaigns are planned and executed. In practical terms, the strategy is not an abstract wish list. The Department has already rolled out GenAI.mil, a secure AI platform designed to bring generative models and analytics into both classified and unclassified networks, expanding AI access to millions of service members and civilian personnel.
Silicon Valley meets the war machine
Defense's rapid adoption of AI has provoked significant industry debate. Anthropic, initially an approved provider of AI models for classified missions, has resisted Pentagon demands to remove safeguards - particularly regarding autonomous weapons and mass surveillance - arguing that such uses exceed current safe boundaries for the technology.
Defense officials, meanwhile, have threatened contract cancellation and even labeling the company a "supply chain risk" to compel broader access, injecting political pressure into what was once a technical negotiation. These clashes have triggered internal tech industry pushback, including employee petitions opposing military AI use in certain domains, reflecting broader tensions over ethics, governance, and national security.
The new 'rules' of war
The US experience in the Iran conflict highlights a transformative moment in modern warfare: AI models are no longer confined to predictive maintenance or administrative support but are actively employed as force multipliers in combat scenarios. This shift carries profound implications for how wars are planned, fought, and governed - from tactical autonomy to strategic escalation. At the same time, scholars and policymakers caution that the rush to embed AI into lethal operations must be paired with robust ethical and legal frameworks, lest the technology outpace the norms that govern its use. The evolution of international law, rules of engagement, and accountability mechanisms will be tested as AI systems influence decisions once exclusively in human hands.
The AI arms race is on
The US military's deployment of AI in the Iran conflict, in the face of a political ban and amid an AI-first institutional strategy, reveals both the strategic imperatives and the dilemmas that advanced technology introduces into contemporary warfare. As AI becomes deeply woven into command cycles, intelligence synthesis, and operational planning, the United States is effectively pioneering a future where the boundary between human judgment and algorithmic decision support is continually renegotiated. The outcome of this negotiation among military planners, policymakers, industry partners, and international audiences will shape the rules of war in the AI era.
The writer is the head of the Institute for Applied Research in Responsible AI at HIT and of the Deep-Tech & National Security Project at the Institute for National Security Studies (INSS). She is also a former senior director at the National Security Council (NSC).
[50]
Big tech group supports Anthropic in Pentagon fight as investors push to de-escalate clash over AI safeguards - The Economic Times
A Big Tech industry group, including Amazon and Nvidia, raised concern over the United States Department of Defense considering labelling Anthropic a supply-chain risk. Investors are seeking a solution as tensions grow over military use of its AI. The dispute could affect customers, revenue growth and the company's possible IPO plans.
A Big Tech industry group consisting of major Anthropic backers Amazon and Nvidia on Wednesday expressed concern over the Pentagon's decision to declare the artificial intelligence company a supply-chain risk as other investors raced to contain fallout from the lab's fight with the U.S. Defense Department. In a letter dated Wednesday, the Information Technology Industry Council, whose members include Nvidia, Amazon.com, Apple and OpenAI, said, "We are concerned by recent reports regarding the Department of War's consideration of imposing a supply-chain risk designation in response to a procurement dispute." The letter does not name Anthropic. In recent days, CEO Dario Amodei has discussed the matter with some of Anthropic's major investors and partners, including Amazon.com CEO Andy Jassy, two of the people said. Venture capital firms, including Lightspeed and Iconiq, have also been in contact with Anthropic executives, two sources said. Lightspeed and Iconiq are also talking to other investors about potential solutions, according to one of the sources. Some investors are also reaching out to their contacts in the Trump administration in hopes of tamping down the tensions, two sources said. The discussions focus on avoiding a ban of Anthropic's AI from all Pentagon contractors, the people said. Anthropic and the Pentagon are continuing some talks in the meantime, one of the people said. Reuters was unable to determine what such talks entailed. U.S. President Donald Trump has called on Anthropic to help the government phase out its AI systems. The Pentagon declined to comment.
Investors, including Amazon, did not immediately respond to a request for comment. Anthropic and the Defense Department, which the Trump administration renamed the Department of War, have been in a months-long dispute over how the military can use its technology on the battlefield. The clash is widely seen as a referendum on how much control AI companies can have over the technology they've built, systems they hope can transform education, public services and other aspects of society. The Pentagon has pushed AI companies to drop red lines in favour of abiding by an all-lawful use clause. But Anthropic has refused to back down on bans for its Claude AI to power autonomous weapons and mass U.S. surveillance. Anthropic was first among peer AI companies to work with classified information through a supply deal via cloud provider Amazon. OpenAI said Friday that it reached its own classified deal with the Pentagon and that Anthropic should not be labelled a risk to the department. "Our red lines were the same as Anthropic's, which is at this point in time, no domestic surveillance and no use of AI for autonomous weapons," Connie LaRossa, who works on national security policy at OpenAI, said on a panel at an Aspen Digital conference in Northern California on Wednesday. "We are actually working to have the secure risk designation removed from Anthropic ... That shouldn't be applied to a U.S. industry counterpart with such an important tool."
Funding risks
During talks with Anthropic executives, investors have reiterated their support for the San Francisco-based AI lab while also expressing their desire to find a solution with the Pentagon, the seven people said. Some investors told Reuters they were frustrated that CEO Amodei antagonised rather than cultivated Pentagon officials. "It's an ego and diplomacy problem," one of the people briefed on the matter said.
At this point, some investors said, Amodei cannot be seen as capitulating to the administration without alienating a core group of employees and consumers who have flocked to Anthropic because of his stance. Amodei, who did not respond to a request for comment, has said Anthropic cannot "in good conscience accede to their request." While speaking to investors late Tuesday, Amodei said the company would "continue to work to figure out a solution with the DoW." The investors taking a stance on Pentagon talks are focused on helping Anthropic avoid being designated a "supply-chain risk" by the U.S. government, which, if implemented, could deliver a severe blow to the startup's fast-growing sales to business customers. Demand has risen for Anthropic's products, such as its chatbot Claude and coding assistant Claude Code. Claude was the most-downloaded free app in the Apple App Store on Monday, surpassing OpenAI's ChatGPT. Defense Secretary Pete Hegseth has said such a risk designation would require all government contractors to stop using Anthropic's technology in any part of their business. Anthropic has publicly pushed back on Hegseth's comments, saying he does not have the statutory authority to block use of its AI outside of defence contracts. The Pentagon did not answer a request for comment on Anthropic's claim. Anthropic also said Friday it would challenge any supply-chain risk designation in court. Still, some investors worry the spat could scare off potential customers who are looking to avoid being in the administration's crosshairs, generally, one of the people said. These worries come at a critical time for the startup. Anthropic has raised tens of billions of dollars on lofty expectations for its enterprise sales, which make up about 80% of Anthropic's revenue, the startup has said. The success of future share sales, including its widely anticipated initial public offering, hinges on Anthropic's continuing to build its business revenue. 
Anthropic is in the process of letting employees sell shares to investors, and the company has previously said there is no decision yet on its IPO. Anthropic's revenue run rate, or its projected annual revenue based on current data, is about $19 billion, one of the people said, up from $14 billion just a few weeks ago. The push from investors came as several U.S. government agencies started terminating their use of Anthropic's technology, with the State Department switching to rival OpenAI, following Trump's order on Friday to dump Anthropic within the next six months.
[51]
Anthropic Returns to Negotiations With Pentagon Over AI Guardrails | PYMNTS.com
The renewed talks follow a dispute that quickly drew attention across Silicon Valley and Washington. According to a Wednesday (March 3) report by Bloomberg, Amodei had been negotiating with Emil Michael, the U.S. undersecretary of defense for research and engineering, on a contract governing the Defense Department's access to Anthropic's AI systems. Those negotiations were derailed after the startup demanded assurances that its models would not be used for mass surveillance of Americans or deployed in autonomous weapons systems. The disagreement escalated further when Defense Secretary Pete Hegseth declared Anthropic a "supply-chain risk," a designation typically reserved for foreign adversaries. The move raised the possibility that the Pentagon could effectively blacklist the company from government contracts and sensitive technology deployments. This comes amid intensifying competition among AI companies to supply technology to the U.S. government. OpenAI last week announced that it had reached an agreement allowing the Pentagon to deploy its AI models within a classified network used by the Defense Department. The company said it is also working with defense officials to add safeguards around potential surveillance uses of the technology. The Pentagon dispute has also cast a spotlight on one of the AI industry's fastest-growing companies. Anthropic, which develops the Claude family of large language models, is now valued at roughly $380 billion and is approaching a $20 billion annual revenue run rate. Despite the tensions with the Defense Department, the company has continued to gain traction with consumers and enterprise customers. Bloomberg also reported that Anthropic's Claude recently topped Apple's download charts, reflecting a surge in interest from everyday users. At the same time, the episode is drawing attention to a new category of enterprise risk emerging in the AI economy. 
As PYMNTS reported, the Pentagon's designation of Anthropic as a potential supply-chain risk underscores how AI models are increasingly treated as critical infrastructure within technology supply chains, creating new vendor-dependency and governance challenges for organizations deploying advanced AI systems.
[52]
Iran war heralds era of AI-powered bombing quicker than 'speed of thought'
Speed and scale of US military's AI war planning raises fears human decision-making may be sidelined
The use of AI tools to enable attacks on Iran heralds a new era of bombing quicker than "the speed of thought", experts have said, amid fears human decision-makers could be sidelined. Anthropic's AI model, Claude, was reportedly used by the US military in the barrage of strikes as the technology "shortens the kill chain" - meaning the process of target identification through to legal approval and strike launch. The US and Israel, which previously used AI to identify targets in Gaza, launched almost 900 strikes on Iranian targets in the first 12 hours alone, during which Israeli missiles killed Iran's supreme leader, Ayatollah Ali Khamenei. Academics studying the field say AI is collapsing the planning time required for complex strikes - a phenomenon known as "decision compression", which some fear could result in human military and legal experts merely rubber-stamping automated strike plans. In 2024 the San Francisco-based Anthropic deployed its model across the US Department of War and other national security agencies to speed up war planning. Claude became part of a system developed by the war-tech company Palantir with the Pentagon to "dramatically improve intelligence analysis and enable officials in their decision-making processes". "The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought," said Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains. "So you've got scale and you've got speed, you're [carrying out the] assassination-style strikes at the same time as you're decapitating the regime's ability to respond with all the aerial ballistic missiles. That might have taken days or weeks in historic wars. [Now] you're doing everything at once."
The latest AI systems can rapidly analyse mountains of information on potential targets, from drone footage to telecommunications interceptions, as well as human intelligence. Palantir's system uses machine learning to identify and prioritise targets and recommend weaponry, accounting for stockpiles and previous performance against similar targets. It also uses automated reasoning to evaluate legal grounds for a strike. "This is the next era of military strategy and military technology," said David Leslie, professor of ethics, technology and society at Queen Mary University of London, who has observed demonstrations of AI military systems. He also warned that reliance on AI can result in "cognitive off-loading". Humans tasked with making a strike decision can feel detached from its consequences because the effort to think it through has been made by a machine. On Saturday 165 people, many of them children, were killed in a missile strike that hit a school in southern Iran, according to state media. It appeared to be close to a military barracks and the UN called it "a grave violation of humanitarian law". The US military has said it is looking into the reports. It is not known what AI systems, if any, Iran has embedded into its war-fighting machine, although it claimed in 2025 to use AI in its missile-targeting systems. Its own AI programme, hampered by international sanctions, appears negligible by contrast with the AI superpowers of the US and China. In the days before the Iran strikes, the US administration had said it would banish Anthropic from its systems after it refused to allow its AI to be used for fully autonomous weapons or surveillance of US citizens. But it remains in use until it is phased out. Anthropic's rival, OpenAI, quickly signed its own deal with the Pentagon for military use of its models. "The advantage is in the speed of decision-making, the collapsing of planning from what might have taken days or weeks before to minutes or seconds," said Leslie.
"These systems produce a set of options for human decision makers but [they've] got a much narrower time band ... to evaluate the recommendation." "The deployment of AI is expanding," said Prerana Joshi, research fellow at the Royal United Services Institute, a defence thinktank. "It is being done across countries' defence estates ... across logistics, training, decision management, maintenance." She added: "AI is a technology that will allow decision makers, and anyone in that chain, to improve the productivity and efficiency of what they do. It's a way of synthesising data at a much faster pace that is helpful to decision makers."
[53]
The Anthropic Imbroglio: Who Stands to Lose What and Why
Both sides have lots to lose, and it remains to be seen whether the issue is brushed under the carpet or whether Trump's vengeance will weaken the AI hegemony. When the US government officially designated Anthropic a supply-chain risk, the first question we asked was how does Dario Amodei get his company out of this mess? However, two days later, we are wondering whether the Trump administration may just be the one needing help to sweep the entire matter under the carpet. Look at what is on the table. More than $60 billion in investor wealth that went into Anthropic. Continuation of a military deal that has been of considerable value to (and is helping even now, per some reports) the US-Israel alliance in the Iran war. A real possibility of losing out on an AI hegemony that the Trump White House is eagerly chasing. Will President Trump want his Department of Defense (DoD) to bust a part of the trifecta of tech companies that can help the US maintain its hegemony on all things AI and data? We think not, which is why reports of an apology from Amodei make sense. He reportedly told the Economist he wanted to apologise for the internal memo where he had criticised Trump. Of course, the DoD may not be convinced of Amodei's intent, given that he also shot off a formal response to them confirming legal recourse against their "legally unsound" move. That statement came within hours of the US administration's move to designate Anthropic a supply-chain risk for the American government, while the attempt to apologise came later. At this point in time, Amodei appears to be taking two steps forward and one step back. He is confident that the government action "applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts." At the same time, he also wants to de-escalate.
For now, Anthropic is also getting the full support of the OG Big-3 - Microsoft, Amazon and Google - who have implied that despite the so-called sanctions, Anthropic would remain available to non-defence customers. Microsoft was off the blocks first to assure Anthropic that its models would continue to be available to Microsoft customers despite the Trump administration's Department of War (a sign of warmongering by this regime) escalating its battle with the AI startup. The company, which sells everything from its Office suite to Azure cloud to federal agencies, will continue making Anthropic models available within its own products as well as to Microsoft customers. "Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers -- other than the Department of War -- through platforms such as M365, GitHub, and Microsoft's AI Foundry, and that we can continue to work with Anthropic on non-defence related projects," a Microsoft spokesperson said in a statement first reported by CNBC. Close on the heels of this response, Google followed suit. The company sells cloud computing, AI and productivity tools to several government agencies as well. "We understand that the Determination does not preclude us from working with Anthropic on non-defence related projects, and their products remain available through our platforms, like Google Cloud," a Google spokesperson was quoted as saying. Right on cue, CNBC also reported that Amazon customers using AWS would continue to use Claude for their non-defence workloads. Are these companies echoing what Amodei had said earlier while vowing to battle the Trump regime? If so, can we call it a show of strength by Big Tech to a President who has sought to dismantle their authority?
"With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts. Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts" - Dario Amodei
Just so that readers are clear about how Trump woke up one day and took potshots at Anthropic, a company that had been collaborating with the White House and several government bodies including the Pentagon, here is what he put out via his Truth Social account: The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars! That decision belongs to your commander-in-chief, and the tremendous leaders I appoint to run our Military. The Leftwing nut jobs at Anthropic have made a disastrous mistake trying to strong-arm the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting American lives at risk, our Troops in danger, and our National Security in jeopardy. Therefore, I am directing every Federal Agency in the United States Government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again! There will be a six month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow. The tone and tenor of the post clearly indicate that Trump's audience isn't Anthropic but all tech companies.
He is invoking his rights as an elected leader to claim technological prowess as a matter of right, for actions that are hardly supervised by Congress or the Senate. So, can Amodei be an obedient boy at least till such time as the court decides on the DoD's unilateral action? By the looks of it, Anthropic does not want the issue to drag on, as Amodei reiterated that they have had productive conversations with the Department of War for several days and thereafter apologised for the leak of his internal memo, claiming that it wasn't intentional and stating that "it is not in the interest to escalate the situation." Of course, opinion is divided on whether Amodei grovelled in his apology note, in which he apologised for the memo's tone, called it "a difficult day for the company" and said the memo did not reflect his "careful or considered views." He finished off by saying Anthropic's top priority was to ensure that American soldiers and security experts had access to important tools in the middle of the ongoing major combat operations. What we can say confidently is that Anthropic is following Trump's position of delivering service during the "changeover" phase, in which OpenAI will take over AI support to the government departments under the current regime. Amodei said the company would continue to provide its models to the DoW at "nominal cost" for "as long as necessary to make that transition." So, that's settled then. Now all that is left to be seen is whether Anthropic actually takes the administration to court in Washington or waits for the dust to settle. Legal experts argue that the Pentagon's broad discretionary powers on national security matters make such a lawsuit tough to win, because the judges might have to second-guess the Trump administration on what it would like to designate as a "national security issue." What other options does Anthropic have?
Well, if one were to go by the invitation that London Mayor Sadiq Khan has sent to Amodei to expand in London, it could be leverage of sorts. Maybe some more adventurous leaders will do the same in the near future. Meanwhile, consumer downloads in the US for Claude's mobile app have outgrown those of ChatGPT, at around the same time that ChatGPT uninstalls have also grown. Will Trump care to smell the coffee? Or will Amodei decide to call a truce for now so he can fight another day, perhaps?
[54]
AI emerges as key player in modern warfare - The Korea Times
U.S. Department of War and Anthropic logos are seen in this illustration taken Sunday. Reuters-Yonhap
Artificial intelligence (AI) has moved closer to the center of modern warfare, as evidenced by its role in the recent U.S.-Israeli military strike on Iran. No longer confined to serving as a purely analytical tool, AI functioned as an operational support layer that helped compress the time between intelligence gathering and battlefield execution. According to U.S. media reports, the U.S. military used Anthropic's AI model Claude for "intelligence assessments, target identification and simulating battle scenarios" during the massive joint U.S.-Israel strikes on Iran. Palantir's Gotham data platform is said to have played a key role in pinpointing key military facilities of Iran's Islamic Revolutionary Guard Corps and its leadership hideouts. In practice, when Palantir organized and summarized vast volumes of defense-related data from satellites, signals intelligence and other classified sources, Claude then supported commanders by using that information to compare and analyze different operational scenarios. Experts say the episode underscores a broader trend: AI's role in military applications is poised to expand further, driven by its ability to accelerate decision-making and enhance operational precision. "The recent case shows that AI has become so central to modern warfare that it is no exaggeration to call this an 'AI war,'" said Kim Gi-il, professor of military studies at Sangji University. Choi Byoung-ho, a professor at Korea University's Human-Inspired AI Research Lab, also noted that AI technology is likely to be adopted across the full spectrum of military operations, ranging from intelligence analysis to direct combat operations. "It's most likely that Claude was used primarily to analyze information, process and summarize data, and then report up to the stage right before a decision is made," Choi said.
"We'll reach a point where, when a human orders an agentic AI to attack, it could draw up an operations plan on its own, select the appropriate weapons, choose specific targets and carry out the actual weapons deployment -- what Anthropic seems to have rejected (in this case). Technically it is already possible, though the error margins are still quite large, and the technology will eventually get there." For Korea, the U.S. case highlights structural gaps, with domestic defense companies arguing that standards defining "defense AI" remain ambiguous and that access to sensitive military data, which are essential for training and deployment, is limited. Meanwhile, the military seeks systems ready for immediate operational use, creating friction between urgency and capability. "(Military) tends to have little real understanding of the maturity of private sector technology or the constraints companies are facing, and that disconnect is creating serious friction. Expanding points of contact and closing that gap in speed and expectations is one of the biggest challenges for Korea's defense AI today," Kim said. Choi noted that the Iran strike is a preview of choices that Korea will face as the country seeks to build its own foundation models, which would also be applied to defense. "The fact that a foundation model was used in a war means it is really efficient. Thus, (Korea) will probably adapt its models to be used in war as well," he said. At the same time, experts warned that military adoption has outpaced global governance. "Military and ethical positions, values and even ideological perspectives are now colliding. There needs to be an international agreement, some kind of normative framework or protocol, governing the military use of defense AI, but at present, such standards are virtually nonexistent," Kim said. 
Choi also noted that discussions on how countries can prevent foundation models developed by big tech firms in the U.S., China and elsewhere from harming humanity are essential, but would be hard to achieve imminently. "At the international level, there needs to be a U.N.‑style convention that restricts these uses, but the problem is that Donald Trump has already torn down much of that framework," he said. "So meaningful international solidarity is effectively not in place. Someone will have to rebuild that system of global cooperation and sanctions from scratch because Trump dismantled it, and that is likely to be a long way off."
[55]
Anthropic back in talks with Pentagon -- days after CEO said he...
Anthropic is back in "last-ditch" talks with the Pentagon to resolve a bitter dispute over AI safeguards -- days after CEO Dario Amodei claimed the clash stemmed partly from its refusal to give "dictator-style praise" to President Trump, according to reports. Amodei has been holding discussions with Emil Michael, the War Department's undersecretary for research and engineering, as part of a "last-ditch effort" to reach a contract governing the military's use of the company's AI models, the Financial Times reported on Thursday. Talks have reportedly resumed just days after Amodei circulated a 1,600-word memo to staff that accused rival OpenAI of concocting "just straight up lies" about its disputes with the Pentagon over surveillance and autonomous weapons. The Anthropic boss also told employees in the Friday memo he believes the administration's animus stems from the fact that he declined to "donate to Trump," tech news site The Information reported earlier. "The real reasons DoW and the Trump admin do not like us is that we haven't donated to Trump (while OpenAI/Greg have donated a lot)..." Amodei wrote, referencing Greg Brockman, OpenAI's president and co-founder. A deal would allow the Pentagon to continue using Anthropic's technology and could help the company avoid being formally designated a "supply chain risk," a step threatened by Defense Secretary Pete Hegseth that would force firms in the military supply chain to cut ties with the startup. Hegseth has yet to make the designation. Amodei, who donated to failed Democratic presidential nominee Kamala Harris, blasted what he described as dishonest messaging from OpenAI and the Pentagon, writing in the memo: "I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it."
He added that "a lot of OpenAI and DoW messaging just straight up lies about these issues or tries to confuse them," and insisted that "it is false that 'OpenAI's terms were offered to us and we rejected them.'" The memo was sent just as OpenAI announced it would provide AI services to the Pentagon following the breakdown of negotiations between Anthropic and the Trump administration. The administration ordered all federal agencies to halt use of Anthropic's services -- prompting defense contractor Lockheed Martin to follow suit. Amodei, who has been urged by investors to make peace with the Trump administration, also took aim at OpenAI's approach to military safeguards, claiming the company's Pentagon deal relies on protections that are "maybe 20% real and 80% safety theater." He argued the Pentagon rejected stronger safeguards proposed by Anthropic while accepting weaker ones from OpenAI. The Anthropic boss also accused OpenAI CEO Sam Altman of trying to undercut his company while striking his own Pentagon deal, writing that Altman was "presenting himself as someone who wants to 'set the same contract for everyone in the industry'" while "behind the scenes" working with the Department of War to replace Anthropic "the instant we are designated a supply chain risk." The talks follow a heated breakdown in negotiations between Amodei and Pentagon officials over language Anthropic wanted included in the contract to block the use of its AI for mass domestic surveillance and fully autonomous weapons. Meanwhile, OpenAI is scrambling to add language in its contract with the Pentagon that would impose additional safeguards designed to prevent the use of its technology to spy on American citizens, according to the FT.
The ChatGPT maker has already revised contract language to prohibit "intentional," "deliberate" or "targeted" surveillance of US citizens and is working to add further protections during a three-month implementation period, according to people familiar with the talks. The effort comes after rival Anthropic refused to accept similar contract terms over concerns about domestic surveillance and autonomous weapons, prompting the Pentagon to pursue an agreement with OpenAI instead, the FT reported. The Post has sought comment from the White House, OpenAI, Anthropic and the Department of War.
[56]
Anthropic CEO back in talks with Pentagon over AI deal- FT By Investing.com
Investing.com-- Anthropic CEO Dario Amodei spoke with U.S. Department of War officials as part of a last-ditch effort to hash out a contract for the use of its artificial intelligence models, the Financial Times reported on Wednesday. Amodei has been holding talks with Emil Michael, under-secretary of defence for research and engineering, the FT report said, citing multiple people with knowledge of the matter. The report comes just days after talks between Anthropic and the Pentagon broke down, with defense secretary Pete Hegseth threatening to designate the AI startup as a supply chain risk. Disagreements were largely over terms that Anthropic felt were essential to prevent its AI from being used for mass domestic surveillance and lethal autonomous weapons. In an internal memo to Anthropic employees, Amodei wrote that disagreements with the Pentagon stemmed from the "analysis of bulk acquired data," which the startup found suspicious. He also claimed that Anthropic had been dropped because the company had not provided "dictator-style" praise to U.S. President Donald Trump, reports showed. Anthropic-- which is backed by several major U.S. tech firms, including Amazon and Alphabet-- had first signed a $200 million deal with the Pentagon in July 2025, with its AI models being the first to be used with the defense department's classified data. Rival OpenAI had announced a contract with the Pentagon shortly after talks with Anthropic broke down. Reports showed OpenAI also pursuing a contract with the North Atlantic Treaty Organization.
[57]
Anthropic investors grow frustrated with CEO after feds ban AI startup
Some Anthropic investors are growing frustrated with CEO Dario Amodei's combative stance toward the Trump administration -- even as defense contractor Lockheed Martin said it will comply with the government's ban on the AI startup, according to reports. Investors have privately complained that Amodei has antagonized Pentagon officials rather than working to smooth relations as the dispute escalated. Those backers have urged Anthropic to find a way to contain the fallout -- even as they continue to support the company's broader stance -- because they fear Amodei's posture could worsen tensions and deepen the risk of wider business blowback tied to the Pentagon fight, Reuters reported. "It's an ego and diplomacy problem," one person briefed on the discussions told the outlet. Earlier this week, The Post reported that oddball blog posts written by Anthropic researcher and in-house "philosopher" Amanda Askell had resurfaced after President Trump blasted the AI startup as "woke" and "radical left" while announcing a ban on the company serving federal agencies. The posts -- including one comparing eating meat to "ritual cannibalism" and another criticizing incarceration -- fueled concerns among some officials in Washington about the political leanings and ideological influences shaping the company behind the Claude chatbot. The clash centers on Anthropic's refusal to drop safeguards that prevent its Claude AI from being used for autonomous weapons or mass US surveillance. The tensions are already rippling through the defense industry. Lockheed Martin said it would comply with the government's directive to phase out Anthropic's technology, and other defense contractors are expected to follow suit if the startup is formally designated a "supply-chain risk." "We will follow the president's and the Department of War's direction," Lockheed Martin told The Post in a statement when asked about its Anthropic use following the moves by the Trump administration.
"We expect minimal impacts," the company said, adding that it doesn't depend on any single AI vendor "for any portion of our work." Reuters quoted lawyers close to government contractors as saying that they anticipate other defense firms following Lockheed's lead. "Most companies that do significant business with the government are hyper-aware of what the US government wants and they're likely already taking steps to cleanse their supply chains of Anthropic," Franklin Turner, an attorney who specializes in government contracts, told Reuters. "Regardless of the legal justification, I think the threat is the point ... it has already done harm, significant harm to the company," he added, referring to Anthropic. A spokesperson for L3Harris declined to comment. The Post has sought comment from Anthropic, General Dynamics and Raytheon parent RTX. The AI startup is backed by a who's who of tech and finance, including Amazon, Google, Microsoft and Nvidia, as well as venture investors such as Lightspeed Venture Partners, Iconiq Capital and Coatue. Amazon CEO Andy Jassy has also spoken with Amodei about the dispute in recent days, people familiar with the matter told Reuters, though it remains unclear what position he took during those conversations. Lightspeed and Iconiq have also been in contact with Anthropic executives, Reuters reported. The Post has sought comment from Amazon, Lightspeed and Iconiq. The dispute escalated in late February after the Pentagon pushed AI companies to agree to an "all lawful use" clause that would allow the military to deploy their technology without carve-outs. Anthropic refused, maintaining safeguards that prohibit its Claude AI from being used for fully autonomous weapons or mass domestic surveillance. The standoff intensified last week when the Trump administration ordered federal agencies to stop using Anthropic's technology and begin phasing it out within six months. 
At the same time, Defense Secretary Pete Hegseth moved to label the startup a potential "supply-chain risk," a designation that could bar government contractors from using its tools. In the days that followed, agencies and contractors began scrambling to comply with the directive. OpenAI, a rival to Anthropic, has emerged as an early beneficiary of the fallout. The company said last week it had secured its own classified agreement with the Pentagon and publicly argued that Anthropic should not be labeled a supply-chain risk -- even as critics warn the dispute could push AI firms to relax safeguards in order to win lucrative defense contracts. After the administration ordered departments to phase out Anthropic's technology, the State Department moved to replace its internal "StateChat" system with an OpenAI model. The Post has sought comment from the Pentagon and the White House.
[58]
Palantir faces challenge to remove Anthropic from Pentagon's AI software
NEW YORK, March 4 (Reuters) - Palantir is the latest company to face the painful task of unwinding from Anthropic in the wake of the AI lab's dispute with the Pentagon over safety guardrails, raising questions about a key military software platform. Palantir's Maven Smart Systems - a software platform that supplies militaries with intelligence analysis and weapons targeting - uses multiple prompts and workflows that were built using Anthropic's Claude Code, according to two people familiar with the matter. U.S. President Donald Trump last week ordered the government to stop working with Anthropic after the AI lab reached an impasse in its row with the Pentagon over whether its policies could constrain autonomous weapons and government surveillance. Palantir, which holds Maven-related contracts with the Defense Department and other U.S. national security agencies that have a potential value of more than $1 billion, will have to replace Claude with another AI model and rebuild parts of its software, one of the sources said. Reuters could not determine how long this process would take. Defense Secretary Pete Hegseth has suggested the change must be immediate, stating last week: "Effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity" with Anthropic. The Pentagon, Anthropic and Palantir declined to comment. Palantir CEO Alex Karp weighed in on the Pentagon's dispute on Tuesday without naming Anthropic, stating that Silicon Valley companies that claim AI will take white-collar jobs and also "screw the military" could lead toward "the nationalization of our technology," according to his comments made at a defense tech conference in Washington, which were posted on X. Anthropic's role inside Maven underscores the messy and potentially costly challenge facing the Pentagon, other government agencies and U.S.
companies as they face unwinding ties with a pivotal AI supplier that has become deeply embedded across public and private-sector systems. U.S. defense contractors, like Lockheed Martin, are expected to follow the Pentagon's order to purge Anthropic's prized AI tools from their supply chains, government contracting and technology attorneys said, even though the Trump administration's ban on their use may fail in court. Maven is the Pentagon's flagship artificial-intelligence program, designed to ingest data from multiple sources to identify military points of interest and speed up intelligence analysis and targeting decisions. The system has played a role in recent U.S. military operations. Reuters could not immediately determine whether the software platform was used during the January raid in Venezuela that captured former President Nicolas Maduro, or during the recent strikes on Iran. Palantir's software has become deeply embedded in the Pentagon's drive to integrate artificial intelligence into military operations, a position that has elevated the company from a niche intelligence contractor into a core supplier for U.S. defense modernization efforts and helped propel its market value to around $350 billion. (Reporting by David Jeans in New York and Mike Stone in Washington; Editing by Joe Brock and Matthew Lewis)
[59]
Big tech group supports Anthropic in Pentagon fight as investors push to de-escalate clash over AI safeguards
SAN FRANCISCO, March 4 (Reuters) - A big tech industry group consisting of major Anthropic backers Amazon and Nvidia on Wednesday expressed concern over the Pentagon's decision to declare the artificial intelligence company a supply-chain risk as other investors raced to contain fallout from the lab's fight with the U.S. Defense Department. In a letter dated Wednesday, the Information Technology Industry Council, whose members include Nvidia, Amazon.com, Apple and OpenAI, said, "We are concerned by recent reports regarding the Department of War's consideration of imposing a supply-chain risk designation in response to a procurement dispute." The letter does not name Anthropic. In recent days, CEO Dario Amodei has discussed the matter with some of Anthropic's major investors and partners, including Amazon.com CEO Andy Jassy, two of the people said. Venture capital firms including Lightspeed and Iconiq have also been in contact with Anthropic executives, two sources said. Lightspeed and Iconiq are also talking to other investors about potential solutions, according to one of the sources. Some investors are also reaching out to their contacts in the Trump administration in hopes of tamping down the tensions, two sources said. The discussions focus on avoiding a ban of Anthropic's AI from all Pentagon contractors, the people said. Anthropic and the Pentagon are continuing some talks in the meantime, one of the people said. Reuters was unable to determine what such talks entailed. U.S. President Donald Trump has called on Anthropic to help the government phase out its AI systems. The Pentagon declined to comment. Investors including Amazon did not immediately respond to a request for comment. Anthropic and the Defense Department, which the Trump administration renamed the Department of War, have been in a months-long dispute over how the military can use its technology on the battlefield.
The clash is widely seen as a referendum on how much control AI companies can have over the technology they've built, systems they hope can transform education, public services and other aspects of society. The Pentagon has pushed AI companies to drop red lines in favor of abiding by an all-lawful use clause. But Anthropic has refused to back down on bans for its Claude AI to power autonomous weapons and mass U.S. surveillance. Anthropic was first among peer AI companies to work with classified information through a supply deal via cloud provider Amazon. OpenAI said Friday that it reached its own classified deal with the Pentagon and that Anthropic should not be labeled a risk to the department. "Our red lines were the same as Anthropic's, which is at this point in time, no domestic surveillance and no use of AI for autonomous weapons," Connie LaRossa, who works on national security policy at OpenAI, said on a panel at an Aspen Digital conference in Northern California on Wednesday. "We are actually working to have the secure risk designation removed from Anthropic ... That shouldn't be applied to a U.S. industry counterpart with such an important tool." FUNDING RISKS During talks with Anthropic executives, investors have reiterated their support for the San Francisco-based AI lab while also expressing their desire to find a solution with the Pentagon, the seven people said. Some investors told Reuters they were frustrated that CEO Amodei antagonized rather than cultivated Pentagon officials. "It's an ego and diplomacy problem," one of the people briefed on the matter said. At this point, some investors said, Amodei cannot be seen as capitulating to the administration without alienating a core group of employees and consumers who have flocked to Anthropic because of his stance. Amodei, who did not respond to a request for comment, has said Anthropic cannot "in good conscience accede to their request." 
While speaking to investors late Tuesday, Amodei said the company would "continue to work to figure out a solution with the DoW." The investors taking a stance on Pentagon talks are focused on helping Anthropic avoid being designated a "supply-chain risk" by the U.S. government, which, if implemented, could deliver a severe blow to the startup's fast-growing sales to business customers. Demand has risen for Anthropic's products such as its chatbot Claude and coding assistant Claude Code. Claude was the most-downloaded free app in the Apple App Store on Monday, surpassing OpenAI's ChatGPT. Defense Secretary Pete Hegseth has said such a risk designation would require all government contractors to stop using Anthropic's technology in any part of their business. Anthropic has publicly pushed back on Hegseth's comments, saying he does not have the statutory authority to block use of its AI outside of defense contracts. The Pentagon did not answer a request for comment on Anthropic's claim. Anthropic also said Friday it would challenge any supply-chain risk designation in court. Still, some investors worry the spat could scare off potential customers who are looking to avoid being in the administration's crosshairs generally, one of the people said. These worries come at a critical time for the startup. Anthropic has raised tens of billions of dollars on lofty expectations for its enterprise sales, which make up about 80% of Anthropic's revenue, the startup has said. The success of future share sales, including its widely anticipated initial public offering, hinges on Anthropic's continuing to build its business revenue. Anthropic is in the process of letting employees sell shares to investors, and the company has previously said there is no decision yet on its IPO. Anthropic's revenue run rate, or its projected annual revenue based on current data, is about $19 billion, one of the people said, up from $14 billion just a few weeks ago. 
The push from investors came as several U.S. government agencies started terminating their use of Anthropic's technology, with the State Department switching to rival OpenAI, following Trump's order on Friday to dump Anthropic within the next six months. (Reporting by Deepa Seetharaman and Krystal Hu in San Francisco; additional reporting by Mike Stone in Washington D.C. and Kenrick Cai in San Francisco; editing by Kenneth Li and Nick Zieminski) By Deepa Seetharaman, Karen Freifeld, Krystal Hu and Jeffrey Dastin
[60]
Anthropic investors push to de-escalate Pentagon clash over AI safeguards, sources say
SAN FRANCISCO, March 4 (Reuters) - Some Anthropic investors are racing to contain fallout from the AI research lab's dispute with the Pentagon, seven people familiar with the matter said, for fear that an ongoing spat could devastate the company's business. In recent days, CEO Dario Amodei has discussed the matter with some of Anthropic's major investors and partners, including Amazon.com CEO Andy Jassy, two of the people said. Venture capital firms including Lightspeed and Iconiq have also been in contact with Anthropic executives, two sources said. Some investors are also reaching out to their contacts in the Trump administration in hopes of tamping down the tensions, two sources said. The discussions focus on avoiding a ban of Anthropic's AI from all Pentagon contractors, the people said. Anthropic and the Pentagon are continuing some talks in the meantime, one of the people said. Reuters was unable to determine what such talks entailed. U.S. President Donald Trump has called on Anthropic to help the government phase out its AI systems. The Pentagon and investors including Amazon did not immediately respond to a request for comment. Anthropic and the Defense Department, which the Trump administration renamed the Department of War, have been in a months-long dispute over how the military can use its technology on the battlefield. The clash is widely seen as a referendum on how much control AI companies can have over the technology they've built, systems they hope can transform education, public services and other aspects of society. The Pentagon has pushed AI companies to drop red lines in favor of abiding by an all-lawful use clause. But Anthropic has refused to back down on bans for its Claude AI to power autonomous weapons and mass U.S. surveillance. Anthropic was first among peer AI companies to work with classified information through a supply deal via cloud provider Amazon. 
OpenAI said Friday that it reached its own classified deal with the Pentagon and that Anthropic should not be labeled a risk to the department. FUNDING RISKS During talks with Anthropic executives, investors have reiterated their support for the San Francisco-based AI lab while also expressing their desire to find a solution with the Pentagon, the seven people said. Some investors told Reuters they were frustrated that CEO Amodei antagonized rather than cultivated Pentagon officials. "It's an ego and diplomacy problem," one of the people briefed on the matter said. At this point, some investors said, Amodei cannot be seen as capitulating to the administration without alienating a core group of employees and consumers who have flocked to Anthropic because of his stance. Amodei, who did not respond to a request for comment, has said Anthropic cannot "in good conscience accede to their request." While speaking to investors late Tuesday, Amodei said the company would "continue to work to figure out a solution with the DoW." The investors taking a stance on Pentagon talks are focused on helping Anthropic avoid being designated a "supply-chain risk" by the U.S. government, which, if implemented, could deliver a severe blow to the startup's fast-growing sales to business customers. Demand has risen for Anthropic's products such as its chatbot Claude and coding assistant Claude Code. Claude was the most-downloaded free app in the Apple App Store on Monday, surpassing OpenAI's ChatGPT. Defense Secretary Pete Hegseth has said such a risk designation would require all government contractors to stop using Anthropic's technology in any part of their business. Anthropic has publicly pushed back on Hegseth's comments, saying he does not have the statutory authority to block use of its AI outside of defense contracts. The Pentagon did not answer a request for comment on Anthropic's claim. Anthropic also said Friday it would challenge any supply-chain risk designation in court. 
Still, some investors worry the spat could scare off potential customers who are looking to avoid being in the administration's crosshairs generally, one of the people said. These worries come at a critical time for the startup. Anthropic has raised tens of billions of dollars on lofty expectations for its enterprise sales, which make up about 80% of Anthropic's revenue, the startup has said. The success of future share sales, including its widely anticipated initial public offering, hinges on Anthropic's continuing to build its business revenue. Anthropic is in the process of letting employees sell shares to investors, and the company has previously said there is no decision yet on its IPO. Anthropic's revenue run rate, or its projected annual revenue based on current data, is about $19 billion, one of the people said, up from $14 billion just a few weeks ago. The push from investors came as several U.S. government agencies started terminating their use of Anthropic's technology, with the State Department switching to rival OpenAI, following Trump's order on Friday to dump Anthropic within the next six months. (Reporting by Deepa Seetharaman and Krystal Hu in San Francisco; additional reporting by Mike Stone in Washington D.C. and Kenrick Cai in San Francisco; editing by Kenneth Li and Nick Zieminski) By Deepa Seetharaman, Krystal Hu and Jeffrey Dastin
The Pentagon designated Anthropic a supply chain risk after the AI company refused unrestricted military access to Claude. OpenAI quickly stepped in with its own deal, triggering user backlash and internal resignations. The dispute centers on domestic surveillance and autonomous weapons, revealing a governance vacuum where contract negotiations between CEOs and defense officials are setting AI policy instead of Congress.
A high-stakes contract dispute between the Pentagon and Anthropic has escalated into a legal confrontation that raises fundamental questions about who sets the boundaries for military use of AI [1]. The Department of Defense (DoD) formally designated Anthropic a supply chain risk after the company refused to grant unrestricted access to its Claude AI models, a label typically reserved for foreign adversaries [2]. Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DoD unrestricted use of its AI systems for "all lawful purposes," but the company stood firm on two red lines: preventing domestic surveillance of U.S. citizens and prohibiting fully autonomous weapons without human oversight [3].
Source: New York Post
The dispute centers on whether AI guardrails should be embedded in the technology itself or left to government oversight. Anthropic invested heavily in training its systems to refuse certain high-risk tasks, including assistance with surveillance [2]. Hegseth objected to what he described as "ideological constraints" in commercial AI systems, declaring that "we will not employ AI models that won't allow you to fight wars" [2]. The Pentagon's designation means no contractor, supplier, or partner doing business with the U.S. military may conduct commercial activity with Anthropic, though this action will almost certainly face legal challenges [2].
Source: Korea Times
Within hours of Anthropic's blacklisting, OpenAI announced it had signed a defense contract to deploy its models on military classified networks, securing the deal its rival had just lost [3]. The move triggered immediate backlash, with users uninstalling ChatGPT and pushing Claude to the top of the App Store charts [1]. At least one OpenAI executive quit over concerns that the announcement was rushed without appropriate guardrails in place [1]. OpenAI CEO Sam Altman later posted that the Pentagon had affirmed its AI would not be used by the department's intelligence agencies [5].

Amodei reportedly sent a message to Anthropic staff calling the OpenAI deal "safety theater" and the messaging around it "straight up lies," adding that "the main reason they accepted and we did not is that they cared about placating employees, and we actually cared about preventing abuses" [4]. Despite the public vitriol, reporting from the Financial Times and Bloomberg suggests Amodei resumed negotiations with Pentagon official Emil Michael in an attempt to reach a compromise on a contract [4].

Swapping out one AI model on a classified network for another takes minutes, but retraining personnel who have learned to rely on it will take much longer [3]. Claude became the first large language model publicly known to operate in the Pentagon's classified environment in late 2024, accessed through tools like Claude Gov [3]. Lauren Kahn, a researcher at Georgetown University's Center for Security and Emerging Technology and a former Pentagon official, describes its deployment as more like a chatbot than a free-roaming agent, sitting "on top" of existing software in tightly controlled corners [3].

Each integration must be offboarded piece by piece, and whatever replaces Claude must clear strict security reviews before touching a classified system. Software changes inside the Pentagon can be "excruciating": even installing Microsoft Office "takes months and months and months," according to Kahn [3]. Every AI model fails in characteristic ways, and operators who spent months using Claude learned those quirks through trial and error. Kahn worries about "a slightly heightened risk of automation bias in the early stages as they're working out the kinks" with the replacement model [3].
The controversy reveals a fundamental governance vacuum in which critical policy decisions about AI are being settled through contract negotiations between CEOs and defense officials rather than through democratic processes [5]. "This week exposed a real governance vacuum, and it should be a wake-up call for Congress," said Hamza Chaudhry, AI and national security lead at the Future of Life Institute [5].

The ethical concerns center on two substantive issues. First, opposition to domestic surveillance touches on well-established civil liberties concerns, though current laws aren't actually clear on AI's role [5]. The risk isn't that Claude will spy on Americans directly, but that AI tools will process data the government already has, or could buy from private data brokers without a warrant, into information that would otherwise require one [5]. Second, Amodei argued that today's frontier models "are simply not reliable enough to power fully autonomous weapons" without human oversight [5].
Source: Sky News
The question now is whether this controversy will scare other startups away from defense work [1]. The situation is unusual because OpenAI and Anthropic make products that "no one can shut up about," drawing a spotlight that most defense contractors don't face [1]. General Motors, by contrast, makes defense vehicles for the Army and has worked on autonomous versions, but that work flies under the radar [1].

Stripped of rhetoric, this resembles a procurement disagreement in a market economy: the military decides what it wants to buy, and companies decide what they are willing to sell and under what conditions [2]. Where it becomes troubling is the use of the supply-chain risk designation, a tool meant to address foreign adversaries, to blacklist an American company for rejecting preferred contractual terms [2]. OpenAI research scientist Noam Brown posted that he is "afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions" [5]. Greg Nojeim, senior counsel at the Center for Democracy and Technology, noted it is striking that "the Pentagon is rejecting that advice and insisting on being able to use this AI tool to kill people without human intervention" [5].

Summarized by Navi