2 Sources
[1]
Anthropic's safety-first AI collides with the Pentagon as Claude expands into autonomous agents
As Anthropic releases its most autonomous agents yet, a mounting clash with the military reveals the impossible choice between global scaling and a "safety first" ethos

On February 5 Anthropic released Claude Opus 4.6, its most powerful artificial intelligence model. Among the model's new features is the ability to coordinate teams of autonomous agents -- multiple AIs that divide up the work and complete it in parallel. Twelve days after Opus 4.6's release, the company dropped Sonnet 4.6, a cheaper model that nearly matches Opus's coding and computer skills.

In late 2024, when Anthropic first introduced models that could control computers, they could barely operate a browser. Now Sonnet 4.6 can navigate Web applications and fill out forms with human-level capability, according to Anthropic. And both models have a working memory large enough to hold a small library.

Enterprise customers now make up roughly 80 percent of Anthropic's revenue, and the company closed a $30-billion funding round last week at a $380-billion valuation. By every available measure, Anthropic is one of the fastest-scaling technology companies in history.

But behind the big product launches and the valuation, Anthropic faces a severe threat: the Pentagon has signaled it may designate the company a "supply chain risk" -- a label more often associated with foreign adversaries -- unless it drops its restrictions on military use. Such a designation could effectively force Pentagon contractors to strip Claude from sensitive work.

Tensions boiled over after January 3, when U.S. special operations forces raided Venezuela and captured Nicolás Maduro. The Wall Street Journal reported that forces used Claude during the operation via Anthropic's partnership with the defense contractor Palantir -- and Axios reported that the episode escalated an already fraught negotiation over what, exactly, Claude could be used for. When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the question raised immediate alarms at the Pentagon. (Anthropic has disputed that the outreach was meant to signal disapproval of any specific operation.) Secretary of Defense Pete Hegseth is "close" to severing the relationship, a senior administration official told Axios, adding, "We are going to make sure they pay a price for forcing our hand like this."

The collision exposes a question: Can a company founded to prevent AI catastrophe hold its ethical lines once its most powerful tools -- autonomous agents capable of processing vast datasets, identifying patterns and acting on their conclusions -- are running inside classified military networks? Is a "safety first" AI compatible with a client that wants systems that can reason, plan and act on their own at military scale?

Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons. CEO Dario Amodei has said Anthropic will support "national defense in all ways except those which would make us more like our autocratic adversaries." Other major labs -- OpenAI, Google and xAI -- have agreed to loosen safeguards for use in the Pentagon's unclassified systems, but their tools aren't yet running inside the military's classified networks.
The Pentagon has demanded that AI be available for "all lawful purposes."

The friction tests Anthropic's central thesis. The company was founded in 2021 by former OpenAI executives who believed the industry was not taking safety seriously enough. They positioned Claude as the ethical alternative. In late 2024 Anthropic made Claude available on a Palantir platform with a cloud security level up to "secret" -- making Claude, by public accounts, the first large language model operating inside classified systems. The question the standoff now forces is whether safety-first is a coherent identity once a technology is embedded in classified military operations and whether red lines are actually possible.

"These words seem simple: illegal surveillance of Americans," says Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology. "But when you get down to it, there are whole armies of lawyers who are trying to sort out how to interpret that phrase."

Consider the precedent. After the Edward Snowden revelations, the U.S. government defended the bulk collection of phone metadata -- who called whom, when and for how long -- arguing that these kinds of data didn't carry the same privacy protections as the contents of conversations. The privacy debate then was about human analysts searching those records. Now imagine an AI system querying vast datasets -- mapping networks, spotting patterns, flagging people of interest. The legal framework we have was built for an era of human review, not machine-scale analysis. "In some sense, any kind of mass data collection that you ask an AI to look at is mass surveillance by simple definition," says Peter Asaro, co-founder of the International Committee for Robot Arms Control.

Axios reported that the senior official "argued there is considerable gray area around" Anthropic's restrictions "and that it's unworkable for the Pentagon to have to negotiate individual use-cases with" the company. Asaro offers two readings of that complaint. The generous interpretation is that surveillance is genuinely impossible to define in the age of AI. The pessimistic one, Asaro says, is that "they really want to use those for mass surveillance and autonomous weapons and don't want to say that, so they call it a gray area."

Regarding Anthropic's other red line, autonomous weapons, the definition is narrow enough to be manageable -- systems that select and engage targets without human supervision. But Asaro sees a more troubling gray zone. He points to the Israeli military's Lavender and Gospel systems, which have been reported as using AI to generate massive target lists that go to a human operator for approval before strikes are carried out. "You've automated, essentially, the targeting element, which is something [that] we're very concerned with and [that is] closely related, even if it falls outside the narrow strict definition," he says.

The question is whether Claude, operating inside Palantir's systems on classified networks, could be doing something similar -- processing intelligence, identifying patterns, surfacing persons of interest -- without anyone at Anthropic being able to say precisely where the analytical work ends and the targeting begins. The Maduro operation tests exactly that distinction. "If you're collecting data and intelligence to identify targets, but humans are deciding, 'Okay, this is the list of targets we're actually going to bomb' -- then you have that level of human supervision we're trying to require," Asaro says.
"On the other hand, you're still becoming reliant on these AIs to choose these targets, and how much vetting and how much digging into the validity or lawfulness of those targets is a separate question." Anthropic may be trying to draw the line more narrowly -- between mission planning, where Claude might help identify bombing targets, and the mundane work of processing documentation. "There are all of these kind of boring applications of large language models," Probasco says. But the capabilities of Anthropic's models may make those distinctions hard to sustain. Opus 4.6's agent teams can split a complex task and work in parallel -- an advancement in autonomous data processing that could transform military intelligence. Both Opus and Sonnet can navigate applications, fill out forms and work across platforms with minimal oversight. These features driving Anthropic's commercial dominance are what make Claude so attractive inside a classified network. A model with a huge working memory can also hold an entire intelligence dossier. A system that can coordinate autonomous agents to debug a code base can coordinate them to map an insurgent supply chain. The more capable Claude becomes, the thinner the line between the analytical grunt work Anthropic is willing to support and the surveillance and targeting it has pledged to refuse. As Anthropic pushes the frontier of autonomous AI, the military's demand for those tools will only grow louder. Probasco fears the clash with the Pentagon creates a false binary between safety and national security. "How about we have safety and national security?" she asks.
[2]
Trump team livid about Dario Amodei's principled stand to keep the Defense Department from using his AI tools for warlike purposes | Fortune
Anthropic's $200 million contract with the Department of Defense is up in the air after Anthropic reportedly raised concerns about the Pentagon's use of its Claude AI model during the Nicolás Maduro raid in January.

"The Department of War's relationship with Anthropic is being reviewed," Chief Pentagon Spokesman Sean Parnell said in a statement to Fortune. "Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

Tensions have escalated in recent weeks after a top Anthropic official reportedly reached out to a senior Palantir executive to question how Claude was used in the raid, per The Hill. The Palantir executive interpreted the outreach as disapproval of the model's use in the raid and forwarded details of the exchange to the Pentagon. (President Trump said the military used a "discombobulator" weapon during the raid that made enemy equipment "not work.")

"Anthropic has not discussed the use of Claude for specific operations with the Department of War," an Anthropic spokesperson said in a statement to Fortune. "We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters."

At the center of the dispute are the contractual guardrails dictating how AI models can be used in defense operations. Anthropic CEO Dario Amodei has consistently advocated for strict limits on AI use and for regulation, even acknowledging that it is difficult to balance safety with profits. For months the company and the DOD have held contentious negotiations over how Claude can be used in military operations.

Under the Defense Department contract, Anthropic won't allow the Pentagon to use its AI models for mass surveillance of Americans or in fully autonomous weapons. The company has also banned the use of its technology in "lethal" or "kinetic" military applications. Any direct involvement in active gunfire during the Maduro raid would likely violate those terms.

Among the AI companies contracting with the government -- including OpenAI, Google and xAI -- Anthropic holds a lucrative position: Claude is the only large language model authorized on the Pentagon's classified networks. Anthropic highlighted this position in a statement to Fortune: "Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy." The company "is committed to using frontier AI in support of US national security," the statement read. "We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right."

Palantir, OpenAI, Google and xAI didn't immediately respond to a request for comment. Although the DOD has accelerated efforts to integrate AI into its operations, only xAI has granted the DOD the use of its models for "all lawful purposes," while the others maintain usage restrictions.

Amodei has been sounding the alarm for months on user protections, offering Anthropic as a safety-first alternative to OpenAI and Google in the absence of government regulation. "I'm deeply uncomfortable with these decisions being made by a few companies," he said back in November. Although it was rumored that Anthropic was planning to ease restrictions, the company now faces the possibility of being cut out of the defense industry altogether.
A senior Pentagon official told Axios that Defense Secretary Pete Hegseth is "close" to removing Anthropic from the military supply chain, a step that would force anyone who wishes to do business with the military to also cut ties with the company. "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this," the senior official told the outlet.

Being deemed a supply chain risk is a designation usually reserved for foreign adversaries; the closest precedent is the government's 2019 ban on Huawei over national security concerns. In Anthropic's case, sources told Axios that defense officials have been looking to pick a fight with the San Francisco-based company for some time.

The Pentagon's comments are the latest in a public dispute coming to a boil. The government argues that letting companies set ethical limits on their models is unnecessarily restrictive and that the sheer number of gray areas would render the technology all but unusable. As the Pentagon continues negotiating with its AI subcontractors to expand usage, the public spat has become a proxy skirmish over who will dictate the uses of AI.
Anthropic's $200 million Department of Defense contract hangs in the balance as tensions escalate over the company's restrictions on military use of its Claude AI model. The Pentagon threatens to designate the safety-first AI company as a supply chain risk—a label typically reserved for foreign adversaries—after Anthropic questioned how Claude was used during the January raid on Venezuela. The clash tests whether ethical AI boundaries can survive inside classified military networks.
Anthropic's commitment to safety-first AI is colliding with the Pentagon's demand for unrestricted access to artificial intelligence tools. The conflict erupted after U.S. special operations forces raided Venezuela on January 3 and captured Nicolás Maduro, with forces reportedly using the Claude AI model during the operation through Anthropic's partnership with Palantir [1]. When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the inquiry triggered immediate alarms at the Pentagon [1]. The $200 million Department of Defense contract is now under review, with Defense Secretary Pete Hegseth reportedly "close" to severing the relationship [2].
Source: Scientific American
The Pentagon has signaled it may designate Anthropic a supply chain risk unless the company drops its restrictions on military use—a label more often associated with foreign adversaries like Huawei, which faced a similar ban in 2019 [2]. Such a designation could effectively force Pentagon contractors to strip Claude from sensitive work [1]. "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this," a senior Pentagon official told Axios [2]. Chief Pentagon Spokesman Sean Parnell stated: "Our nation requires that our partners be willing to help our warfighters win in any fight" [2].

Anthropic's CEO Dario Amodei has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons [1]. The company has also banned the use of its technology in "lethal" or "kinetic" military applications [2]. Any direct involvement in active gunfire during the Maduro raid would likely violate those terms. Amodei has said Anthropic will support "national defense in all ways except those which would make us more like our autocratic adversaries" [1]. The Pentagon, however, has demanded that AI be available for "all lawful purposes" [1].
Source: Fortune
The timing of this dispute is significant. On February 5, Anthropic released Claude Opus 4.6, its most powerful model yet, featuring the ability to coordinate teams of autonomous agents—multiple AIs that divide up work and complete it in parallel [1]. Twelve days later, Sonnet 4.6 launched with near-matching capabilities. These models can now navigate web applications and fill out forms with human-level capability, according to Anthropic [1]. Claude holds a unique position as the only large language model authorized on the Pentagon's classified networks, making it particularly valuable for intelligence-related use cases [2].
Among AI companies contracting with the government—including OpenAI, Google, and xAI—only xAI has granted the Department of Defense the use of its models for "all lawful purposes," while the others maintain usage restrictions [2]. Other major labs have agreed to loosen safeguards for use in the Pentagon's unclassified systems, but their tools aren't yet running inside the military's classified networks [1]. The public dispute has become a proxy battle over who will dictate the uses of AI in military operations [2].

Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology, notes the complexity: "These words seem simple: illegal surveillance of Americans. But when you get down to it, there are whole armies of lawyers who are trying to sort out how to interpret that phrase" [1]. The question now is whether an ethical framework can function once the technology is embedded in classified military operations. Enterprise customers now make up roughly 80 percent of Anthropic's revenue, and the company closed a $30-billion funding round at a $380-billion valuation [1]. The company stated it "is committed to using frontier AI in support of US national security" and is "having productive conversations, in good faith, with DoW on how to continue that work" [2]. Whether those conversations can bridge the gap between safety principles and military demands will determine not just Anthropic's future but also set a precedent for military AI use across the industry.

Summarized by Navi