Anthropic stands firm against Pentagon's demand for unrestricted military AI access

Reviewed by Nidhi Govil


Defense Secretary Pete Hegseth has threatened to invoke the Defense Production Act or label Anthropic a supply chain risk unless the $380 billion AI company grants unfettered access to its Claude models for all military applications by Friday. CEO Dario Amodei refuses to budge on two red lines: mass surveillance of Americans and fully autonomous weapons with no human in the loop.

Anthropic Faces Pentagon Ultimatum Over Military AI Use

US Defense Secretary Pete Hegseth has issued a stark ultimatum to Anthropic: grant the Pentagon unrestricted access to its AI models for all lawful military applications by Friday at 5:01 PM, or face severe consequences. The threat marks a dramatic escalation in tensions between the $380 billion AI company and the military over ethical boundaries in military AI deployment [1].

During tense talks in Washington on Tuesday, Hegseth summoned Anthropic CEO Dario Amodei and threatened to either label the company a supply chain risk—a designation typically reserved for foreign adversaries—or invoke the Defense Production Act, a Cold War-era measure that would compel Anthropic to comply regardless of its objections [1]. A senior Pentagon official stated that if Anthropic doesn't "get on board," Hegseth "will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not" [1].

Source: ET

Dario Amodei Maintains Red Lines on Mass Surveillance and Autonomous Weapons

Amodei has drawn two firm ethical boundaries that Anthropic refuses to cross: no mass surveillance of Americans and no fully autonomous weapons with no human in the loop. In a statement released Thursday, just hours before the Pentagon deadline, Amodei declared he "cannot in good conscience accede to [the Pentagon's] request" for unrestricted access to AI systems [4].

Source: ET

"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do" [4]. He pointed out the inherent contradiction in Hegseth's dual threats: "One labels us a security risk; the other labels Claude as essential to national security" [4].

Anthropic has expressed particular concern about its models being used for lethal missions without a human in the loop, arguing that state-of-the-art AI models are not reliable enough to be trusted in those contexts [1]. The company has also pushed for new rules to govern the use of AI models for mass domestic surveillance, even where that might be legal under current regulations [1].

Tech Industry Rallies Behind Anthropic's Ethical Stance

As the Friday deadline approaches, over 300 Google employees and over 60 OpenAI employees have signed an open letter urging their company leaders to support Anthropic and refuse unilateral military use of AI for domestic mass surveillance and autonomous weaponry [2]. "They're trying to divide each company with fear that the other will give in," the letter states. "That strategy only works if none of us know where the others stand" [2].

Sam Altman, CEO of OpenAI, told CNBC on Friday morning that he doesn't "personally think the Pentagon should be threatening DPA against these companies" [2]. OpenAI subsequently reached a new agreement with the Pentagon that allows the US military to "deploy our models in their classified network" while maintaining prohibitions on domestic mass surveillance and ensuring "human responsibility for the use of force, including for autonomous weapon systems," according to Altman [5]. Altman wrote that OpenAI is "asking the DoW to offer these same terms to all AI companies" [5].

Google DeepMind Chief Scientist Jeff Dean also expressed opposition to mass surveillance by the government, writing on X that "mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression" and that "surveillance systems are prone to misuse for political or discriminatory purposes" [2].

Claude's Role in Classified Military Operations Sparks Controversy

The standoff intensified after January 3, when US special operations forces raided Venezuela and captured Nicolás Maduro. The Wall Street Journal reported that forces used Claude during the operation via Anthropic's partnership with Palantir [3]. When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the question raised immediate alarms at the Pentagon [3].

Source: The Hill

Anthropic's Claude has until recently been the only model working on classified missions, a result of the company's partnership with Palantir [1]. Anthropic made Claude available on a Palantir platform with a cloud security level up to "secret" in late 2024, making Claude, by public accounts, the first large language model operating inside classified systems [3].

Hegseth is now negotiating with AI labs, including Google, OpenAI and Elon Musk's xAI, to replace Anthropic and integrate their technology into classified military systems. A senior Pentagon official said Musk's Grok "is on board with being used in a classified setting, while the rest of the companies are close" [1].

Safety-First AI Collides With Military Scaling Demands

The standoff exposes fundamental tensions as Anthropic scales rapidly while maintaining its safety-first ethos. On February 5, Anthropic released Claude Opus 4.6, its most powerful AI model, featuring the ability to coordinate teams of autonomous agents that divide up work and complete it in parallel [3]. Twelve days later, the company released Sonnet 4.6, a cheaper model that nearly matches Opus's coding and computer-use skills. Sonnet 4.6 can navigate web applications and fill out forms with human-level capability, and both models have working memory large enough to hold a small library [3].

Enterprise customers now make up roughly 80 percent of Anthropic's revenue, and the company closed a $30 billion funding round last week at a $380 billion valuation [3]. By every available measure, Anthropic is one of the fastest-scaling technology companies in history [3].

The Pentagon released its AI strategy last month, with Hegseth stating in a memo that "AI-enabled warfare and AI-enabled capability development will redefine the character of military affairs over the next decade" [1]. He added that the US military "must build on its lead" over foreign adversaries to make soldiers "more lethal and efficient," and that the AI race was "fueled by the accelerating pace" of innovation coming from the private sector [1].

Amodei has said Anthropic will support "national defense in all ways except those which would make us more like our autocratic adversaries" [3]. The company was founded in 2021 by former OpenAI executives who believed the industry was not taking AI safety seriously enough, positioning Claude as the ethical alternative [3]. The standoff now tests whether ethical boundaries and AI regulation can hold once autonomous agents capable of processing vast datasets and acting on their conclusions are running inside classified networks for national security purposes.
