Pentagon threatens to cut Anthropic's $200M contract over AI safety restrictions in military ops

Reviewed by Nidhi Govil


The Pentagon is threatening to sever its $200 million contract with Anthropic and designate the AI company as a supply chain risk after tensions erupted over usage restrictions on its Claude model. The dispute intensified following reports that the US military used Anthropic's Claude during the Venezuela raid to capture Nicolás Maduro, raising questions about AI safety and military use.

Pentagon Anthropic Tensions Reach Breaking Point Over Military AI Restrictions

The relationship between the Pentagon and Anthropic has deteriorated to a critical juncture, with the Department of Defense threatening to terminate its contract worth up to $200 million with the AI safety-focused company [1]. The dispute centers on Anthropic's refusal to lift certain restrictions on its Claude AI model for unrestricted military applications, particularly concerning autonomous weapons and mass domestic surveillance [3]. Chief Pentagon spokesperson Sean Parnell confirmed the company was under review, stating that "our nation requires that our partners be willing to help our warfighters win in any fight" [1]. The Pentagon is even considering designating Anthropic as a supply chain risk, a label typically reserved for companies doing business with scrutinized countries like China, which would prevent defense contractors from using Anthropic's technology [1].

Source: Jerusalem Post


US Military Used Anthropic's Claude in Venezuela Operation

Tensions escalated after reports emerged that the US military used Anthropic's Claude during the operation to capture Venezuelan President Nicolás Maduro in January [2]. Claude's deployment came through Anthropic's partnership with Palantir Technologies, whose platforms are widely used by the Department of Defense and federal law enforcement [2]. The Venezuela raid involved bombing across Caracas and resulted in 83 deaths, according to Venezuela's defense ministry [4]. While it remains unclear exactly how Claude was used in the operation, the incident reportedly raised concerns from an Anthropic employee during a routine meeting with Palantir [5]. However, Anthropic has maintained it has not found any violations of its policies and denied expressing mission-related disagreements with the military [5].

Source: Axios


AI Companies' Ethical Guidelines Clash With National Security Demands

The core tension revolves around AI companies' ethical guidelines versus the Pentagon's demand for lawful use of AI without company-imposed restrictions. Anthropic's usage policies explicitly forbid using Claude to support violence, design weapons, or carry out surveillance [2]. CEO Dario Amodei has been particularly vocal about not wanting Claude involved in autonomous weapons or government surveillance [1]. Defense Secretary Pete Hegseth released a new AI strategy document in January calling for contracts with AI companies to eliminate company-specific guardrails, allowing "any lawful use" of AI for Department of Defense purposes [5]. Department of Defense CTO Emil Michael told reporters the government won't tolerate AI companies limiting how the military uses AI in its weapons, asking rhetorically how to respond to drone swarms when "human reaction time is not fast enough" [1].

Source: Wired


Other AI Labs Face Similar Dilemmas as Pentagon Pushes for Classified Access

Crucially, Claude is the only model currently available in the military's classified systems, through Anthropic's partnership with Palantir Technologies [3]. Three other models (OpenAI's ChatGPT, Google's Gemini, and xAI's Grok) are available in unclassified systems and have lifted their ordinary safeguards as part of those agreements [3]. Negotiations to bring these companies into the classified domain are now more urgent as the Pentagon considers how to replace Claude if necessary, a process officials conceded would be massively disruptive [3]. A senior administration official revealed that xAI, founded by Elon Musk, has already agreed to "all lawful use" at any classification level and was the only frontier lab to bid in the Pentagon's autonomous drone software contest [3]. OpenAI is bidding in a limited way, to translate voice commands, but not for drone control, weapon integration, or target selection [3].

AI Safety and Military Use Present Unprecedented Challenges

The standoff raises fundamental questions about AI safety and military use as companies grapple with how advanced AI should engage with national security operations. Anthropic, which raised $30 billion in its latest funding round and is valued at $380 billion, has carved out a space as the most safety-conscious AI company, with guardrails deeply integrated into its models [1][2]. Sources familiar with dynamics at OpenAI and Google indicate executives at these companies share some concerns about how their models might be used in military operations, and may fear employee revolts similar to Google's 2018 experience with Project Maven [3]. One source noted a critical challenge: "The companies themselves don't fully understand how their models will respond in certain scenarios, or why," adding, "if there's a one in a million chance that the model might do something unpredictable, is that one in a million chance so catastrophic?" [3]. Administration officials acknowledged the fight with Anthropic serves as a useful way to set the tone for defense contract negotiations with other labs [3].


TheOutpost.ai


© 2026 Triveous Technologies Private Limited