Pentagon labels Anthropic supply chain risk after AI firm rejects unrestricted military use

Reviewed by Nidhi Govil


Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk, barring any Pentagon contractor from doing business with the AI company. The move follows Anthropic's refusal to allow unrestricted military use of its Claude AI model for autonomous weapons and mass surveillance without human oversight, ending a week of tense negotiations.

Pentagon Escalates Standoff with Anthropic Over AI Use Policies

Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk on Friday, effectively barring any company working with the Department of Defense from conducting commercial activity with the AI firm [1]. The decision came nearly two hours after Donald Trump announced on Truth Social that he was ordering federal agencies to ban Anthropic products, though the Pentagon and certain agencies can continue using the technology for up to six months while transitioning to alternative services [2].

Source: The Verge

The supply chain risk designation, typically reserved for companies with foreign government ties that pose national security threats, represents a dramatic escalation in a week-long dispute over acceptable use policies for Anthropic's Claude AI model [1]. Hegseth declared in a post on X that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," a move with potentially wide-ranging impact given the thousands of companies that hold military contracts [2].

The Ultimatum That Sparked the Crisis

The Pentagon gave Anthropic a Friday 5:30 PM EST deadline during negotiations: agree to let the military use Claude for "all legal purposes," including autonomous lethal weapons without human oversight and mass surveillance, or face designation as a supply chain risk [1]. Anthropic, the only AI firm whose model is deployed on the Pentagon's classified networks, sought guardrails preventing its technology from conducting mass surveillance of Americans or carrying out military operations without human approval [2].

Hegseth accused Anthropic and CEO Dario Amodei of "duplicity," claiming they "attempted to strong-arm the United States military into submission" through what he called "cowardly corporate virtue-signaling that places Silicon Valley ideology above American lives" [1]. The Defense Secretary insisted the military must have "full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic" [1].

Source: CBS

Competing Visions of AI Safety and Military Authority

Dario Amodei defended the company's position in a Thursday statement, arguing that guardrails are necessary because Claude is not reliable enough to power fully autonomous weapons and its unrestricted use could raise serious privacy concerns [2]. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei said, adding that "some uses are also simply outside the bounds of what today's technology can safely and reliably do" [2].

The Pentagon's position is that existing federal laws already prohibit mass surveillance of Americans, and internal policies restrict the military from using fully autonomous weapons [2]. Pentagon chief technology officer Emil Michael told CBS News on Thursday that the military had offered written acknowledgements of these laws and policies during negotiations. "At some level, you have to trust your military to do the right thing," Michael said, noting "we'll never say that we're not going to be able to defend ourselves in writing to a company" [2].

An Anthropic spokesperson called the Pentagon's offer inadequate, saying the new language was "paired with legalese that would allow those safeguards to be disregarded at will" [2]. The breakdown in negotiations prompted the Pentagon to consider invoking the Defense Production Act before ultimately choosing the supply chain risk designation [1].

What This Means for Big Tech and the Federal Government

The confrontation sets a precedent for how the federal government will engage with Big Tech companies that seek to impose restrictions on military use of AI models. Companies doing business with the Pentagon now face a choice: divest from Anthropic within six months or lose lucrative military contracts [1]. Hegseth emphasized that "Anthropic's stance is fundamentally incompatible with American principles" and that the company's "relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered" [3].

The dispute highlights fundamental disagreements about the role of AI in national security and whether tech companies should have authority to set boundaries on military applications. As warfighters increasingly rely on AI capabilities, questions about lethal weapons systems, human oversight requirements, and the balance between innovation and safety will continue to shape policy debates. The standoff also raises concerns about whether the Pentagon's aggressive stance might discourage other AI firms from partnering with the military, potentially limiting access to cutting-edge technology at a time when competitors like China are racing ahead in AI development.
