Anthropic tells court it has no control over AI once deployed in Pentagon classified networks


San Francisco-based Anthropic has filed a 96-page brief asserting that it cannot manipulate its Claude AI tool once it is deployed in classified Pentagon military networks. The legal dispute centers on the Trump administration's designation of the company as a supply chain risk, which followed disagreements over autonomous weapons and mass surveillance policies. A hearing is scheduled for May 19.

Anthropic Contests Pentagon Supply Chain Risk Designation

San Francisco-based Anthropic has filed a 96-page brief with the U.S. Court of Appeals in Washington, D.C., directly challenging the Pentagon's characterization of the AI company as a supply chain risk [1]. The company asserts that it has no visibility into, technical control over, or kill switch for its Claude AI technology once it is deployed in classified Pentagon military networks [2]. The legal dispute emerged after the Trump administration canceled a $200 million contract with Anthropic and awarded a deal to rival OpenAI to provide AI technology to the U.S. military [1].

Source: Axios


Core of the Legal Dispute Over AI in Military Systems

The conflict centers on Anthropic's usage policies, which prohibit the use of Claude for autonomous weapons or mass surveillance. The Pentagon dismissed these restrictions as red herrings, arguing that Anthropic is inappropriately interfering with how its AI technology can be used in sensitive military operations [2]. Anthropic contends the Pentagon is illegally retaliating against it by applying a designation typically reserved for protecting national security systems against sabotage by foreign adversaries [1]. The company emphasizes that the Pentagon has the opportunity to test models before deployment, reinforcing its argument that it cannot manipulate the technology after implementation [2].

Source: AP


Split Court Decisions Create Uncertain Landscape

The appeals court in Washington, D.C. rejected Anthropic's request for an order blocking the Pentagon's actions while evidence collection continues [1]. However, in a parallel case in San Francisco federal court, a judge granted Anthropic's request, prompting the Trump administration to remove the stigmatizing labels from the company, according to court filings [1]. The split rulings mean Anthropic cannot compete for new Pentagon contracts but can continue working with other government agencies while the litigation unfolds [2].

Complications Around Mythos Model Deployment

Even as the Pentagon argues that Anthropic poses a national security threat, the Trump administration is simultaneously moving to deploy Anthropic's new Mythos model across the federal government. Agency heads are now scrambling to determine how they can protect their systems from cyberattacks while using Mythos, potentially undermining the administration's argument that the company represents a supply chain risk [2]. Oral arguments before the appeals court are scheduled for May 19, when the Trump administration will have an opportunity to respond to Anthropic's latest court filings [1].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited