2 Sources
[1]
Anthropic seeks to debunk Pentagon's claims about its control over AI technology in military systems
WASHINGTON (AP) -- Anthropic on Wednesday told an appeals court that it can't manipulate its artificial intelligence tool Claude once it is deployed in classified Pentagon military networks -- an assertion aimed at debunking the Trump administration's attempt to brand the rapidly growing technology company as a supply chain risk.

The statement, made as part of a 96-page filing with the U.S. Court of Appeals in Washington, D.C., provided a glimpse at the arguments Anthropic's lawyers intend to make in a lawsuit filed last month in the fallout of a contract dispute over how AI technology can be used in fully autonomous weapons and potential surveillance of Americans. San Francisco-based Anthropic contends the Pentagon is illegally retaliating against it by stigmatizing it with a designation meant to protect against sabotage of national security systems by foreign adversaries.

Earlier this month, the appeals court rejected Anthropic's request for an order that would have blocked the Pentagon's actions while the panel is still collecting evidence in the case. Anthropic's new filing is meant to directly address some of the court's questions ahead of oral arguments scheduled for May 19. The Trump administration will have an opportunity to file its response before that hearing.

Anthropic's temporary setback in the Washington case came after it had already prevailed in a separate case focused on the same issues in San Francisco federal court. That decision prompted the Trump administration to remove the stigmatizing labels from Anthropic, according to court filings. But the lack of a similar order in the parallel Washington case continues to cast a cloud over Anthropic, whose AI tools have turned it into a rising tech star alongside rival OpenAI. After the Pentagon canceled a $200 million contract with Anthropic in the wake of their disagreement, OpenAI struck a deal to provide its technology to the U.S. military.
[2]
Anthropic: No "kill switch" for AI in classified settings
Why it matters: The Pentagon designated Anthropic a supply chain risk, contending the AI firm is inappropriately getting involved in how its technology can be used in sensitive military operations.

What's inside: Anthropic argues in its filing to a federal appeals court in D.C. that it has no visibility into, technical ability to alter, or any kind of "kill switch" for its technology once it's deployed.
* The company also says the Pentagon has the opportunity to test models before deployment.

Catch up quick: The company's usage policies prohibit the use of Claude for autonomous weapons or mass surveillance -- red lines that the Pentagon dismissed as red herrings and that led to the dispute.
* The D.C. court previously rejected Anthropic's request for a pause on the supply chain risk designation. A judge in California granted Anthropic's request in an ongoing parallel case.
* The split decision means Anthropic can't participate in new Pentagon contracts but can continue working with other government agencies while the litigation plays out.

Friction point: The Pentagon is arguing in court that Anthropic is a supply chain risk even as the Trump administration moves to deploy the company's new Mythos model across the federal government.
* Agency heads are now scrambling to figure out how to protect their systems from cyber attacks using Mythos, potentially complicating the administration's argument that the company poses a national security risk.

What's next: A hearing is scheduled for May 19.
San Francisco-based Anthropic filed a 96-page brief asserting it cannot manipulate its Claude AI tool after deployment in classified Pentagon military networks. The legal dispute centers on the Trump administration's supply chain risk designation following disagreements over autonomous weapons and mass surveillance policies. A hearing is scheduled for May 19.
San Francisco-based Anthropic has filed a 96-page brief with the U.S. Court of Appeals in Washington, D.C., directly challenging the Pentagon's characterization of the AI company as a supply chain risk [1]. The company asserts it has no visibility, technical ability, or any kind of "kill switch" for its Claude AI technology once deployed in classified Pentagon military networks [2]. The legal dispute emerged after the Trump administration canceled a $200 million contract with Anthropic, subsequently awarding a deal to rival OpenAI to provide AI technology to the U.S. military [1].
The conflict centers on Anthropic's usage policies, which prohibit the use of Claude for autonomous weapons or mass surveillance. The Pentagon dismissed these restrictions as red herrings, arguing that Anthropic is inappropriately interfering with how its AI technology can be used in sensitive military operations [2]. Anthropic contends the Pentagon is illegally retaliating against it by applying a designation typically reserved for protecting against sabotage of national security systems by foreign adversaries [1]. The company emphasizes that the Pentagon has the opportunity to test models before deployment, reinforcing its argument that it cannot manipulate the technology after implementation [2].
The appeals court in Washington, D.C., rejected Anthropic's request for an order blocking the Pentagon's actions while evidence collection continues [1]. However, in a parallel case in San Francisco federal court, a judge granted Anthropic's request, prompting the Trump administration to remove the stigmatizing labels from the company, according to court filings [1]. The split decision means Anthropic cannot participate in new Pentagon contracts but can continue working with other government agencies while the litigation unfolds [2].
Even as the Pentagon argues Anthropic poses a national security threat, the Trump administration is simultaneously moving to deploy Anthropic's new Mythos model across the federal government. Agency heads are now scrambling to determine how to protect their systems from cyber attacks using Mythos, potentially undermining the administration's argument that the company represents a supply chain risk [2]. Oral arguments before the appeals court are scheduled for May 19, when the Trump administration will have an opportunity to present its response to Anthropic's latest filings [1].

Summarized by Navi