Trump fires Anthropic 'like dogs' as Pentagon negotiations quietly resume over military AI dispute

Reviewed by Nidhi Govil


Donald Trump claimed he fired Anthropic over its refusal to drop AI guardrails for military use, even as reports emerged that negotiations between the Pentagon and the AI startup have restarted. The dispute centers on whether Claude can be used for mass domestic surveillance and fully autonomous weapons, with Anthropic's $60 billion financing round now in jeopardy.

Trump Claims Victory as Anthropic Faces Uncertain Future

Donald Trump declared he "fired Anthropic like dogs" in a Thursday interview with Politico, using one of his characteristic phrases to describe his administration's action against the AI company [1]. The statement came as the tech world watches the Pentagon's unprecedented confrontation with Anthropic unfold, with massive implications for how military AI will be developed and deployed in the United States. Trump on Friday ordered the entire federal government to cease using Anthropic's technology, a directive that the State and Treasury departments have already begun implementing, according to their respective heads [2].

Source: Gizmodo


Pentagon Deal Collapses Over AI Guardrails

The conflict erupted when the Department of Defense demanded that Anthropic drop guardrails on its AI model Claude that prohibit its use in mass domestic surveillance and fully autonomous weapons [1]. Defense Secretary Pete Hegseth responded to Anthropic's refusal by threatening to designate the company as a supply chain risk, a label that has never before been applied to a US company and would bar all government contractors from using its technology [2]. The designation amounts to what industry observers have described as corporate murder, potentially devastating Anthropic's business model and competitive position.

Negotiations Resume Despite Public Tensions

Despite Trump's inflammatory rhetoric, negotiations between the Pentagon and Anthropic have restarted over the military's use of the company's AI and the contract between the two, according to reports from the Financial Times and Bloomberg [2]. Anthropic CEO Dario Amodei is currently in talks with Emil Michael, Under Secretary of Defense for Research and Engineering, though the discussions face significant obstacles given that Michael last week tweeted that Amodei was a "liar" with a "god-complex" [1]. The company has yet to acknowledge receiving formal notice of the supply chain risk designation, leaving the entire situation in limbo.

Claude Already Embedded in Military Operations

The irony of the dispute is that Claude is already embedded in active military operations through its integration into Palantir's Maven Smart System [2]. The US military is currently using Claude to help choose targets in bombing campaigns against Iran, according to the Wall Street Journal. As planning for strikes in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets by importance, turning weeks-long battle planning into real-time operations [1]. This use of AI in defense raises troubling questions about target-selection accuracy, particularly after a school in Minab, Iran, built on an old Revolutionary Guard base that closed 15 years ago, was bombed, killing 168 people, most of them children.

Financial Stakes and Industry Fallout

Anthropic's corporate survival hangs in the balance: its most recent financing round, valued at some $60 billion, is now in jeopardy over the fracas with the Pentagon, according to Axios [2]. The retributive blacklisting could cause significant financial harm if formally enacted. Meanwhile, OpenAI seized the opening created by the severed government contracts, announcing its own deal with the military late last week. That move elicited a backlash that propelled Claude's app to the top of the US download charts, and OpenAI CEO Sam Altman later admitted the conduct appeared "opportunistic and sloppy" [2]. Amodei sent a scorching message to employees calling Altman "mendacious" and dismissing his safety claims as "safety theater," while Altman conceded his company would have no control over how the military used OpenAI's technology.

Ethical Implications of AI in Defense

The standoff highlights fundamental tensions around the ethical implications of AI and how tech companies navigate demands from their most powerful potential customer. Silicon Valley, including OpenAI, has backed Anthropic in the fight over the designation, and Amodei has vowed the company will sue over the label [2]. Whether Anthropic can hold its position after taking such a public stand remains uncertain, especially given that compromise seems unlikely and the company's survival depends on finding a resolution. The tech industry is now watching closely to see whether principled opposition to military AI applications can coexist with commercial viability, or whether government pressure will force even the most ethics-focused companies to capitulate.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited