Trump bans Anthropic from government as AI companies clash with Pentagon over weapons and surveillance

Reviewed by Nidhi Govil


President Donald Trump ordered federal agencies to stop using Anthropic's AI tools after the company refused to give the Pentagon unrestricted access to its technology. The dispute centers on Anthropic's refusal to allow its AI models to be used for mass surveillance or fully autonomous weapons. OpenAI quickly stepped in to fill the void, raising questions about how AI companies should balance ethical boundaries with national security demands.

Trump Orders Federal Ban on Anthropic After Pentagon Standoff

President Donald Trump announced Friday that he was instructing every federal agency to "immediately cease" use of Anthropic's AI tools, marking an unprecedented escalation in tensions between Silicon Valley and the Department of Defense [1]. The move follows weeks of conflict over military applications of artificial intelligence, with Anthropic refusing to remove contractual restrictions that prevent its technology from being used for mass surveillance and autonomous weapons [1].

Source: ET

Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei earlier in the week, giving the company until Friday to commit to changing the terms of its contract to allow "all lawful use" of its models [1]. When Amodei refused, Trump announced a six-month phase-out period for agencies using Anthropic, while Hegseth threatened to designate the company a supply chain risk — a label typically reserved for foreign adversaries that would effectively blacklist Anthropic from working with any agency or company doing business with the Pentagon [5].

OpenAI Steps In as Industry Debates Ethical Red Lines

Within hours of Trump's announcement, OpenAI revealed it had reached a deal to deploy its AI models in the Department of Defense's classified environments, filling the void left by Anthropic [2]. Sam Altman, OpenAI's CEO, held a public Q&A on X Saturday night, fielding questions about the company's willingness to work with the military on activities Anthropic had ruled out [2].

Source: PYMNTS

Altman claimed OpenAI maintains the same ethical boundaries as Anthropic on mass surveillance and fully autonomous weapons, but takes a different contractual approach [3]. Rather than seeking specific prohibitions in the contract, OpenAI cited applicable laws and policies, including a 2023 Pentagon directive on autonomous weapons and the Fourth Amendment [3]. The company says it will embed ethical red lines directly into model behavior rather than relying solely on contractual language [3].

However, critics argue this approach offers weaker protections. Jessica Tillipman, associate dean for government procurement law studies at George Washington University, noted that OpenAI's published contract excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use" [3].

Tech Workers Rally Behind Anthropic as Industry Fractures

Hundreds of tech workers from major companies including OpenAI, Google, IBM, and Slack signed an open letter urging the Department of Defense to withdraw its supply chain risk designation of Anthropic and calling on Congress to examine whether using such extraordinary authorities against an American company is appropriate [5]. The letter warns that "punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation" [5].

The dispute represents a critical test for collaboration between AI companies and government as the industry transitions from consumer products to national security infrastructure [2]. Anthropic was the first major AI lab to work with the US military, through a $200 million deal signed with the Pentagon last year to create custom models, known as Claude Gov, with fewer restrictions than its regular versions [1].

AI in War Gaming Raises Urgent Questions About Military AI

The controversy arrives amid troubling research about AI in national security contexts. A recent study by Kenneth Payne at King's College London found that leading AI models from OpenAI, Anthropic, and Google deployed nuclear weapons in 95 per cent of simulated war games [4]. The AI models played 21 games involving intense international standoffs, and when one AI deployed tactical nuclear weapons, the opposing AI de-escalated only 18 per cent of the time [4].

Source: Digit

"The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," Payne noted, adding that AI models may not understand "stakes" as humans perceive them [4]. This matters because major powers are already using AI in war gaming, though the extent to which they incorporate AI decision support into actual military decision-making remains uncertain [4].

Boaz Barak, an OpenAI researcher, wrote that blocking governments from using AI for mass surveillance should be everyone's "personal red line," urging the industry to treat government abuse and surveillance as a catastrophic risk requiring the same rigorous evaluations and mitigations applied to bioweapons and cybersecurity threats [5].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited