Anthropic clashes with US military over AI warfare ethics as Trump orders federal ban

Reviewed by Nidhi Govil

Anthropic refused to allow its Claude AI to be used for mass surveillance or fully autonomous weapons, leading President Trump to order federal agencies to stop using the company's technology. Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk, barring military contractors from working with the firm. The dispute highlights mounting tensions between AI companies and the government over ethical boundaries in military applications.

Anthropic Stands Firm on AI Safety Principles

A high-stakes standoff between Anthropic and the US military has exposed deep divisions over how AI should be deployed in warfare. The conflict erupted when Anthropic refused to remove safeguards preventing its Claude AI from being used for mass surveillance of Americans or to guide autonomous weapons without human oversight [1]. CEO Dario Amodei stated the company "cannot in good conscience" comply with the Department of Defense demand that AI models be available for "any lawful use" without constraints [5].

Source: Digit

The disagreement centers on a $200-million contract under which Claude AI has powered the Maven Smart System since 2024 [1]. In January, the Department of Defense issued a memo requiring all AI procurement contracts to permit unrestricted lawful use. When Anthropic refused to comply by the Friday deadline set by Defense Secretary Pete Hegseth, President Donald Trump directed federal agencies to "immediately cease" using Anthropic products, allowing a six-month phase-out period [2].

Source: The Hill

Pentagon Designates Anthropic as Supply-Chain Risk

Hegseth escalated the dispute by designating Anthropic a supply-chain risk to national security, effectively barring any contractor, supplier, or partner doing business with the US military from conducting commercial activity with the company [3]. The designation sent shockwaves through Silicon Valley, with companies scrambling to understand whether they must sever ties with one of the industry's most popular AI models [3].

Anthropic responded by announcing it would "challenge any supply chain risk designation in court," arguing such action would "set a dangerous precedent for any American company that negotiates with the government" [3]. The company maintained it received no direct communication from the Pentagon or White House regarding negotiations. Legal experts questioned whether Hegseth possessed statutory authority for such sweeping restrictions, and federal contract specialists were unable to determine which Anthropic customers must cut ties [3].

OpenAI Steps In as Alternative Provider

Following the rupture, the US military quickly signed a deal with OpenAI to deploy its AI models in classified environments [3]. OpenAI CEO Sam Altman announced that the agreement includes similar protections against domestic mass surveillance and requires human responsibility for the use of force, including for autonomous weapon systems. In an internal memo, Altman reportedly told employees that OpenAI maintains the same red lines as Anthropic but believed these guardrails could be managed through technical requirements [5].

The Maven Smart System, which uses AI models for applications including image processing and tactical support, speeds up attack capabilities by suggesting and prioritizing targets [1]. The system has been used in previous conflicts and reportedly in recent attacks on Iran. As of March 5, Dario Amodei was reportedly back in talks with the department, suggesting a resolution remains possible [1].

Source: Japan Times

Ethical Dilemmas and International Regulation Efforts

The dispute underscores broader concerns about military applications of AI technology. While AI's precision targeting could theoretically reduce civilian casualties, ongoing conflicts in Ukraine and Gaza, where AI assists target identification, have seen high civilian death tolls [1]. Political geographer Craig Jones notes that "there is no evidence that AI lowers civilian deaths or wrongful targeting decisions and it may be that the opposite is true" [1].

Academics and legal experts met in Geneva this week to discuss lethal autonomous weapons systems as part of long-running efforts toward international agreement on ethical or legal uses of AI in warfare [1]. Political scientist Michael Horowitz of the University of Pennsylvania observes that rapid technological development is outpacing slow international discussions. LLMs powering fully autonomous weapons without human oversight are not currently reliable and do not comply with international laws requiring the ability to distinguish between military and civilian targets [1].

Employees at Google and OpenAI circulated a petition calling on company leaders to refuse permission for AI models to be used for domestic mass surveillance or to autonomously kill people without human oversight [1]. The petition argued the Pentagon is "trying to divide each company with fear that the other will give in" [5]. A project led by political scientist Toni Erskine concluded that "fully autonomous weapons systems without a human in the loop are ethically untenable and should be banned internationally," while noting non-autonomous systems also carry risks requiring regulation [1].

Michael Pastor, dean for technology law programs at New York Law School, said Anthropic is "right to press hard on what 'for lawful purposes' means," noting that if the Pentagon is unwilling to clarify whether it would use the technology for mass domestic surveillance, "that raises flags Anthropic seems justified in waving" [5]. The escalating conflict between AI companies and the government may determine what leverage each side holds when their views on appropriate technology use clash, with significant ramifications for future defense contracts [5].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited