Anthropic sues Pentagon over supply chain risk label after refusing autonomous weapons use

Reviewed by Nidhi Govil

Anthropic filed two federal lawsuits against the Trump administration after the Pentagon designated it a supply chain risk, a label typically reserved for foreign adversaries. The AI company refused to allow its Claude AI models to be used for autonomous lethal warfare and mass surveillance of Americans, prompting the government to blacklist it and order all federal agencies to cease using its technology.

Anthropic Files Lawsuit Against Pentagon Over Supply Chain Risk Designation

Anthropic filed two federal lawsuits on Monday challenging the Trump administration's decision to designate it a supply chain risk after the AI company refused to allow its Claude AI models to be used for autonomous weapons and mass surveillance of Americans [1]. The lawsuit, filed in US District Court for the Northern District of California, argues that the government violated Anthropic's First Amendment rights by retaliating against the company for expressing its views on AI safety [1]. A second lawsuit was filed in the US Court of Appeals for the District of Columbia Circuit [1].

Source: Market Screener

The dispute began when Anthropic held firm to two red lines during contract negotiations with the Department of Defense: it didn't want its technology used for mass surveillance of Americans, and it didn't believe its AI models were ready to power fully autonomous lethal weapons with no humans making targeting and firing decisions [4]. Defense Secretary Pete Hegseth argued that the Pentagon should have unrestricted access to AI systems for "any lawful purpose" [4]. When Anthropic refused to budge, President Trump directed every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology," and hours later, Hegseth designated the company a supply chain risk to national security [1].

White House Labels Anthropic "Radical Left, Woke" Company

The White House responded to the lawsuit with sharp rhetoric, calling Anthropic a "radical left, woke company." A White House spokesperson stated: "President Trump will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates" [1]. The statement added that "our military will obey the United States Constitution -- not any woke AI company's terms of service" [1].

Hegseth's directive went beyond the legal requirements of the supply chain risk designation, which by law prevents only a narrow set of companies doing business with the Pentagon from incorporating Anthropic into their systems. Instead, he posted on X that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" [5]. At Tuesday's court hearing, the Trump administration refused to commit to not taking further action against the company, with a Justice Department attorney stating he was "not prepared to offer any commitments on that issue" [3]. President Trump is currently finalizing an executive order that would formally ban usage of Anthropic tools across the government [3].

Source: New York Post

OpenAI and Google Employees Support Anthropic's Legal Challenge

More than 30 OpenAI and Google DeepMind employees filed an amicus brief supporting Anthropic's lawsuit, with signatories including Google DeepMind chief scientist Jeff Dean [2]. The brief stated that "the government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry" [2]. The filing also affirmed that Anthropic's stated concerns about autonomous weapons and mass surveillance are legitimate issues warranting strong guardrails [2].

The Google and OpenAI employees argued that without public law to govern ethical use of AI, the contractual and technical restrictions developers impose on their systems serve as critical safeguards against catastrophic misuse [2]. They wrote that "current AI models are not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone, and the risks of their deployment for that purpose require some kind of response and guardrails" [1].

Billions in Revenue at Risk as Business Partners Pull Back

Anthropic's chief financial officer Krishna Rao revealed that hundreds of millions of dollars in expected revenue this year from work tied to the Pentagon is already at risk, and that the company could ultimately lose billions of dollars in sales if the government pressures a broad range of companies to stop doing business with the AI startup [5]. Rao disclosed that Anthropic's all-time sales since commercializing its technology in 2023 exceed $5 billion, though the company has spent over $10 billion to train and deploy its models and remains deeply unprofitable [5].

Anthropic chief commercial officer Paul Smith provided concrete examples of the blacklisting's impact, all citing the supply chain risk designation: a financial services customer paused negotiations over a $15 million deal; two leading financial services companies refused to close deals valued together at $80 million unless they gained the right to unilaterally cancel their contracts for any reason; and a grocery store chain cancelled a sales meeting [5]. Smith wrote that "all have taken steps that reflect deep distrust and a growing fear of associating with Anthropic" [5].

Legal Experts Question Government's Use of National Security Powers

Several advocacy groups filed a brief supporting Anthropic, including the Foundation for Individual Rights and Expression, the Electronic Frontier Foundation, the Cato Institute, the Chamber of Progress, and the First Amendment Lawyers Association [1]. They argued that Pentagon retaliation against Anthropic will "silence future speech from those who fear the government attempting to harm their business or extinguish it entirely" [1].

Source: Analytics Insight

Legal experts believe the administration's actions against Anthropic continue a pattern of abusing the law to punish perceived political enemies. Yale Law School professor Harold Hongju Koh, who worked in the Barack Obama administration, stated: "If this is a one-off, you might give the president some deference. But now, it's just unmistakable that this is just the latest in a chain of events related to a punitive presidency" [3]. Georgetown University Law Center professor David Super noted that the provisions the Department of Defense used to sanction Anthropic were designed to protect the country from potential sabotage by its enemies [3].

Anthropic is seeking a preliminary injunction to suspend the sanctions and prevent further irreparable harm to its business. A hearing is scheduled for March 24 in San Francisco, though the company had requested an even earlier date [3]. The case will test whether the government can weaponize national security designations against domestic companies that refuse government contracts based on free speech concerns about AI safety.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited