Appeals court keeps Pentagon's supply-chain risk label on Anthropic amid conflicting rulings

Reviewed by Nidhi Govil

10 Sources


A federal appeals court in Washington, DC refused to block the Pentagon's designation of Anthropic as a supply-chain risk, creating a legal conflict with a San Francisco judge's opposite ruling. The AI company, caught in a dispute over autonomous weapons and surveillance restrictions, faces potential billions in lost revenue as two separate lawsuits unfold under different supply-chain laws.

Appeals Court Ruling Keeps Anthropic Blacklisting in Place

A US Court of Appeals in Washington, DC on Wednesday denied Anthropic's request to pause the Pentagon's national security supply-chain risk designation, delivering a setback to the AI company in its escalating legal dispute with the Trump administration. [1][2] The three-judge appellate panel ruled that Anthropic "has not satisfied the stringent requirements" to have the label temporarily lifted, even while acknowledging the company would "likely suffer some degree of irreparable harm" from the ongoing designation. [1][5] The ruling creates immediate business uncertainty for the Claude AI developer, which has said it could face billions of dollars in lost revenue from the blacklisting. [2][3]

Source: AP


Conflicting Court Decisions Create Legal Uncertainty

The Washington, DC appeals court ruling directly conflicts with a decision issued last month by US District Judge Rita Lin in San Francisco federal court, who found the Pentagon's move appeared to constitute "classic illegal First Amendment retaliation". [4][5] The San Francisco judge ordered the supply-chain risk label removed, and the Trump administration initially complied, restoring access to Anthropic's AI tools inside the Pentagon and throughout the federal government. [1] The conflicting preliminary judgments arise because the government sanctioned Anthropic under two different supply-chain laws with similar effects, and each court ruled on only one of them. [1] Anthropic says it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security. [1]

Military Use of AI and Autonomous Weapons at Center of Dispute

The legal dispute erupted after Anthropic, citing AI safety and ethics concerns, refused to allow the military to use Claude for US surveillance or autonomous weapons deployment. [3] The company argues it is being illegally punished for insisting that Claude lacks the accuracy needed for certain sensitive operations, such as carrying out deadly drone strikes without human supervision. [1] Defense Secretary Pete Hegseth issued orders designating Anthropic under 10 USC § 3252 and 41 USC § 4713, requiring defense contractors to certify that they do not use Claude in their work with the military. [4] The Justice Department counters that Anthropic's refusal to lift the restrictions could create uncertainty over how the Pentagon may use Claude and risk disabling military systems during operations. [3]

Source: NYT


National Security Concerns Override Financial Harm

In explaining its ruling, the appellate panel emphasized military priorities during active conflict. "On one side is a relatively contained risk of financial harm to a single private company," the court wrote. "On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." [2][4] The panel said it did not want to risk "a substantial judicial imposition on military operations" or "lightly override" the military's judgments on national security. [1] The battle between Anthropic and the Trump administration is playing out as the Pentagon deploys AI in military contexts, including its war against Iran. [1]

First Amendment and Free Speech Implications

In its lawsuits, Anthropic claims the government violated its First Amendment right to free speech by retaliating against its views on AI safety. [3] The company also alleges it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. [3] The San Francisco judge had found that the Department of Defense likely acted in bad faith, driven by frustration over Anthropic's proposed limits on how its technology could be used and its public criticism of those restrictions. [1] Several experts in government contracting and corporate rights have said Anthropic has a strong case against the government, though courts are sometimes reluctant to overrule the White House on matters of national security. [1]

Industry Impact and Path Forward

The conflicting court decisions create substantial business uncertainty at a pivotal moment, as US companies compete globally to lead in AI development, according to Matt Schruers, CEO of the Computer & Communications Industry Association. [5] Some AI researchers have said the Pentagon's campaign against Anthropic "chills professional debate" about the performance of AI systems. [1] The court in Washington, DC is scheduled to hear oral arguments on May 19, and final decisions in the company's two lawsuits may be months away. [1][2] Anthropic spokesperson Danielle Cohen said the company remains confident "the courts will ultimately agree that these supply chain designations were unlawful". [1] The parties have revealed few details about how the Department of Defense has used Claude, or about its progress in transitioning staff to other AI tools from Google DeepMind, OpenAI, or others. [1]

Source: Reuters



TheOutpost.ai


© 2026 Triveous Technologies Private Limited