Microsoft backs Anthropic in legal fight against Pentagon's supply chain risk designation

Reviewed by Nidhi Govil


Microsoft filed an amicus brief urging a federal court to temporarily block the Pentagon's blacklist of Anthropic, arguing the designation would disrupt military operations and harm government contractors. The move comes after Anthropic sued the Department of Defense over its refusal to remove AI safeguards against autonomous weapons and mass surveillance, despite Microsoft's $5 billion investment in the company.

Microsoft Files Amicus Brief Supporting Anthropic's Legal Challenge

Microsoft threw its weight behind Anthropic on Tuesday, filing an amicus brief that urges a federal court in San Francisco to temporarily block the Pentagon's supply chain risk designation of the AI startup [1]. The tech giant argued that immediate enforcement would impose "substantial and wide-ranging costs and risks" on companies using Anthropic's models as foundational technology for products supplied to the U.S. military [2]. Microsoft's intervention marks a significant moment for one of America's largest government contractors, demonstrating a willingness to challenge the Trump administration at a critical juncture.

Source: ET

The amicus brief came just one day after Microsoft launched Copilot Cowork, a new AI product built on Anthropic's Claude models, and four months after Microsoft committed to invest up to $5 billion in Anthropic, a deal requiring the startup to spend at least $30 billion on Microsoft Azure [2]. This financial entanglement makes Microsoft's decision to file the brief look less like altruism and more like financial self-defense, as the designation puts Microsoft's own Copilot and Azure products at risk [4].

Pentagon's Blacklist Stems From Dispute Over AI Safeguards

The legal case against the U.S. Department of Defense erupted after Anthropic refused to remove two ethical guardrails for its AI models during contract negotiations: no use for fully autonomous weapons and no use for mass domestic surveillance of Americans [2]. Following this refusal, the Pentagon dropped all contracts with Anthropic and labeled the company a supply chain risk, a designation historically reserved for foreign adversaries [3]. President Trump separately directed all federal agencies to stop using Anthropic's technology.

Source: Fast Company

Microsoft aligned itself with Anthropic's position in its brief, stating that AI should not be used "to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war" [2]. The lawsuit against the U.S. Department of War, as the Trump administration has renamed the Department of Defense (DoD), challenges what Microsoft's lawyers called an "unprecedented" use of supply chain risk authority under 10 U.S.C. § 3252 [4].

Immediate Compliance Creates Operational Chaos for Government Contractors

Microsoft's brief highlighted a procedural contradiction that exposes the designation's impact on government contractors: while the Pentagon gave itself six months to transition away from Anthropic's tools, it applied the supply chain risk designation to contractors immediately, with no equivalent runway [4]. "Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by DoW. This could potentially hamper U.S. warfighters at a critical point in time," the company warned [1].

The temporary restraining order would "enable a more orderly transition and avoid disrupting the American military's ongoing use of advanced AI," Microsoft argued [3]. This double standard forces tech suppliers to scramble to audit, re-engineer, and reprocure products on a timeline the government didn't impose on itself [4]. Microsoft integrates Anthropic's AI products into technology it supplies to the U.S. military, making the designation's immediate enforcement particularly disruptive [5].

Broader Implications for AI Ethics and National Security

The New York Times DealBook called Microsoft's brief "a remarkable act" and "a momentous decision" for a company that is one of the largest government contractors in America, noting that it stands out in a period when corporate America typically avoids confronting the White House [2]. Microsoft hasn't shied away from fighting Washington at key moments, from its landmark antitrust battle with the Justice Department in the late 1990s to its Supreme Court fight over DACA immigration protections.

If courts allow the Pentagon's move to stand, every AI company selling into the government has just learned that safety guardrails can be reframed as national security threats [4]. The brief makes clear this lesson isn't lost on the broader tech industry: thirty-seven engineers and researchers from OpenAI and Google, including Google chief scientist Jeff Dean, separately filed their own amicus brief in support of Anthropic [2].

Source: PYMNTS

Meanwhile, OpenAI moved quickly to fill the gap left by Anthropic, announcing its own Pentagon deal on the same day the designation came down; CEO Sam Altman later acknowledged the timing looked "opportunistic and sloppy" [2]. The State Department has reportedly already shifted its internal chatbot from Claude Sonnet 4.5 to OpenAI's GPT-4.1 [3]. What remains to be seen is whether procurement law can be weaponized over policy disagreements without due process, and whether the tech industry will accept this precedent quietly.
