12 Sources
[1]
Microsoft says court should temporarily block Pentagon's blacklist of Anthropic
Microsoft CEO Satya Nadella attends the 56th annual World Economic Forum (WEF) meeting in Davos, Switzerland, January 20, 2026.

Microsoft threw its support behind Anthropic on Tuesday and advocated for a temporary restraining order that would block the Pentagon's supply chain designation "for all existing contracts." Such a move would "enable a more orderly transition and avoid disrupting the American military's ongoing use of advanced AI," Microsoft said in a filing. "Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by DoW. This could potentially hamper U.S. warfighters at a critical point in time."
[2]
Microsoft's brief in Anthropic case shows new alliance and willingness to challenge Trump administration
A brief filed by Microsoft in Anthropic's lawsuit against the U.S. Department of War shows the deepening ties between the two companies, and Microsoft's willingness to take on the federal government at key moments in its history. Microsoft on Tuesday urged a federal judge in San Francisco to temporarily block the Pentagon's designation of Anthropic as a supply chain risk, arguing that immediate enforcement would hurt Microsoft and other government contractors that depend on Anthropic's technology. The government's designation imposes "substantial and wide-ranging costs and risks" on companies that use Anthropic's models "as a foundational layer of their own products and services, which they provide to the U.S. military," Microsoft said in the filing. The New York Times DealBook called Microsoft's brief "a remarkable act" and "a momentous decision" for a company that is one of the largest government contractors in America, noting that it stands out in a period when corporate America's unwritten rule has been to avoid picking fights with the White House. It came a day after Microsoft launched Copilot Cowork, a new AI product built on Anthropic's Claude models, and four months after Microsoft committed to invest up to $5 billion in the startup in a deal that includes Anthropic spending at least $30 billion on Microsoft Azure. Amazon, which has invested $8 billion in Anthropic, has not publicly weighed in on the lawsuit or the supply chain risk designation. We've contacted the company for comment. Microsoft hasn't shied away from fighting with Washington, D.C., at key moments in its history, ranging from its landmark antitrust battle with the Justice Department in the late 1990s to its Supreme Court fight against the Trump administration over DACA immigration protections. The Redmond-based company has built one of the deepest government-relations operations in tech, led by President and Vice Chair Brad Smith, a former D.C. 
lawyer whom the New York Times once called "a de facto ambassador for the technology industry at large." Anthropic sued the Department of War on Monday over the designation, which is historically reserved for foreign adversaries. It followed the collapse of contract negotiations in which Anthropic refused to drop two guardrails on its AI models: no use for fully autonomous weapons and no use for mass domestic surveillance of Americans. President Trump separately directed all federal agencies to stop using Anthropic's technology. OpenAI, meanwhile, moved quickly to fill the gap left by Anthropic, announcing its own Pentagon deal on the same day the designation came down. CEO Sam Altman later acknowledged the timing looked "opportunistic and sloppy." Thirty-seven engineers and researchers from OpenAI and Google, including Google chief scientist Jeff Dean, separately filed their own amicus brief in support of Anthropic. In its amicus brief, Microsoft said AI should not be used "to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war," aligning itself with Anthropic's position on the two sticking points in the negotiations. Microsoft also flagged a double-standard in the government approach: the Pentagon gave itself six months to transition off Anthropic's models but made the designation effective immediately for contractors. Without a restraining order, Microsoft warned, it and other companies would have to "act immediately to alter existing product and contract configurations" for the military.
[3]
Microsoft Backs Anthropic in Pentagon Fallout Despite Heated Rivalry
Microsoft is taking Anthropic's side in its fight to overturn the Pentagon's "supply-chain risk" designation. Late last month, the Pentagon dropped all contracts with Anthropic and labeled the AI company a "supply-chain risk" after the latter refused to drop safeguards against mass surveillance and completely autonomous weapons. In response, Anthropic filed two lawsuits against the Department of Defense last week. In an amicus brief filed on Tuesday in support of those lawsuits, Microsoft asked the federal court to issue a temporary block to the DoD designation until the case is decided. "A temporary restraining order will permit the parties to pursue a negotiated resolution that will better serve all involved and avoid wide-ranging negative business impacts," Microsoft wrote in the filing. The designation requires all companies working with the Pentagon to ditch Anthropic's models in work for the Department, effective immediately, even though the agencies have a six-month phase-out period. Microsoft, which is both a long-time government contractor and an investor in Anthropic, warned that decoupling will be tough. "A temporary restraining order will enable a more orderly transition and avoid disrupting the American military's ongoing use of advanced AI," Microsoft wrote in the filing. "Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by DoW. This could potentially hamper U.S. warfighters at a critical point in time." (DoW refers to the Department of War, the Trump administration's preferred renaming of the Department of Defense.) AI has been a crucial helper in increasing the speed and scale of the U.S. attack on Iran. The scale of the attacks has been massive, "double" that of the initial "shock and awe" phase of the 2003 invasion of Iraq, according to military experts. In just the first 24 hours, the U.S. hit roughly a thousand targets, per Bloomberg.
One of these targets was allegedly an elementary school in southern Iran. Anthropic is leaning into its new pro-humanity stance, with the company announcing on Wednesday that it was launching an internal think tank to research the large-scale implications and dangers AI may pose for things like the economy and safety. According to The Verge, as part of the changes, cofounder Jack Clark will be leading the think tank under his new title, "head of public benefit." But Anthropic did willingly lend Claude's services to the U.S. military at one point. The technology was reportedly used in the capture of Venezuelan President Nicolas Maduro and is allegedly still being used by the military in Iran. The Pentagon's Central Command still uses Claude in some capacity, according to the Wall Street Journal. The DoD has nothing to worry about with its Anthropic breakup, though, because OpenAI was quick to fill the space left behind by Anthropic, much to the dismay of ChatGPT users and some company employees. The government-level transition to OpenAI is currently underway, and the State Department reportedly already shifted its internal chatbot model from Anthropic's Claude Sonnet 4.5 to OpenAI's GPT-4.1. Even though OpenAI once banned its AI from being used for military purposes, WIRED reported last week that the Pentagon had been testing OpenAI's models through a Microsoft Azure workaround as far back as 2023. Microsoft's amicus brief comes on the heels of yet another brief in support of Anthropic, this time signed by 37 employees at Google and OpenAI.
[4]
Microsoft Sides With Anthropic Against Trump Admin's Supply Chain Risk Designation - Decrypt
Microsoft argued the DoD used a foreign-adversary security designation in an "unprecedented" way. Microsoft has up to $5 billion invested in Anthropic, while Anthropic has committed to buy $30 billion in Azure compute under the partnership. That context makes its decision to file an amicus curiae brief in support of Anthropic's lawsuit against the U.S. Department of Defense look less like altruism and more like financial self-defense. The brief, filed March 10 in San Francisco, argues that a temporary restraining order blocking enforcement of the Pentagon's "supply chain risk" designation would serve the public interest. Microsoft itself is a major DoD contractor, and that designation puts its own products at risk. Defense Secretary Pete Hegseth directed that no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic -- a sweep potentially broad enough to catch Microsoft's own Copilot and Azure products, which ship with support for Claude. The brief highlights a procedural contradiction that has received little attention in mainstream coverage: The Department of Defense gave itself a six-month phase-out period to transition away from Anthropic's tools, but applied the designation to contractors immediately with no equivalent runway. Microsoft's lawyers called this out directly, noting that tech suppliers must now scramble to audit, re-engineer, and reprocure products on a timeline the government didn't impose on itself. Microsoft also raised an alarm that cuts to the heart of the legal dispute. The supply chain risk authority invoked -- 10 U.S.C. § 3252 -- has historically been reserved for foreign adversaries. Only one such designation has ever been issued publicly under related statutes, and that was against Acronis AG, a Swiss software firm with Russian ties. Using it against a San Francisco AI startup is, as Microsoft put it, "unprecedented." The brief's most pointed argument is structural.
If a contract dispute between one agency and one company can trigger a national-security blacklist, then every company doing business with the federal government just inherited a new category of existential risk. Microsoft's lawyers described an industry model built on interconnected services, where one banned component can freeze entire product lines. There's an irony here that's hard to ignore. Microsoft is simultaneously OpenAI's biggest backer -- with investments valued at approximately $135 billion -- and now one of Anthropic's loudest courtroom defenders. OpenAI, for its part, rushed to sign a deal with the DoD hours after the Anthropic blacklist dropped, a move that drew internal backlash and led to public acknowledgment from OpenAI CEO Sam Altman that the announcement "looked opportunistic and sloppy." Microsoft backed both horses. The brief stops short of endorsing Anthropic's specific AI safety positions on autonomous weapons and mass surveillance -- the two red lines that triggered the standoff. Instead, it frames the case in terms any government contractor can understand: due process, orderly transitions, and the effects of weaponizing procurement law over policy disagreements. Microsoft's request is a temporary restraining order, not a verdict. The tech giant wants the clock slowed down enough for the parties to negotiate -- and for its own products to stay legally deployable while they do. What's at stake goes beyond one company's contract. If courts allow the Pentagon's move to stand, then every AI company selling into the government just learned that safety guardrails can be reframed as national security threats. Microsoft's brief makes clear that lesson isn't lost on the broader tech industry -- and that the company isn't willing to learn it quietly.
[5]
Microsoft shows support for Anthropic's legal case against the US Department of Defense
Claude creator, Anthropic, has already had an eventful 2026 -- and now it's challenging the recently renamed US Department of War. The Pentagon has threatened to designate the company as a "supply chain risk," which Anthropic is currently fighting by suing the US government. Additionally on Monday, Anthropic requested a restraining order that would pause the Pentagon's national security blacklisting of the company, arguing the court should hear the company's suit first before carrying out the government's order. The latest update today is that an amicus brief filing suggests Microsoft may back Anthropic's cause (via Reuters). Microsoft argued it would be directly impacted by the DOD's 'supply chain risk' designation of Anthropic. This is because Microsoft integrates a number of Anthropic's AI products into tech it supplies to the US military. The company therefore made the case that the temporary restraining order is necessary in order to avoid massive disruption; if Anthropic was blacklisted, suppliers such as Microsoft would then have to make a costly pivot in order to quickly rebuild any of their offerings that were previously based on Anthropic's AI tech. Microsoft requested to file its amicus brief in a federal court in San Francisco, but a judge still needs to approve its official inclusion in the case. This follows another similarly supportive filing on Monday from a group of 37 researchers and engineers hailing from OpenAI and Google. So, why is it all kicking off between the AI suppliers and the US Gov? Allow me to briefly recap the saga so far: Last July, Anthropic announced it had reached a deal with the then-named US Department of Defense "with a $200 million ceiling" that would see Anthropic providing "prototype frontier AI capabilities that advance U.S. national security."
Six months later, disagreements arose between Anthropic and the DOD regarding the implementation of safeguards in the company's LLMs -- specifically the guardrails that would prevent these models from being deployed in autonomous weapon targeting and domestic surveillance scenarios. Anthropic's AI safety lead quit shortly afterwards, issuing a bizarre resignation letter via X that included the phrase "The world is in peril." Shortly after that, Anthropic revised its stance on 'pausing' development of more powerful AI models if suitable safety safeguards weren't yet ready, effectively ditching its defining safety promise. Even so, Anthropic ultimately stood up to the DOD and refused to remove the aforementioned LLM safeguards. "We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values," Anthropic CEO Dario Amodei wrote, later adding: "Frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk." Beyond threatening to blacklist Anthropic, the Pentagon has since responded by giving itself six months to phase out its use of the company's products. OpenAI has stepped in to fill the AI void, though CEO Sam Altman has said the company will be amending the language of its deal with the DOD. Bottom line, OpenAI has recently drawn "three main red lines" in the sand that are suspiciously similar to Anthropic's original objections: "No use of OpenAI technology for mass domestic surveillance. No use of OpenAI technology to direct autonomous weapons systems. No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as 'social credit')."
As yet no toys have come flying out of the government's pram in response to OpenAI's stance, but it is worth noting that the company's lead of robotics, Caitlin Kalinowski, resigned recently -- after those 'red lines' were published. "This wasn't an easy call," she says. "AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." So, all rather messy, and this is unlikely to be the last we'll hear about the AI industry's legal wranglings with the US government.
[6]
Microsoft urges pause on Pentagon's blacklisting of Anthropic
San Francisco (United States) (AFP) - Microsoft on Tuesday warned a judge that the Pentagon's blacklisting of Anthropic could hamper US warfighters and imperil the country's drive to lead in artificial intelligence. In a brief, Microsoft backed Anthropic's request for an order stopping the Pentagon from implementing its ban on the use of Anthropic AI until the matter is settled in court. Anthropic filed suit this week against the Trump administration, alleging the US government retaliated against the company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans. In the complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked. Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei. The label not only blocks use of the company's technology by the Pentagon, but also requires all defense vendors and contractors to certify that they do not use Anthropic's models in their work with the department.

AI overhaul

Microsoft argued in an amicus brief that blacklisting Anthropic was an unprecedented response to a contract dispute that portended ill for the technology sector as well as the US military. "This is not the time to put at risk the very AI ecosystem that the administration has helped to champion," Microsoft said in the brief. A temporary restraining order would allow time to avoid disrupting the American military's ongoing use of advanced AI, Microsoft argued. "Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by Department of War." "This could potentially hamper US warfighters at a critical point in time." The row erupted days before the US military strike on Iran.
Anthropic's Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on its classified systems. Anthropic had infuriated Pentagon chief Pete Hegseth by insisting the technology should not be used for mass surveillance or fully autonomous weapons systems. President Donald Trump subsequently ordered every federal agency to cease all use of Anthropic's technology. "AI should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war," Microsoft said in the filing. More than three dozen AI industry insiders from OpenAI and Google, including Google chief scientist Jeff Dean, argued in support of Anthropic in an amicus brief filed with the court on Monday. In its lawsuit, Anthropic said it was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible." "Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the lawsuit says.
[7]
Anthropic's Pentagon showdown is drawing Silicon Valley into a larger fight
The dispute between Anthropic and the Department of Defense is quickly becoming a broader test of how far the government can go in policing AI companies' policies -- and how much support those companies can rally from the wider research community. A fair showing of top AI researchers had already signed a public letter backing Anthropic. Now 37 of them have taken a more formal step, signing an amicus brief filed with the court Monday. The filing underscores how the clash is evolving from a narrow contract dispute into something bigger: a test of whether the government can effectively blacklist an American AI company for setting limits on how its technology is used. The outcome could shape how much independence AI companies have to impose safety guardrails, especially when those limits collide with national security priorities. The group behind the amicus brief includes Google chief scientist Jeff Dean, along with 19 researchers from OpenAI and 10 from Google DeepMind. The researchers filed the brief in their personal capacities, not as representatives of their respective companies.
[8]
Microsoft Backs Anthropic, Urging a Judge to Halt Pentagon's Actions Against AI Company
SAN FRANCISCO (AP) -- Microsoft is throwing its weight behind Anthropic in asking a federal court to block the Trump administration's designation of the artificial intelligence company as a supply chain risk. Microsoft, in a legal filing, is challenging Defense Secretary Pete Hegseth's action last week to shut Anthropic out of military work by labeling its AI products as a national security threat. The Pentagon took the action against Anthropic after an unusually public dispute over the company's refusal to allow unrestricted military use of its AI model Claude. President Donald Trump also said he was ordering all federal agencies to stop using Claude. "The use of a supply chain risk designation to address a contract dispute may bring severe economic effects that are not in the public interest," Microsoft, a major government contractor, said in its Tuesday filing in the San Francisco federal court, where Anthropic sued the Trump administration on Monday. The Pentagon's action "forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a U.S. company," Microsoft's legal brief says. It asks a judge to order a temporary lifting of the designation to allow for more "reasoned discussion." The Pentagon declined to comment, saying it does not comment on matters in litigation. Microsoft also sided with Anthropic's two ethical red lines that were a sticking point in the contract negotiations. "Microsoft also believes that American AI should not be used to conduct domestic mass surveillance or start a war without human control," Microsoft said. "This position is consistent with the law and broadly supported by American society, as the government acknowledges."
[9]
Microsoft files amicus brief supporting Anthropic in Pentagon fight
The technology giant Microsoft has filed an amicus brief in Anthropic's case against the Trump administration, urging the court to temporarily block implementation of the Pentagon's designation of the Claude maker as a supply chain risk. The filing is a significant development for Anthropic, which first filed a complaint against the administration on Monday over the Pentagon's determination and President Trump's recent order for federal agencies to cease use of Anthropic's AI products after negotiations over safety guardrails fell apart. A Microsoft spokesperson told The Hill on Tuesday that the company believes "everyone involved shares common goals and we need time and a process to find common ground." "The Department of War needs reliable access to the country's best technology," the spokesperson said. "And everyone wants to ensure AI is not used for mass domestic surveillance or to start a war without human control. The government, the entire tech sector, and the American public need a path to achieve all these goals together." The filing comes just hours before a federal judge in northern California will take up Anthropic's request for a temporary block of the designation. Anthropic also filed suit in the D.C. federal appeals court, requesting a review of the Pentagon's determination. Microsoft's amicus brief, shared with The Hill Tuesday, argues a pause in the designation will help avoid "disrupting" the U.S. military's ongoing uses of advanced AI. The company, which has partnered with Anthropic on numerous products, warned U.S. warfighters could be "hampered" if companies are forced to immediately alter their existing products and contract configurations with the Pentagon. Any immediate implementation, Microsoft said, could have "broad negative ramifications" for the "entire technology sector" and "American business community."
Anthropic is asking the court to ultimately reverse the Pentagon's designation, which has typically been reserved for foreign adversaries and restricts defense contractors from using the company's products. The AI firm alleges the federal government retaliated against it for its "protected viewpoint" on AI safety and the limitations of its own AI models. While workers from AI firms OpenAI and Google separately filed an amicus brief Monday in Anthropic's complaint, Tuesday's filing makes Microsoft the first standalone company to voice support for such a pause. Despite the Pentagon's position, Anthropic has argued the restrictions cannot prohibit anyone who does business with the military from doing business with the AI firm. Google, Amazon and Apple said last week that Anthropic's AI tools will still be available on their platforms for work that does not involve the Pentagon.
[10]
Microsoft files amicus brief in support of Anthropic's lawsuit with US DOD - The Economic Times
Microsoft filed a proposed brief supporting Anthropic's lawsuit seeking to temporarily block the US Department of Defense designation of the AI startup as a supply-chain risk. The company said the decision directly affects it and could cause costly disruptions for suppliers relying on Anthropic's technology.

Microsoft filed on Tuesday a proposed brief in support of Anthropic's lawsuit asking the court to temporarily block the U.S. Department of Defense's designation of the AI startup as a supply-chain risk. Microsoft added that it was directly impacted by the DOD's designation. The Claude maker had filed a lawsuit to block the Pentagon from placing it on a national security blacklist on Monday, escalating a high-stakes battle with the US military over usage restrictions on its technology. Microsoft's filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic's products.
[11]
Microsoft Backs Anthropic Against Pentagon Ban | PYMNTS.com
The tech giant's comments were delivered in its motion for a proposed amicus brief in Anthropic's lawsuit against the Trump administration that was filed Monday (March 9), according to the report. Microsoft said in its filing that a restraining order halting the designation would enable a more orderly transition, avoid disrupting the military's use of AI and prevent tech firms from having to immediately alter their product and contract configuration, per the report. A Microsoft spokesperson told Reuters: "We believe everyone involved shares common goals, and we need time and a process to find common ground." It was reported in January that Microsoft has become one of Anthropic's top customers and that the tech giant is on track to spend around $500 million per year to use Anthropic's AI in Microsoft products. In addition, Microsoft has increased its focus on selling its cloud customers Anthropic AI models, which could drive more revenue for both firms. In November, Microsoft said it formed a collaboration with Anthropic in which Anthropic will scale its Claude AI model on Microsoft Azure. Anthropic agreed to purchase $30 billion of Azure compute capacity, while Microsoft pledged to invest up to $5 billion in Anthropic. The White House told federal agencies Feb. 27 to stop using Anthropic's AI products after the company refused a Pentagon demand that it agree that the military can use its models in "all lawful use cases." Anthropic wanted contract language that would prohibit the use of its models for autonomous weapons and mass domestic surveillance, while Pentagon officials took the position that the military should retain final authority on use cases, as long as they are lawful. Defense Secretary Pete Hegseth said of Anthropic in a Feb.
27 post on X: "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable." Anthropic sued the U.S. government Monday (March 9) to block it from placing a supply chain risk designation on the AI company. The label also means that anyone wishing to do business with the military would need to end their relationship with Anthropic.
[12]
Microsoft backs Anthropic in amicus brief to halt US DOD's 'supply-chain risk' designation
March 10 (Reuters) - Microsoft filed on Tuesday a brief in support of Anthropic's lawsuit asking the court to temporarily block the U.S. Department of Defense's designation of the AI startup as a supply-chain risk. In an amicus brief filing in a federal court in San Francisco, Microsoft backed Anthropic's request for a temporary restraining order against the Pentagon order, arguing that its determination should be paused while the court considers the case. Microsoft, which integrates the AI lab's products and services into technology it provides to the U.S. military, said that it was directly impacted by the DOD designation. The Claude maker had filed a lawsuit to block the Pentagon from placing it on a national security blacklist on Monday, escalating a high-stakes battle with the U.S. military over usage restrictions on its technology. Microsoft's filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic's products. While the Pentagon gave itself six months to phase out Anthropic, it did not provide the same transition period for contractors that use Anthropic's products or services to perform under DOD, Microsoft said. "Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning," the company said. Microsoft added that a temporary restraining order would allow time to negotiate a solution while protecting military access to advanced technology and ensuring AI is not used for domestic mass surveillance or to start a war without human control. On Monday, a group of 37 researchers and engineers from OpenAI and Google had also filed an amicus brief in support of Anthropic. (Reporting by Vallari Srivastava and Anhata Rooprai in Bengaluru; Editing by Alan Barona)
Microsoft filed an amicus brief urging a federal court to temporarily block the Pentagon's blacklist of Anthropic, arguing the designation would disrupt military operations and harm government contractors. The filing follows Anthropic's lawsuit against the Department of Defense, brought after the AI company refused to remove safeguards against autonomous weapons and mass surveillance, and it reflects Microsoft's deep financial stake: the company has committed to invest up to $5 billion in Anthropic.
Microsoft threw its weight behind Anthropic on Tuesday, filing an amicus brief that urges a federal court in San Francisco to temporarily block the Pentagon's supply chain risk designation of the AI startup
1
. The tech giant argued that immediate enforcement would impose "substantial and wide-ranging costs and risks" on companies using Anthropic's models as foundational technology for products supplied to the U.S. military2
. Microsoft's intervention marks a significant moment for one of America's largest government contractors, demonstrating willingness to challenge the Trump administration at a critical juncture.
The amicus brief came just one day after Microsoft launched Copilot Cowork, a new AI product built on Anthropic's Claude models, and four months after Microsoft committed to invest up to $5 billion in Anthropic, a deal that requires the startup to spend at least $30 billion on Microsoft Azure [2]. This financial entanglement makes Microsoft's decision to file the brief look less like altruism and more like financial self-defense, as the designation puts Microsoft's own Copilot and Azure products at risk [4].

The legal case against the U.S. Department of Defense erupted after Anthropic refused to remove two ethical guardrails on its AI models during contract negotiations: no use for fully autonomous weapons and no use for mass domestic surveillance of Americans [2]. Following this refusal, the Pentagon dropped all contracts with Anthropic and labeled the company a supply chain risk, a designation historically reserved for foreign adversaries [3]. President Trump separately directed all federal agencies to stop using Anthropic's technology.
Microsoft aligned itself with Anthropic's position in its brief, stating that AI should not be used "to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war" [2]. The lawsuit against the U.S. Department of War, as the Trump administration has renamed the Department of Defense (DoD), challenges what Microsoft's lawyers called an "unprecedented" use of supply chain risk authority under 10 U.S.C. § 3252 [4].

Microsoft's brief highlighted a procedural contradiction that exposes the designation's impact on government contractors. While the Pentagon gave itself six months to transition away from Anthropic's tools, it applied the supply chain risk designation to contractors immediately, with no equivalent runway [4]. "Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by DoW. This could potentially hamper U.S. warfighters at a critical point in time," the company warned [1].

The temporary restraining order would "enable a more orderly transition and avoid disrupting the American military's ongoing use of advanced AI," Microsoft argued [3]. This double standard forces tech suppliers to scramble to audit, re-engineer, and reprocure products on a timeline the government didn't impose on itself [4]. Microsoft integrates Anthropic's AI products into technology it supplies to the U.S. military, making the designation's immediate enforcement particularly disruptive [5].
The New York Times DealBook called Microsoft's brief "a remarkable act" and "a momentous decision" for a company that is one of the largest government contractors in America, noting it stands out in a period when corporate America typically avoids confronting the White House [2]. Microsoft hasn't shied away from fighting Washington at key moments, from its landmark antitrust battle with the Justice Department in the late 1990s to its Supreme Court fight over DACA immigration protections.

If courts allow the Pentagon's move to stand, every AI company selling into the government just learned that safety guardrails can be reframed as national security threats [4]. The brief makes clear this lesson isn't lost on the broader tech industry. Thirty-seven engineers and researchers from OpenAI and Google, including Google chief scientist Jeff Dean, separately filed their own amicus brief in support of Anthropic [2].
Meanwhile, OpenAI moved quickly to fill the gap left by Anthropic, announcing its own Pentagon deal on the same day the designation came down. CEO Sam Altman later acknowledged the timing looked "opportunistic and sloppy" [2]. The State Department reportedly already shifted its internal chatbot model from Claude Sonnet 4.5 to OpenAI's GPT-4.1 [3]. What remains to be seen is whether procurement law can be weaponized over policy disagreements without due process, and whether the tech industry will accept this precedent quietly.