45 Sources
[1]
The Pentagon is developing alternatives to Anthropic, report says
After their dramatic falling out, it doesn't seem as though Anthropic and the Pentagon are getting back together. Instead, the Pentagon is building tools to replace Anthropic's AI, according to a Bloomberg conversation with Cameron Stanley, the chief digital and AI officer at the Pentagon. "The Department is actively pursuing multiple LLMs into the appropriate government-owned environments," he said. "Engineering work has begun on these LLMs, and we expect to have them available for operational use very soon." Anthropic's $200 million contract with the Department of Defense (DOD) broke down over the last several weeks after the two parties failed to come to an agreement over the degree to which the military could obtain unrestricted access to Anthropic's AI. While Anthropic sought to include a contractual clause that prohibits the Pentagon from using its AI for mass surveillance of Americans, or to deploy weapons that can fire without human intervention, the Pentagon didn't budge. Instead, OpenAI swooped in and made its own agreement with the Pentagon. The Department of Defense -- known under the Trump administration as the Department of War -- also signed an agreement with Elon Musk's xAI to use Grok in classified systems. It makes sense, then, that the Pentagon is working on phasing Anthropic's technology out of its workflows. While some reports said there was a small possibility that Anthropic would reconcile with the Pentagon, this news suggests that the government is preparing to forge ahead without the company. In fact, Defense Secretary Pete Hegseth has declared Anthropic a supply chain risk, a designation usually reserved for foreign adversaries, which bars companies that work with the Pentagon from working with Anthropic as well. Anthropic is challenging this designation in court.
[2]
Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems
The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department. If the designation holds, Anthropic could lose billions of dollars in expected revenue this year. Anthropic wants to resume business as usual until the litigation is resolved. Rita Lin, the judge overseeing the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to honor Anthropic's request. Justice Department attorneys, writing for the Department of Defense and other agencies in the Tuesday filing, described Anthropic's concerns about potentially losing business as "legally insufficient to constitute irreparable injury" and called on Lin to deny the company a reprieve. The attorneys also wrote that the Trump administration was motivated to act because of "concerns about Anthropic's potential future conduct if it retained access" to government technology systems. "No one has purported to restrict Anthropic's expressive activity," they wrote.
The government argues that Anthropic's push to limit how the Pentagon can use its AI technology led Defense Secretary Pete Hegseth to "reasonably" determine that "Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system." The Department of Defense and Anthropic have been fighting over potential restrictions on the company's Claude AI models. Anthropic believes its models shouldn't be used to facilitate broad surveillance of Americans and that they are not currently reliable enough to power fully autonomous weapons. Several legal experts previously told WIRED that Anthropic has a strong argument that the supply-chain measure amounts to illegal retaliation. But courts often favor national security arguments from the government, and Pentagon officials have described Anthropic as a contractor that has gone rogue and whose technologies cannot be trusted. "In particular, DoW became concerned that allowing Anthropic continued access to DoW's technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains," Tuesday's filing states. "AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic -- in its discretion -- feels that its corporate 'red lines' are being crossed." The Defense Department and other federal agencies are working to replace Anthropic's AI tools with products from competing tech companies in the next few months. One of the military's top uses of Claude is through Palantir data analysis software, people familiar with the matter have told WIRED.
In Tuesday's filing, the lawyers argued that the Pentagon "cannot simply flip a switch at a time when Anthropic currently is the only AI model cleared for use" on the department's classified systems "and high-intensity combat operations are underway." The department is working to deploy AI systems from Google, OpenAI, and xAI as alternatives. A number of companies and groups, including AI researchers, Microsoft, a federal employee labor union, and former military leaders have filed court briefs in support of Anthropic. None have been filed in support of the government. Anthropic has until Friday to file a counter-response to the government's arguments.
[3]
Trump Administration Vows Legal Fight on Anthropic AI Tool Ban
The Trump administration vowed a legal fight to oust Anthropic PBC from all US government agencies following a dispute over how the company's artificial intelligence technology would be used. "For national security reasons, the terms of service for plaintiff Anthropic PBC's artificial intelligence (AI) technology have become unacceptable to the Executive Branch," the US Justice Department, which is representing the government, said Tuesday in a court filing. Anthropic, the maker of the Claude chatbot, sued to block a declaration by the Defense Department that the company posed a risk to the US supply chain, escalating a high-stakes dispute over safeguards on AI technology used by the military. Anthropic, which wants limits on how its products are deployed, argued a prolonged legal fight could cost it billions of dollars in lost revenue. The company is asking the court to issue a preliminary injunction to block the government's ban from taking effect while the legal fight plays out. In their filing, DOJ lawyers alleged that during Defense Department negotiations in early 2026 with Anthropic to add a provision to its contract that permits the Pentagon to use its technology for any lawful purpose, Anthropic "refused" to accept the term due to its usage policies for Claude. Throughout the negotiations, "Anthropic's behavior more generally caused the department to question whether Anthropic represented a trusted partner with whom the department was willing to contract in this highly sensitive area," according to the filing. The agency also became worried that letting Anthropic continue to access its technical and operational warfighting systems "would introduce unacceptable risk" into Defense Department supply chains, according to the filing. 
"After all, AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic -- in its discretion -- feels that its corporate 'red lines' are being crossed," the lawyers wrote. At the direction of President Donald Trump, the Pentagon and other federal agencies are shifting their AI work to other providers based on a risk designation typically reserved for companies from countries the US views as adversaries. The startup had demanded assurances from the government that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment. Defense Secretary Pete Hegseth, in a Feb. 27 social media post, said the US military would continue to use Anthropic's technology tools for "no more than six months to allow for a seamless transition to a better and more patriotic service." The case is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California (San Francisco).
[4]
Trump administration defends Anthropic blacklisting in US court
NEW YORK, March 17 (Reuters) - The Trump administration said in a Tuesday court filing that the Pentagon's blacklisting of Anthropic was justified and lawful, opposing the artificial intelligence lab's high-stakes lawsuit challenging the decision. Defense Secretary Pete Hegseth designated Anthropic, the maker of popular AI assistant Claude, a national security supply chain risk on March 3 after the company refused to remove guardrails against its technology being used for autonomous weapons or domestic surveillance. The Trump administration's filing says Anthropic is unlikely to succeed on its claims that the U.S. action violated speech protections under the U.S. Constitution's First Amendment, asserting the dispute stems from contract negotiations and national security concerns, not retaliation. "It was only when Anthropic refused to release the restrictions on the use of its products -- which refusal is conduct, not protected speech -- that the President directed all federal agencies to terminate their business relationships with Anthropic," the administration's legal filing said. The filing, from the U.S. Justice Department, said "no one has purported to restrict Anthropic's expressive activity." Anthropic's lawsuit in California federal court asks a judge to block the Pentagon's decision while the case plays out. Some legal experts say the company appears to have a strong case that the government overreached. President Donald Trump backed Hegseth's move, which excludes Anthropic from a limited set of military contracts but could damage the company's reputation and cause billions of dollars in losses this year, according to its executives. The designation came after months of negotiations between the Pentagon and Anthropic reached an impasse, prompting Trump and Hegseth to denounce the company and accuse it of endangering American lives with its usage restrictions.
Anthropic has disputed those claims and said AI is not yet safe enough to be used in autonomous weapons. The company said it opposes domestic surveillance as a matter of principle. In its March 9 lawsuit, Anthropic said the "unprecedented and unlawful" designation violated its free speech and due process rights, while running afoul of a law requiring federal agencies to follow specific procedures when making decisions. The Pentagon separately designated Anthropic a supply chain risk under a different law that could expand the order to the entire government. Anthropic is challenging that move in a second lawsuit in a Washington, D.C. appeals court.
[5]
Defense Department says Anthropic poses 'unacceptable risk' to national security
The Department of Defense said giving Anthropic continued access to its warfighting infrastructure would "introduce unacceptable risk" to its supply chains in a court filing submitted in response to the AI company's lawsuit. If you'll recall, Anthropic sued the government to challenge the supply chain risk designation it received for refusing to allow its model to be used for mass surveillance and the development of autonomous weapons. In its filing, the department explained that its secretary, Pete Hegseth, had a provision incorporated into AI service contracts allowing the agency to use contractors' technologies for any lawful purpose. Anthropic refused those terms, and apparently the company's behavior caused the Pentagon to question whether it truly was a "trusted partner" that it could work with when it comes to "highly sensitive" initiatives. "After all, AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic -- in its discretion -- feels that its corporate 'red lines' are being crossed," the Pentagon wrote in its filing. "DoW deemed that an unacceptable risk to national security," it added, referring to the agency as the Department of War, which is the Trump administration's preferred name for it. It was due to those concerns that President Trump ordered federal agencies to stop using Anthropic's technology, the filing reads. The company is asking the court to issue a preliminary injunction pausing the ban while it challenges its supply chain risk designation in court. While Anthropic's clients could continue working with the company on non-defense-related projects, it says the label could cause it to lose billions of dollars in revenue. It's not quite clear if Anthropic is still trying to reach a new deal with the government, as was reported before it filed its lawsuit.
As The New York Times notes, Microsoft, Google and OpenAI have since filed friend-of-the-court briefs in support of Anthropic, asking the court to block the government's ban.
[6]
Sam Altman faced 'serious questions' in meeting with lawmakers about OpenAI's defense work
Anthropic had been trying to renegotiate its contract with the DOD, but the talks stalled over a disagreement about how the technology could be used. The DOD wanted Anthropic to grant the military unfettered access to its models for all lawful purposes, while Anthropic sought assurance that its models would not be used for fully autonomous weapons or domestic mass surveillance. Altman said in a post on X the day the deal fell apart that prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems, are two of the company's "most important safety principles." He said the DOD agreed and put them into the arrangement. OpenAI published an excerpt of its contract with the DOD, which says that the agency "may use the AI System for all lawful purposes." The company said it's confident the DOD will not be able to use its AI systems for mass surveillance or fully autonomous weapons because of OpenAI's safety stack, the contract language and existing laws. "We think it's very important to support the United States government and the democratic process," Altman told CNBC on Thursday. He added that while OpenAI disagrees with the Pentagon's decision to designate Anthropic a supply chain risk, he thinks the government needs to be able to make decisions about "how the most important things in the country are going to work." The clash between Anthropic and the DOD came as a shock to officials and technologists in Washington. Many had come to view Anthropic's models as superior -- they were the first to be deployed in the agency's classified networks -- and championed the company's ability to integrate with existing defense contractors like Palantir. Kelly told CNBC that Congress "needs to have a role." "We need to have legislation on this that creates some of these boundaries, guardrails," he said.
"This is the United States Congress, things don't move as fast as we would like, but this technology is moving very quickly."
[7]
U.S. Says Anthropic Is an 'Unacceptable' National Security Risk
The U.S. government said on Tuesday that it had deemed the artificial intelligence company Anthropic an "unacceptable risk" to national security because the start-up could disable or alter its technology to suit its own interests, rather than the country's priorities, in a time of war. In a 40-page filing in U.S. District Court for the Northern District of California, lawyers for the government said they questioned whether Anthropic was a "trusted partner," especially given that A.I. systems "are acutely vulnerable to manipulation." Giving Anthropic access to the Department of Defense's warfighting infrastructure would therefore "introduce unacceptable risk into DoW supply chains," the government said, referring to the Department of War, which is the Trump administration's favored term. Anthropic did not immediately have a comment. The filing was the government's first response to lawsuits from Anthropic, a leading A.I. company based in San Francisco that makes the Claude chatbot. On March 9, Anthropic filed two lawsuits -- one in the same court and the other in the U.S. Court of Appeals for the District of Columbia Circuit -- to challenge Defense Secretary Pete Hegseth's decision last month to label it a "supply chain risk." Mr. Hegseth acted after the Pentagon battled with the company over a $200 million contract for the use of A.I. in classified systems. During negotiations for the contract, Anthropic had said it did not want its A.I. used for mass surveillance of Americans or with autonomous lethal weapons. The Pentagon countered that it was not up to a private company to tell it how to use the technology. When the two sides could not agree, Mr. Hegseth said Anthropic posed a supply chain risk, a move that effectively cuts the company off from working with the U.S. government. The label was previously used only to bar foreign companies that posed a national security risk.
[8]
Big Tech backs Anthropic in fight against Trump administration
A slew of America's biggest tech companies have swung behind Anthropic in its lawsuit against leaders in the Trump Administration. Since Monday, Google, Amazon, Apple and Microsoft have publicly supported Anthropic's legal action to overturn Defense Secretary Pete Hegseth's unprecedented decision to label it a "supply chain risk". In legal filings, the tech giants expressed concerns about the government's retaliation against Anthropic after it refused to let its tools be used in mass surveillance and autonomous weapons. The government's behaviour could cause "broad negative ramifications for the entire technology sector", Microsoft warned. Microsoft, which works extensively with the US government and the Department of Defense (DoD), said it agrees with Anthropic that AI tools "should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war". A joint amicus filing, a filing by parties with a strong interest in a case, also came from the Chamber of Progress. The tech advocacy group, funded by and representing Google, Apple, Amazon, Nvidia and many other tech companies, said they shared concerns over the government punishing Anthropic for public speech. The Chamber of Progress stressed that it is "ideologically diverse" but concerned about the impact of the government's action on protections under the First Amendment of the US Constitution. Facebook owner Meta is one holdout among major tech companies backing Anthropic's action. It left the Chamber of Progress in 2025 after years of membership. The Chamber of Progress said its current members all "oppose governmental attempts to force or restrict access to speech". Anthropic's lawsuit claims its free speech rights have been violated through government retaliation for its public statements, as Hegseth, President Donald Trump and others have accused the company of being "woke" or otherwise politically at odds with the administration. 
The joint amicus brief called the department labelling Anthropic a risk "a potentially ruinous sanction" for businesses and little more than a "temper tantrum". "If left in place, that sanction imposes a culture of coercion, complicity, and silence, in which the public understands that the government will use any means at its disposal to punish those who dare to disagree," the brief goes on. Another amicus brief was filed by almost 40 OpenAI and Google employees. And two dozen former high ranking US military officials filed their own brief, saying the government's actions "send the message that investing in national security carries the risk of capricious retaliation or disproportionate punishment for voicing disagreement". Big Tech companies throwing support behind Anthropic may seem out of step, given that executives at the firms have supported and donated large sums of money to Trump since his return to office last year. The suddenness and severity of the action against Anthropic appear to have crossed a line for major tech companies. In a court hearing in San Francisco on Tuesday, a lawyer for Anthropic said the DoD had gone so far as to "affirmatively reach out to Anthropic customers, urging them to stop working with Anthropic". A lawyer from the Department of Justice representing the government did not deny such actions and refused to say the government would take no further action against Anthropic. "When the government starts to overreach and step on basic levers of capitalism, the alarm bells go off," Gary Ellis, the chief executive of Remesh AI who previously worked in US politics, told the BBC. "If the government can do this and blacklist a company, one that has incredibly good technology, these executives know this is serious and can quickly impact them." 
While officials have claimed they did not want to use Anthropic's technology for either mass surveillance or autonomous weapons, Anthropic claims Hegseth started to insist that language in its government contracts specifying such prohibitions be removed. Anthropic and the DoD spent weeks negotiating revised contract language, with the row spilling into the public domain in February. Anthropic chief executive Dario Amodei then went public with his refusal to entirely remove the guardrails. It led to Trump berating the company and announcing on his Truth Social platform that Anthropic tools like Claude, in use by government and military agencies since 2024, would be removed from the entire government. Hegseth then designated Anthropic a supply chain risk, branding it not secure enough for government use, the first time an American company has ever received such a label. John Coleman, legislative counsel at FIRE, which was part of the joint amicus brief, said he expects more "clashes" between Anthropic and the DoD, given the tension between tech leaders being able to express themselves and government claims of national security issues. "It's our hope that other Silicon Valley companies follow Anthropic's lead in staying true to their principles and rejecting federal pressure to abandon them", Coleman said. "A free society requires no less."
[9]
Pentagon Moving to Replace Anthropic Amid AI Feud, Official Says
The Pentagon is working to develop alternatives to Anthropic PBC's artificial intelligence tools, according to a senior US defense official, following a Trump administration decision to declare the company a supply-chain risk in a feud over safeguards governing military use of the technology. Cameron Stanley, the Pentagon's chief digital and AI officer, said that it would take more than a month to start transitioning from the Anthropic products currently being used in US military operations in Iran but that efforts are under way to install a different large language model. "The Department is actively pursuing multiple LLMs into the appropriate government-owned environments," Stanley said in an interview. "Engineering work has begun on these LLMs and we expect to have them available for operational use very soon." Stanley's remarks highlighted defense officials' willingness to abandon Anthropic as an AI provider following a breakdown in talks last month. After Anthropic refused to drop its demand for assurances that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment, the Pentagon moved to label the firm a threat to the supply chain. Spokespeople for Anthropic had no immediate comment. That designation threatens a $200 million pact for Anthropic to provide the Pentagon with classified AI tools and could bar it from partnering with other companies on defense work. Defense Secretary Pete Hegseth and President Donald Trump have already outlined a six-month period for the military and other federal agencies to shift from Anthropic to different AI developers. In recent weeks, OpenAI and Elon Musk's xAI have won approval to perform classified work for the Pentagon. It's unclear how easily or how quickly their products could be integrated into existing programs such as the Maven Smart System, an AI-enabled mission control platform made by Palantir Technologies Inc. that Bloomberg News has reported the US is using in the Iran campaign.
Another provider, Alphabet Inc.'s Google, is introducing its Gemini AI agents across the Pentagon's three million-strong workforce to automate routine tasks. The company will initially operate on unclassified networks, and then move into classified work, according to Emil Michael, the under secretary of defense for research and engineering. The addition of other possible vendors illustrates the Defense Department's desire to plug the gap created by ejecting Anthropic and to speed artificial intelligence adoption. A new strategy released in January called for making the military an "AI-first" force by increasing experimentation with the most advanced models and reducing bureaucratic barriers to use. Until recently Anthropic was the only AI system that could operate in the Pentagon's classified cloud, and its Claude tool has won favor among defense personnel for its ease of use. Anthropic last week sued to block the supply-chain risk designation and other steps by the Trump administration to bar it from government work. In court filings, the company has claimed the moves violate its rights to free speech and due process under the US Constitution while jeopardizing billions of dollars in business. Even so, Michael said in an interview hours after Anthropic filed its lawsuit that the military was moving on and that there was little chance to revive talks. "I don't think there's a scenario where this gets resolved in that way," he said.
[10]
The Pentagon Claims That Anthropic's 'Soul' Creates a Supply-Chain Risk. That Makes No Sense
"Their model has a soul, a 'constitution' -- not the US Constitution." Emil Michael, the Under Secretary of Defense for Research and Engineering, appeared on CNBC on Thursday, where he faced questions about the Pentagon's designation of Anthropic as a supply chain risk. Michael tried to make his case that Anthropic was a unique threat to American national security, an utterly confusing stance when the U.S. military is still using Anthropic's AI model Claude. "We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection," Michael argued on CNBC. The "soul" is a reference to the "Soul overview" guiding document baked into Claude that influences its interactions with users and its "personality." Late last year, a version of the document was discovered by a user and briefly made headlines. It included guidance like "being truly helpful to humans is one of the most important things Claude can do for both Anthropic and for the world." The AI startup subsequently confirmed the legitimacy of the document, saying it was a work in progress, and in January, the document was released in full as "Claude's Constitution." The Pentagon gave Anthropic an ultimatum in late February that it would have to lift guardrails that prohibit Claude from being used in mass domestic surveillance and fully autonomous weapons or face being labeled a supply chain risk. Anthropic refused, and the Pentagon gave the company that designation, something that's never been used against a U.S. company before. Anthropic is now suing, and the Pentagon is taking the next six months to get Claude out of its systems. CNBC's Andrew Ross Sorkin asked Michael about the contradiction in the fact that the U.S.
military was claiming Claude was a dire threat to national security while stopping short of an immediate decoupling. "If, in fact, this was and is a genuine supply chain risk, wouldn't you be removing this service immediately from all of the Pentagon?" Sorkin asked. The CNBC host then pointed out that Claude was still being used "as we speak" in Iran, and there are reports from Reuters and elsewhere that the Pentagon and other parts of the U.S. government were exploring an "extension period" to use it for even longer. "If it was a genuine supply chain risk, and this was not part of a larger negotiation, what some people think is a political shakedown, why wouldn't you remove it immediately?" Sorkin asked. Michael has been extremely hostile to Anthropic CEO Dario Amodei, according to several reports, but insisted that it wasn't punitive and that it would take time to get Claude out. "If they had never entered the department systems, it wouldn't be an issue on this, and they could move on. But they're embedded in our systems. And as you know, Andrew, you can't just rip out a system that's deeply embedded overnight." Michael went on to argue that "we're watching it very closely, making sure we have control so that there's no way that the model could be corrupted or that the insider threat could do anything through it," referring to the possibility that Claude could do something dangerous and against the military's interests. "But the supply chain threat is real, but we also have to move off it, and that doesn't happen overnight. This is not just Outlook, where you could delete it from your desktop," Michael argued. That argument might make sense to some people who don't think about it too hard. But it skirts around the fact that it's not what a supply chain risk designation is for. When the U.S. national security establishment calls Chinese hardware manufacturers like Huawei a supply chain risk, they're worried that the Chinese government could have access to U.S. 
devices through a backdoor of some kind. When a Swiss cybersecurity company with Russian ties received the supply chain risk designation from the Office of the Director of National Intelligence (DNI) in 2025, it was similarly over a concern that data protected by the company could be compromised. The U.S. government didn't want any data held by the intelligence community to fall into the wrong hands, and any offending software needed to be reported within three days and removed "promptly." If something that posed a legitimate risk to U.S. national security was sitting on American military hardware right now, it would be ripped out immediately, and they'd start using pen and paper to fight this war, not gamble with a real potential threat. Or at least that's what an intelligent military would do. The difference with Anthropic is not that they actually pose a risk; it's that they've placed guardrails that won't allow two very specific use cases that the company believes would create a dangerous environment or violate the U.S. Constitution. Anthropic has said that the reason the company doesn't want autonomous weapons isn't even out of principle; it's that it doesn't believe Claude can do that safely. But Hegseth and the geniuses at the Pentagon don't care and are going to punish Anthropic for not bending the knee.
[11]
Microsoft's brief in Anthropic case shows new alliance and willingness to challenge Trump administration
A brief filed by Microsoft in Anthropic's lawsuit against the U.S. Department of War shows the deepening ties between the two companies, and Microsoft's willingness to take on the federal government at key moments in its history. Microsoft on Tuesday urged a federal judge in San Francisco to temporarily block the Pentagon's designation of Anthropic as a supply chain risk, arguing that immediate enforcement would hurt Microsoft and other government contractors that depend on Anthropic's technology. The government's designation imposes "substantial and wide-ranging costs and risks" on companies that use Anthropic's models "as a foundational layer of their own products and services, which they provide to the U.S. military," Microsoft said in the filing. The New York Times DealBook called Microsoft's brief "a remarkable act" and "a momentous decision" for a company that is one of the largest government contractors in America, noting that it stands out in a period when corporate America's unwritten rule has been to avoid picking fights with the White House. It came a day after Microsoft launched Copilot Cowork, a new AI product built on Anthropic's Claude models, and four months after Microsoft committed to invest up to $5 billion in the startup in a deal that includes Anthropic spending at least $30 billion on Microsoft Azure. Amazon, which has invested $8 billion in Anthropic, has not publicly weighed in on the lawsuit or the supply chain risk designation. We've contacted the company for comment. Microsoft hasn't shied away from fighting with Washington, D.C., at key moments in its history, ranging from its landmark antitrust battle with the Justice Department in the late 1990s to its Supreme Court fight against the Trump administration over DACA immigration protections. The Redmond-based company has built one of the deepest government-relations operations in tech, led by President and Vice Chair Brad Smith, a former D.C. 
lawyer whom the New York Times once called "a de facto ambassador for the technology industry at large." Anthropic sued the Department of War on Monday over the designation, which is historically reserved for foreign adversaries. It followed the collapse of contract negotiations in which Anthropic refused to drop two guardrails on its AI models: no use for fully autonomous weapons and no use for mass domestic surveillance of Americans. President Trump separately directed all federal agencies to stop using Anthropic's technology. OpenAI, meanwhile, moved quickly to fill the gap left by Anthropic, announcing its own Pentagon deal on the same day the designation came down. CEO Sam Altman later acknowledged the timing looked "opportunistic and sloppy." Thirty-seven engineers and researchers from OpenAI and Google, including Google chief scientist Jeff Dean, separately filed their own amicus brief in support of Anthropic. In its amicus brief, Microsoft said AI should not be used "to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war," aligning itself with Anthropic's position on the two sticking points in the negotiations. Microsoft also flagged a double-standard in the government approach: the Pentagon gave itself six months to transition off Anthropic's models but made the designation effective immediately for contractors. Without a restraining order, Microsoft warned, it and other companies would have to "act immediately to alter existing product and contract configurations" for the military.
[12]
Pentagon CTO says 'no chance' of renewed Anthropic negotiations
March 12 (Reuters) - Pentagon Chief Technology Officer Emil Michael on Thursday ruled out negotiations with Anthropic after the agency labeled the AI lab a supply-chain risk in a dispute over restrictions on how the U.S. military can use its technology. "There's no chance. The (Anthropic) leadership has proven, through the leaking and through sort of bad faith negotiation that they don't want to reach an agreement," Michael said in an interview with CNBC. Anthropic did not immediately respond to a Reuters request for comment. Last week, the U.S. Department of Defense labeled Anthropic a supply-chain risk, a move that effectively bans its technology from military use and bars government contractors from deploying it in work for the U.S. armed forces. Anthropic sued the Trump administration on Monday, calling the government's actions unlawful, and said the move will put hundreds of millions of dollars worth of revenue in jeopardy. The dispute is notable in part because Anthropic aggressively courted the U.S. national security apparatus before most other AI companies. CEO Dario Amodei has said he is not opposed to AI-driven weapons, but believes the current generation of AI technology isn't good enough to be accurate. Reuters has reported that Anthropic's investors were racing to contain the damage caused by the fallout with the Pentagon. A group, including OpenAI and some of these investors, expressed concern over the government's move. Reporting by Harshita Mary Varghese in Bengaluru; Editing by Leroy Leo
[13]
Anthropic seeks appeals court stay of Pentagon supply-chain risk designation
Dario Amodei, Anthropic's CEO, speaking on CNBC's "Squawk Box" outside the World Economic Forum in Davos, Switzerland, on Jan. 21, 2025. Anthropic on Wednesday sought a stay from a U.S. appeals court after the Pentagon said the company was a supply-chain risk, pending a judicial review of the case, adding that the designation could cost it billions of dollars in lost revenue. Anthropic's latest request comes after a weeks-long dispute over technology guardrails on the use of Anthropic's artificial intelligence tools by the U.S. military. Defense Secretary Pete Hegseth labelled the firm a supply-chain risk and barred the Pentagon and its contractors from using its AI products. The AI firm separately filed a lawsuit earlier this week in a California federal court to challenge its Pentagon blacklisting. In a filing with the U.S. Court of Appeals for the District of Columbia Circuit on Wednesday, Anthropic said the Pentagon's supply-chain designation would cause the company "irreparable harm." According to Anthropic's court filing, more than 100 enterprise customers have reached out to the company about the designation. "By Anthropic's best estimate, for 2026, the government's adverse actions risk hundreds of millions, or even multiple billions, of dollars in lost revenue," lawyers for the AI firm wrote. The Pentagon did not immediately respond to a request for comment outside of regular business hours.
[14]
Microsoft Backs Anthropic in Pentagon Fallout Despite Heated Rivalry
Microsoft is taking Anthropic's side in its fight to overturn the Pentagon's "supply-chain risk" designation. Late last month, the Pentagon dropped all contracts with Anthropic and labeled the AI company a "supply-chain risk" after the latter refused to drop safeguards against mass surveillance and completely autonomous weapons. In response, Anthropic filed two lawsuits against the Department of Defense last week. In an amicus brief filed on Tuesday in support of those lawsuits, Microsoft asked the federal court to issue a temporary block to the DoD designation until the case is decided. "A temporary restraining order will permit the parties to pursue a negotiated resolution that will better serve all involved and avoid wide-ranging negative business impacts," Microsoft wrote in the filing. The designation requires all companies working with the Pentagon to ditch Anthropic's models in work for the Department, effective immediately, even though the agencies have a six-month phase-out period. Microsoft, which is both a long-time government contractor and an investor in Anthropic, warned that decoupling will be tough. "A temporary restraining order will enable a more orderly transition and avoid disrupting the American military's ongoing use of advanced AI," Microsoft wrote in the filing. "Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by DoW. This could potentially hamper U.S. warfighters at a critical point in time." (DoW is in reference to the Department of War, the Trump administration's preferred renaming of the Department of Defense.) AI has been a crucial helper in increasing the speed and scale of the U.S. attack on Iran. The scale of the attacks has been massive, "double" that of the initial "shock and awe" phase of the 2003 invasion of Iraq, according to military experts. In just the first 24 hours, the U.S. hit roughly a thousand targets, per Bloomberg.
One of these targets was allegedly an elementary school in southern Iran. Anthropic is leaning into its new pro-humanity stance, with the company announcing on Wednesday that it was launching an internal think tank to research the large-scale implications and dangers AI may pose to areas like the economy and safety. According to The Verge, as part of the changes, cofounder Jack Clark will be leading the think tank under his new title, "head of public benefit." But Anthropic did willingly lend Claude's services to the U.S. military at one point. The technology was reportedly used in the capture of Venezuelan President Nicolas Maduro, and is allegedly still being used by the military in Iran. The Pentagon's Central Command still uses Claude in some capacity, according to the Wall Street Journal. The DoD has nothing to worry about with its Anthropic breakup, though, because OpenAI was quick to fill the space left behind by Anthropic, much to the dismay of ChatGPT users and some company employees. The government-level transition to OpenAI is currently underway, and the State Department reportedly already shifted its internal chatbot model from Anthropic's Claude Sonnet 4.5 to OpenAI's GPT-4.1. Even though OpenAI once banned AI from being used for military purposes, WIRED reported last week that the Pentagon had been testing OpenAI's models through a Microsoft Azure workaround as far back as 2023. Microsoft's amicus brief comes on the heels of yet another brief in support of Anthropic, this time signed by 37 employees at both Google and OpenAI.
[15]
Tech companies aren't slowing relationships with Anthropic
Why it matters: The Trump administration tried to kneecap one of the world's most powerful AI companies. So far, it's just giving it a leg up. The big picture: Anthropic is raking in more revenue and attention than it did before President Trump and the Pentagon went after the company. * The designation looked like it could hit Anthropic where it hurts most: enterprise contracts, its core revenue driver. The opposite happened. * "I've seen enough. Anthropic is the new default for businesses," Ara Kharazian, lead economist at Ramp, said in a post with new data indicating a surge in the share of enterprises choosing Anthropic over OpenAI for their first AI contracts. * Anthropic's enterprise revenue has continued to grow since the designation was applied, according to Pitchbook. Meanwhile, Claude has surged to the top spot on U.S. app downloads. What they're saying: "We continue to work really, really closely with them, and we have no plans, in terms of the responsibility I have, to change anything," said Brian Delahunty, VP of engineering at Google. * Delahunty told Axios that he recently spoke with Anthropic's CTO about continued partnership. * Multiple Fortune 500 enterprises have told Axios they are not currently making any changes to their Anthropic contracts. Context: President Trump called for the government to stop using Claude, giving agencies six months to phase out existing deployments. * Anthropic is suing the Pentagon over the supply chain risk designation, further fueling uncertainty for companies on how the label will be interpreted and enforced. * Tech industry groups and employees are rallying around Anthropic, filing amicus briefs calling for a pause to the designation. Zoom out: For many companies, Anthropic's models are simply too valuable to abandon. * The AI lab has become a core provider of enterprise AI tools used for coding, research and business automation, embedding its models deep inside modern software stacks. 
* "If you had to rip the tool out because of some policy or regulatory issue, it's a big deal. These are hard changes to make, and I don't think anyone expected it to get so sticky and so difficult to remove," Bret Greenstein, chief AI officer at consulting firm West Monroe, told Axios. The bottom line: The White House wanted to isolate Anthropic, but the private sector has already voted.
[16]
Anthropic-Pentagon battle shows how big tech has reversed course on AI and war
Less than a decade ago, Google employees scuttled any military use of its AI. Now Anthropic is fighting Trump officials not over if, but how The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war - and what lines it will not cross. Amid Silicon Valley's rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech's answer is looking very different than it did even less than a decade ago. Anthropic's feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government's decision to blacklist it from government work violated its first amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons. Anthropic has argued that giving in to the DoD's demands to permit "any lawful use" of its technology would violate its founding safety principles and open up its technology for potential abuse, staking an ethical boundary that others in the industry must decide whether they want to cross. Although Anthropic's refusal to remove safety guardrails and the Pentagon's subsequent retaliation have highlighted longstanding concerns over the use of AI for conflict, the fight has shown how much the goalposts have moved when it comes to big tech's ties to the military. "If people are looking for good guys and bad guys, where a good guy is someone who doesn't support war," said Margaret Mitchell, an AI researcher and chief ethics scientist at the tech firm Hugging Face, "then they're not going to find that here." There are a number of contributing factors in big tech's newfound embrace of militarism. 
Its alignment with the Trump administration, which has included shows of fealty to Trump from major CEOs, has tied tech firms to the government's desire to expand its military capabilities. The administration's vow to overhaul federal agencies using artificial intelligence has also specifically signaled an opportunity for AI firms to integrate their products into government and military operations in a way that could secure revenue for years to come. Looming in the background, concern over China's technological advancement and a surge in international defense spending have also shifted attitudes in the industry. It was not so long ago, however, that working with the military on potentially harmful technology was seen as a red line for many big tech workers. In 2018, thousands of Google employees launched a protest against a program to analyze drone footage for the DoD called Project Maven. "We believe that Google should not be in the business of war," over 3,000 workers stated in an open letter at the time. Google decided not to renew Project Maven following the protests and published policies that barred pursuing technology that could "cause or directly facilitate injury to people". In the years since the Project Maven protest, though, Google has clamped down on employee activism, removed the 2018 language from its policies that prohibited creating technology for weaponry and signed numerous contracts that allow militaries to use its products. In 2024, the tech giant fired over 50 employees in response to protests against the company's military ties to the Israeli government. Chief executive Sundar Pichai sent a memo to employees after the firings stating that Google was a business and not a place to "fight over disruptive issues or debate politics". Google announced just this week that it would provide its Gemini artificial intelligence to give the military a platform for creating AI agents to work on unclassified projects. 
OpenAI too had a blanket ban on allowing any militaries to access its models prior to 2024, but has since reversed course and now has its chief product officer serving as a lieutenant colonel in the US military's "executive innovation corps". The startup, along with Google, Anthropic and xAI, signed an up-to-$200m contract with the DoD last year to integrate its technology into military systems. On the day that Pete Hegseth, the defense secretary, declared Anthropic a supply chain risk, OpenAI secured a deal with the DoD allowing its tech to be used in classified military systems. Elsewhere in the tech industry, more hawkish companies like defense tech firm Anduril, founded the year before the Google Maven protests, and surveillance tech maker Palantir have made partnering with the DoD a cornerstone of their businesses and attempted to sway Silicon Valley politics towards their worldview. Palantir has been ahead of the curve on working with the military, contracting with military intelligence to map planted explosives in Afghanistan in the early 2010s. Chief executive Alex Karp published a book last year dedicated in large part to advocating for closer integration of the tech industry and AI with the US military, in one passage accusing the Google workers who protested in 2018 of being nihilists. After Google dropped the Project Maven contract in 2019, Palantir took it over. Maven is now the name of the classified system that military personnel use to access Anthropic's Claude, according to the Washington Post. Even as Anthropic has received public praise in its standoff with the Pentagon, its co-founder and chief executive Dario Amodei has emphasized that the AI company and the government largely want the same things. "Anthropic has much more in common with the Department of War than we have differences," Amodei wrote in a blogpost last Thursday. 
While the White House has accused Anthropic of being "a radical left, woke company", Amodei's views on the use of AI in conflict and fears of its misuse are far from tree-hugging pacifism. In a lengthy essay published in January, Amodei warned against potential harms of AI such as the creation of deadly bioweapons and threats from China maliciously using the technology. Simultaneously, he argued that companies should arm democratic governments and militaries with the most advanced AI possible to combat autocratic adversaries. He expressed less concern about AI making it easier to kill people or conduct warfare and more about the reliability of the technology and threat of it being consolidated by too small a number of people with "fingers on the button" who could control an autonomous drone army. Amodei's essay also foreshadowed some of the central issues involved in his fight with the Pentagon, including the potential for AI as a tool of mass surveillance. While arguing for bulwarks against the abuse of AI, he stated that his formulation was that it was okay to use the technology for national defense "in all ways except those which would make us more like our autocratic adversaries". While Amodei has so far stuck to the company's red lines, he has also repeatedly stated that he wants Anthropic to continue working with the Defense Department. The company's lawsuit against the DoD showcases how extensively the company has been willing to work with the military and alter its products for their use. "Anthropic does not impose the same restrictions on the military's use of Claude as it does on civilian customers," Anthropic's California lawsuit stated. "Claude Gov is less prone to refuse requests that would be prohibited in the civilian context, such as using Claude for handling classified documents, military operations, or threat analysis." 
The government has reportedly been using Claude for target selection and analysis in its bombing campaign against Iran, a use-case that Anthropic has given no indication that it has an issue with. In his blog post on Anthropic's website last week, Amodei stated that he did not believe that his company had any role in the military's operational decision-making. He claimed that Anthropic supports American frontline warfighters and remains committed to providing them with technology. "We have said to the department of war that we are OK with all use cases," Amodei told CBS News last week. "Basically 98 or 99% of the use cases they want to do, except for two."
[17]
How the Anthropic-Pentagon dispute over AI safeguards escalated
March 11 (Reuters) - A standoff erupted between the U.S. Department of Defense and Anthropic in January after the AI lab refused to loosen safety guardrails on its systems, prompting the Pentagon to label it a 'supply-chain risk,' and putting the company's government contracts in jeopardy. The Claude maker's executives warned that the designation, which they view as retaliation for opposing the use of their technology in autonomous weapons and domestic surveillance, could slash their 2026 revenue by billions of dollars. Legal experts suggest the government's case may be undermined by a mismatch between the law invoked and Anthropic's conduct, internal contradictions in the Pentagon's behavior and evidence that its decision may have been driven by animus rather than security. Here is a timeline of the ongoing conflict:
January 29: The Pentagon and Anthropic clash over eliminating safeguards that could allow the government to use its technology to target weapons autonomously and conduct U.S. domestic surveillance
February 11: The Pentagon pushes AI companies, including Anthropic, to make their AI tools available in classified settings without many of the standard restrictions that the companies apply to other users
February 14: The Pentagon considers ending its ties with Anthropic over the AI lab's insistence on keeping some limits on how the U.S. military uses its models
February 23: U.S. Defense Secretary Pete Hegseth summons Anthropic CEO Dario Amodei to the Pentagon for talks on the military use of Claude
February 24: The Pentagon asks Anthropic to get on board, or risk consequences, including being labeled a supply-chain risk
February 25: The Pentagon asks defense contractors, including Boeing (BA.N) and Lockheed Martin (LMT.N), to assess their reliance on Anthropic
February 26: Pentagon spokesperson Sean Parnell asks Anthropic to allow the Pentagon to use its technology for all lawful purposes, giving the company until 5:01 p.m. ET on February 27 to decide
February 26: Anthropic says it will not accede to the Pentagon's request to eliminate safeguards from its AI systems
February 27: U.S. President Donald Trump directs every federal agency to immediately cease all use of Anthropic's technology
February 27: Hegseth directs the U.S. DoD to designate Anthropic a "supply-chain risk to national security"
February 27: Anthropic says it will challenge in court the Pentagon's decision
February 27: OpenAI announces deal to deploy technology in the DoD's classified network
February 28: OpenAI says its latest agreement with the Pentagon includes three red lines: its technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems or for any high-stakes automated decisions
March 2: The U.S. Departments of State, Treasury and Health and Human Services move to cease using Anthropic's Claude
March 3: Lockheed Martin pledges to follow the DoD's direction, signaling a likely exodus of defense contractors removing Anthropic's tools from their supply chains to protect their federal contracts, legal experts say
March 4: U.S. Treasury Secretary Scott Bessent tells CNBC that the agency will remove Anthropic from its government systems within days
March 4: Big tech industry group pushes to de-escalate the clash, saying a supply-chain risk designation creates uncertainty for companies and could threaten the military's access to the best products and services
March 5: The U.S. DoD formally designates Anthropic as a supply-chain risk
March 6: Amazon says it is helping customers transition DoD workloads to alternative models on its cloud, while customers and partners could continue using Claude for all non-Pentagon workloads
March 6: The U.S. General Services Administration draws up strict rules for civilian artificial-intelligence contracts and terminates Anthropic's OneGov deal, which made Claude available to the federal government
March 9: Anthropic sues to block the Pentagon from placing it on a national security blacklist, saying the designation is unlawful and violates its free speech and due process rights
March 9: Anthropic executives say the U.S. government's blacklisting of the AI firm could cut its 2026 revenue by multiple billions of dollars and cause reputational harm
March 10: Microsoft (MSFT.O) files a brief backing Anthropic's lawsuit, saying the DoD designation directly affects it and that a temporary restraining order is needed to avoid costly supplier disruptions and rushed rebuilding of products that depend on Anthropic
Reporting by Anhata Rooprai in Bengaluru; Editing by Arun Koyyur
[18]
Palantir CEO Alex Karp says there was 'never a sense' AI products would be used for domestic surveillance in Anthropic-DoD feud | Fortune
Right in the middle of the ongoing feud between the Silicon Valley AI company Anthropic and the U.S. Department of Defense over whether the military will use -- or not use -- Anthropic's large language models is yet another company: Palantir. Palantir, the Miami-based data analytics and artificial intelligence platform, is a key software provider for the Department of Defense -- and the main channel by which the Department has been using Anthropic's large language model, Claude. "We are legitimately still in the middle of all this," CEO Alex Karp said in an interview with Fortune on the sidelines of the company's twice-a-year AIP conference on Thursday. "It's our stack that runs the LLMs." Karp says he had been in numerous discussions with all parties involved -- discussions he declined to give specifics about, as he says he doesn't want to "out conversations" or "bash people." But Karp does want to make one thing clear: The Defense Department is not using AI for domestic mass surveillance on U.S. citizens -- and, to his knowledge, it has no plans to. "Without commenting on internal dialogs, there was never a sense that these products would be used domestically," Karp said. "The Department of War is not planning to use these products domestically. That's a completely different kettle of fish... The terms the Department of War wants are completely focused on non-American citizens in a war context." Palantir has a vast business doing work for the U.S. government, including the DoD. Anthropic partnered with Palantir in 2024 to offer its AI technology to the DoD via Palantir. Anthropic also began working directly with the DoD last year to create a version of its technology designed for the Defense Department. The contentious back-and-forth between Anthropic and the Defense Department has been ongoing since around January, and the two sides don't agree on what set it off. 
Statements that Undersecretary of Defense for Research and Engineering Emil Michael made last week allege that Palantir had notified the Pentagon that Anthropic was inquiring about whether its models had been used for the U.S. military mission to capture Venezuelan President Nicolás Maduro. (Anthropic has disputed this characterization, asserting it hasn't discussed the use of Claude for specific operations "with any industry partners, including Palantir, outside of routine discussions on strictly technical matters"). Ever since, the two sides have been locked in a fight over whether Anthropic can write contractual limits on how its models are used. Anthropic CEO Dario Amodei has published multiple blog posts on the matter, including an initial statement at the end of February asserting that the Defense Department had refused to accept safeguards that its LLMs not be used for domestic mass surveillance or the deployment of fully autonomous weapons. Pete Hegseth, the Secretary of Defense, later designated Anthropic a "supply-chain risk," threatening many of the company's commercial relationships, and prompting Anthropic to sue the Pentagon over the designation. Palantir, which was funded by the CIA's venture capital arm early on and whose software has been used in counter-terrorism efforts abroad, has long been accused of helping government and intelligence agencies spy on civilians and potential domestic suspects. Karp has repeatedly rebutted such claims for over a decade and has spoken about the importance of setting technical guardrails around technology that could be used in the U.S. for domestic surveillance. Palantir early on created a "Privacy and Civil Liberties" team -- an interdisciplinary group of engineers, lawyers, philosophers, and social scientists -- tasked with building privacy-protective features into its products and fostering a culture of responsible use. 
The team helped set up internal channels, including an ethics hotline, for employees to flag work they viewed as crossing ethical lines. Civil liberties groups, however, continue to accuse the company of doing the opposite -- by helping the government surveil. The company's relationship with U.S. Immigration and Customs Enforcement, in particular, which began under the Obama Administration, has invited intense scrutiny and criticism from both external critics and the company's own employees -- criticism that has only escalated over the last year as the Trump Administration has pushed ICE into an aggressive crackdown in cities like Minneapolis. Karp told Fortune he is "very sympathetic with arguments against using these products inside the U.S." and said that he is "totally in favor" of setting terms of engagement and limits to how domestic agencies can use artificial intelligence. "Quite frankly, I think we should self-impose them," Karp said of these terms of engagement. "The Valley should have a consortium: This is what we're going to do, and this is what we're not going to do," he said. But Karp drew a sharp distinction between whether tech companies should set terms with domestic agencies and whether they should set them with the Department of Defense, which is primarily focused on managing the United States' relationships with other countries and its adversaries. "What we're talking about now is using products vis-a-vis someone who's trying to kill our service members," Karp said, noting that he personally supports "wide license" of usage for the Department of Defense specifically. "If we knew China and Russia and Iran wouldn't build them, I would be in favor of very heavy -- very heavy -- legal constraints," Karp said. But he points out that American adversaries will build them and use them against the U.S. anyway. "I don't think this is an opinion. 
I think this is a fact, and that fact means I think the Department of War should have wide license to use these products."
[19]
Anthropic's Claude would 'pollute' the Pentagon's supply chain: official
A high-ranking Pentagon official said Thursday that Anthropic's artificial intelligence models would "pollute" the military supply chain. Speaking on CNBC, Defense Department Chief Technology Officer Emil Michael said the startup's Claude models have "a different policy preference" embedded within their design. The agency recently said it was designating Anthropic as a supply chain risk, a move Michael said was intended to prevent soldiers from receiving ineffective weapons or protection. The designation marks the first time an American company has received such a label, which is typically used for foreign adversaries. Defense contractors must now certify they do not use Claude for Pentagon-related work. Despite the blacklist, some major contractors continue to use the technology. Palantir Technologies CEO Alex Karp said Thursday on CNBC that his company is still using Claude. Karp noted that while the government intends to phase out the startup's technology, Palantir's products remain integrated with the models for the time being. The Pentagon is currently using Claude to support military operations in Iran. Michael said the agency cannot "just rip out" the technology immediately and has developed a transition plan. He compared the software to deeply embedded systems rather than simple desktop applications. Anthropic has sued the Trump administration, describing the government's actions as "unprecedented and unlawful." The company's legal filing claims the designation causes irreparable harm and threatens hundreds of millions of dollars in contracts. Anthropic was founded in 2021 and uses a "constitution" to shape the behavior and ethical guidelines of its AI. President Donald Trump previously criticized the company on social media, calling its staff "leftwing nut jobs." 
An internal memo from the Pentagon's Chief Information Officer indicated that use of the tools may continue past a planned six-month phase-out period if the technology is deemed critical to national security and no alternatives exist.
[20]
Microsoft says court should temporarily block Pentagon's blacklist of Anthropic
Microsoft threw its support behind Anthropic on Tuesday and advocated for a temporary restraining order that would block the Pentagon's supply chain designation "for all existing contracts." Such a move would "enable a more orderly transition and avoid disrupting the American military's ongoing use of advanced AI," Microsoft said in a filing. "Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by DoW. This could potentially hamper U.S. warfighters at a critical point in time."
[21]
Microsoft Sides With Anthropic Against Trump Admin's Supply Chain Risk Designation - Decrypt
Microsoft argued the DoD used a foreign-adversary security designation in an "unprecedented" way. Microsoft has up to $5 billion invested in Anthropic, while Anthropic has committed to buy $30 billion in Azure compute under the partnership. That context makes its decision to file an amicus curiae brief in support of Anthropic's lawsuit against the U.S. Department of Defense look less like altruism and more like financial self-defense. The brief, filed March 10 in San Francisco, argues that a temporary restraining order blocking enforcement of the Pentagon's "supply chain risk" designation would serve the public interest. Microsoft itself is a major DoD contractor, and that designation puts its own products at risk. Defense Secretary Pete Hegseth directed that no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic -- a sweep potentially broad enough to catch Microsoft's own Copilot and Azure products, which ship with support for Claude. The brief highlights a procedural contradiction that has received little attention in mainstream coverage: The Department of Defense gave itself a six-month phase-out period to transition away from Anthropic's tools, but applied the designation to contractors immediately with no equivalent runway. Microsoft's lawyers called this out directly, noting that tech suppliers must now scramble to audit, re-engineer, and reprocure products on a timeline the government didn't impose on itself. Microsoft also raised an alarm that cuts to the heart of the legal dispute. The supply chain risk authority invoked -- 10 U.S.C. § 3252 -- has historically been reserved for foreign adversaries. Only one such designation has ever been issued publicly under related statutes, and that was against Acronis AG, a Swiss software firm with Russian ties. Using it against a San Francisco AI startup is, as Microsoft put it, "unprecedented." The brief's most pointed argument is structural. 
If a contract dispute between one agency and one company can trigger a national-security blacklist, then every company doing business with the federal government just inherited a new category of existential risk. Microsoft's lawyers described an industry model built on interconnected services, where one banned component can freeze entire product lines. There's an irony here that's hard to ignore. Microsoft is simultaneously OpenAI's biggest backer -- with investments valued at approximately $135 billion -- and now one of Anthropic's loudest courtroom defenders. OpenAI, for its part, rushed to sign a deal with the DoD hours after the Anthropic blacklist dropped, a move that drew internal backlash and led to public acknowledgment from OpenAI CEO Sam Altman that the announcement "looked opportunistic and sloppy." Microsoft backed both horses. The brief stops short of endorsing Anthropic's specific AI safety positions on autonomous weapons and mass surveillance -- the two red lines that triggered the standoff. Instead, it frames the case in terms any government contractor can understand: due process, orderly transitions, and the effects of weaponizing procurement law over policy disagreements. Microsoft's request is a temporary restraining order, not a verdict. The tech giant wants the clock slowed down enough for the parties to negotiate -- and for its own products to stay legally deployable while they do. What's at stake goes beyond one company's contract. If courts allow the Pentagon's move to stand, then every AI company selling into the government just learned that safety guardrails can be reframed as national security threats. Microsoft's brief makes clear that lesson isn't lost on the broader tech industry -- and that the company isn't willing to learn it quietly.
[22]
Anthropic sues Pentagon, Trump administration over "supply chain risk" designation
Stefan Becket is a managing editor of politics for CBSNews.com. Stefan has covered national politics for more than a decade and helps oversee a team covering the White House, Congress, the Supreme Court, immigration and federal law enforcement. Washington -- Anthropic sued the Defense Department and other federal agencies on Monday over the Trump administration's move to designate it a supply chain risk and eliminate its use across the government, the latest chapter in a bitter dispute over the firm's powerful artificial intelligence model. In a 48-page lawsuit filed in the U.S. District Court for the Northern District of California, Anthropic said efforts by the Pentagon and President Trump to punish the company were "unprecedented and unlawful." "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation," the filing said. The company filed a separate, narrower suit asking the U.S. Court of Appeals for the District of Columbia to review the Pentagon's determination that it poses a risk to the supply chain. Federal law gives that court jurisdiction to review the finding. Anthropic filed a motion Wednesday seeking an emergency stay from that court of the supply-chain risk designation. The dispute stems from guardrails that Anthropic sought to impose on the military's use of Claude, the only AI model authorized for use on classified networks. The company sought assurances from the Pentagon that Claude would not be used for mass surveillance of U.S. citizens or to power lethal autonomous weapons. The Pentagon insisted that Claude be available for "all lawful use." The two sides failed to resolve the conflict before a deadline of Feb. 27. Mr. 
Trump announced that he was ordering all federal agencies "to IMMEDIATELY CEASE all use of Anthropic's technology." Defense Secretary Pete Hegseth said Anthropic would be designated a supply chain risk and cut off from defense contracts, phasing out the technology over the course of six months. The Pentagon has continued to use Claude during the U.S. and Israeli war with Iran, CBS News reported last week. Hegseth formally issued the supply chain risk designation last week. Anthropic's lawsuit asks the court to block Hegseth's order and declare it as "arbitrary, capricious, an abuse of discretion, and contrary to law." The company also asked the court to find that the president did not have the authority to order the rest of the government to cut ties with Anthropic. "Anthropic's contracts with the federal government are already being canceled. Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term," the company's lawsuit said. "On top of those immediate economic harms, Anthropic's reputation and core First Amendment freedoms are under attack. Absent judicial relief, those harms will only compound in the weeks and months ahead." The lawsuit accused the administration of "seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation." "The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance," the suit continued. "There is no valid justification for the Challenged Actions. The Court should declare them unlawful and enjoin Defendants from taking any steps to implement them." 
A Pentagon spokesperson declined to comment on pending litigation. In a statement, White House spokeswoman Liz Huston said the president "will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates." "Under the Trump Administration, our military will obey the United States Constitution -- not any woke AI company's terms of service," she said.
[23]
Who in tech is supporting Anthropic's AI fight with US government?
The Department of War labelled Anthropic a 'supply chain risk,' which means it can exclude the company from bidding for contracts and bar other companies from working with it. Microsoft, retired military leaders, and artificial intelligence (AI) think tanks are supporting Anthropic in its case to remove the Trump administration's designation of the company as a "supply chain risk." The US Department of War's (DOW) designation "forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a US company," Microsoft's legal brief said. Microsoft's legal filing said that the supply chain risk designation "may bring severe economic effects that are not in the public interest," and has asked the judge to order a temporary lifting of the designation. The filings follow Anthropic mounting a legal challenge against the United States' Department of War last week. Anthropic's label as a supply chain risk means the government is able to exclude the company from contract awards, remove its products from consideration and prevent direct prime contractors from using the supplier. Anthropic was, until recently, the only one of its peers approved for use in classified military networks. Its AI chatbot had been rolled out throughout the US government's classified information networks, deployed at national nuclear laboratories, and did intelligence analysis for the Department of War. Who else has come out to support Anthropic? Several filings supporting Anthropic have been submitted by American military officials, Big Tech companies, and AI organisations. US Defence Secretary Pete Hegseth's conduct against Anthropic "threatens the rule-of-law principles that have long strengthened our military," according to one filing, which is backed by Michael Hayden, the former director of the US Central Intelligence Agency (CIA).
They also allege that Hegseth's actions are a misuse of government authority for "retribution against a private company that has displeased the leadership." The filing from the ex-military officials warns that the "sudden uncertainty" of targeting a technology widely embedded in military platforms could disrupt planning and put soldiers at risk during ongoing operations, such as the war in Iran. Another filing, on behalf of 37 AI engineers previously working at OpenAI and Google's DeepMind, called the DOW's actions an "improper and arbitrary use of power that has serious ramifications for our industry." "If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the brief reads. "And it will chill open deliberation in our field about the risks and benefits of today's AI systems." A separate filing said that it is not hard to "imagine a world in which the government effectively controls what all Americans do and say," if the government is able to dictate Anthropic's policies. The joint filing, from groups such as the Electronic Frontier Foundation and the Cato Institute, said that the government's actions violate the country's First Amendment, the part of the Constitution that guarantees free speech rights. "The government's actions ... threaten the vitality and independence of our democracy," the filing reads. How did the conflict start? The issue between Anthropic and the US military began when the company refused to give the military unfettered access to its AI chatbot, Claude. The government gave Anthropic 48 hours to give it access, or face sanctions. Anthropic CEO Dario Amodei said the company gave the military two red lines: that its technology not be used for mass domestic surveillance, and that it not be embedded in fully autonomous weapons.
Amodei said in a statement on February 26 that he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company's AI systems. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," he wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do," he said. Microsoft's court filing supported Anthropic's red lines, saying that "American AI should not be used to conduct domestic mass surveillance or start a war without human control," noting their position is "consistent with the law". Claude will also be phased out of military operations over the next six months, according to a statement from Hegseth. Amodei said in a statement that the Department of War can choose who to work with on contracts that are more aligned with its vision, but "given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider".
[24]
Microsoft shows support for Anthropic's legal case against the US Department of Defense
Claude creator, Anthropic, has already had an eventful 2026 -- and now it's challenging the recently renamed US Department of War. The Pentagon has threatened to designate the company as a "supply chain risk," which Anthropic is currently fighting by suing the US government. Additionally on Monday, Anthropic requested a restraining order that would pause the Pentagon's national security blacklisting of the company, arguing the court should hear the company's suit first before carrying out the government's order. The latest update today is that an amicus brief filing suggests Microsoft may back Anthropic's cause (via Reuters). Microsoft argued it would be directly impacted by the DOD's 'supply chain risk' designation of Anthropic. This is because Microsoft integrates a number of Anthropic's AI products into tech it supplies to the US military. The company therefore made the case that the temporary restraining order is necessary in order to avoid massive disruption; if Anthropic was blacklisted, suppliers such as Microsoft would then have to make a costly pivot in order to quickly rebuild any of its offerings that were previously based on Anthropic's AI tech. Microsoft requested to file its amicus brief in a federal court in San Francisco, but a judge still needs to approve its official inclusion in the case. This follows another similarly supportive filing on Monday from a group of 37 researchers and engineers hailing from OpenAI and Google. So, why is it all kicking off between the AI suppliers and the US Gov? Allow me to briefly recap the saga so far: Last July, Anthropic announced it had reached a deal with the then-named US Department of Defense "with a $200 million ceiling" that would see Anthropic providing "prototype frontier AI capabilities that advance U.S. national security." 
Six months later, disagreements arose between Anthropic and the DOD regarding the implementation of safeguards in the company's LLMs -- specifically the guardrails that would prevent these models from being deployed in autonomous weapon targeting and domestic surveillance scenarios. Anthropic's AI safety lead quit shortly afterwards, issuing a bizarre resignation letter via X that included the phrase "The world is in peril." Shortly after that, Anthropic revised its stance on 'pausing' development of more powerful AI models if suitable safety safeguards weren't yet ready, effectively ditching its defining safety promise. Even so, Anthropic ultimately stood up to the DOD and refused to remove the aforementioned LLM safeguards. "We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values," Anthropic CEO Dario Amodei wrote, going on to later add: "Frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk." Beyond threatening to blacklist Anthropic, the Pentagon has since responded by giving itself six months to phase out its use of the company's products. OpenAI has stepped in to fill the AI void, though CEO Sam Altman has said the company will be amending the language of its deal with the DOD. Bottom line, OpenAI has recently drawn "three main red lines" in the sand that are suspiciously similar to Anthropic's original objections: "No use of OpenAI technology for mass domestic surveillance. No use of OpenAI technology to direct autonomous weapons systems. No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as 'social credit')."
As yet no toys have come flying out of the government's pram in response to OpenAI's stance, but it is worth noting that the company's lead of robotics, Caitlin Kalinowski, resigned recently -- after those 'red lines' were published. "This wasn't an easy call," she says. "AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." So, all rather messy, and this is unlikely to be the last we'll hear about the AI industry's legal wranglings with the US government.
[25]
Microsoft urges pause to Pentagon blacklisting of Anthropic
San Francisco (United States) (AFP) - Microsoft on Tuesday warned a judge that the Pentagon blacklisting of Anthropic could hamper US warfighters and imperil the country's drive to lead in artificial intelligence. In a brief, Microsoft backed Anthropic's request for an order stopping the Pentagon from implementing its ban on the use of Anthropic AI until the matter is settled in court. Anthropic filed suit this week against the Trump administration, alleging the US government retaliated against the company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans. In the complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked. Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei. The label not only blocks use of the company's technology by the Pentagon, but also requires all defense vendors and contractors to certify that they do not use Anthropic's models in their work with the department. AI overhaul Microsoft argued in an amicus brief that blacklisting Anthropic was an unprecedented response to a contract dispute that portended ill for the technology sector as well as the US military. "This is not the time to put at risk the very AI ecosystem that the administration has helped to champion," Microsoft said in the brief. A temporary restraining order would allow time to avoid disrupting the American military's ongoing use of advanced AI, Microsoft argued. "Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by Department of War." "This could potentially hamper US warfighters at a critical point in time." The row erupted days before the US military strike on Iran. 
Anthropic's Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on its classified systems. Anthropic had infuriated Pentagon chief Pete Hegseth by insisting the technology should not be used for mass surveillance or fully autonomous weapons systems. President Donald Trump subsequently ordered every federal agency to cease all use of Anthropic's technology. "AI should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war," Microsoft said in the filing. More than three dozen AI industry insiders from OpenAI and Google, including Google chief scientist Jeff Dean, argued in support of Anthropic in an amicus brief filed with the court on Monday. In its lawsuit, Anthropic said it was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible." "Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the lawsuit says.
[26]
Tech industry rallies behind Anthropic in Pentagon fight
Why it matters: The Pentagon didn't just stop doing business with Anthropic -- it labeled the company a supply chain risk, a move industry says could chill innovation and reshape how the government treats AI vendors. Driving the news: Major tech industry groups representing companies with Pentagon contracts filed an amicus brief calling for a pause on the designation. * "The government has ample, well-established tools to resolve procurement disputes and to contract with providers on whatever terms it prefers," the March 13 court filing states. * What they can't do, the filing says, is "misuse extraordinary national security authorities designed for foreign adversary sabotage" because of a disagreement with a contractor -- or ignore congressional protections and "upend the legal framework on which the entire technology contracting community depends." The filing was signed by the Computer & Communications Industry Association (CCIA), the Information Technology Industry Council (ITI), the Software & Information Industry Association (SIIA), and TechNet. * Google, OpenAI, Meta, Cloudflare, Adobe, Accenture, Nvidia, Microsoft, and Deloitte are among the many companies represented by the groups. * The industry groups say the move bypassed the typical procurement and supply-chain security processes that Congress created to deal with situations like the Pentagon-Anthropic dispute. Catch up quick: Anthropic is suing the Pentagon and other federal agencies, alleging the designation violates the company's First Amendment rights and exceeds congressional authority. * President Trump has also told the federal government to stop using Anthropic's Claude. Between the lines: This fight is a major test of how aggressively the government can regulate AI companies through procurement decisions. The other side: The Pentagon labeled Anthropic a supply chain risk after officials said the company was trying to impose "woke" usage policies on sensitive military operations. 
* The Pentagon is offering a deadline extension to off-board Anthropic, which undersecretary Emil Michael says could be needed for ongoing sensitive operations but, ultimately, the department is looking to rip and replace the company. The bottom line: Industry isn't buying it. * If the government can blacklist a company and claim it's a security risk, the entire procurement system "becomes contingent on political favor rather than the rule of law," the groups wrote in the brief. What's next: A hearing on whether to grant Anthropic temporary relief is set for March 24.
[27]
The Pentagon-Anthropic clash is a warning for every enterprise AI buyer
Every so often, a "technical" dispute reveals something much bigger. The recent blowup between the U.S. Department of Defense and Anthropic is one of those moments: not because it's about a $200 million contract, but because it makes visible a new kind of enterprise risk, one that most CEOs, CTOs, and CIOs are still treating as a procurement detail. In a recent piece, "The Pentagon wants to rewrite the rules of AI," I focused on the political meaning of a government attempting to force an AI company to relax its own guardrails. For enterprise leaders, the most important takeaway is more practical: If your AI capabilities depend on a single provider's terms, policies, and enforcement mechanisms, your strategy is now downstream of someone else's conflict. According to reporting, the Pentagon wanted the ability to use Anthropic's models "for all lawful purposes," while Anthropic insisted on explicit carve-outs, particularly around mass surveillance and fully autonomous weapons. When Anthropic wouldn't budge, the dispute escalated into threats of blacklisting and "supply chain risk" designation, with public pressure at the highest political levels. The Associated Press describes the demand for broader access and the potential consequences in detail, including the Pentagon's willingness to treat compliance as nonnegotiable for participation in its internal AI network, GenAI.mil.
[28]
Anthropic just sued the Pentagon. The outcome could reshape the AI race with China | Fortune
Anthropic has now filed two lawsuits against the Department of Defense. What happens next could matter far more than either side is letting on. Supposedly, Anthropic refused to give the Pentagon unrestricted access to Claude, its frontier AI model, the only one currently running on classified military networks. They wanted guarantees that there would be zero mass surveillance and no autonomous weapons without a human in the loop, making the final decisions of life or death. The Department of War's message was "remove those restrictions or lose everything." And President Trump ordered every federal agency to stop using Anthropic and designated the company a "supply-chain risk." But, there is far more to this story than lawsuits and bruised egos. Federal law already prohibits mass surveillance of US citizens. The DoW policy already restricts autonomous weapons. Anthropic is demanding contractual veto power over activities that are already illegal. A private company claiming authority over how the United States military operates is not acceptable. No one elected Dario Amodei and we don't let Lockheed dictate targeting doctrine. The notion that a software company should hold veto power over operational military decisions has no precedent. Claude outperforms ChatGPT on virtually every enterprise benchmark that matters from legal reasoning and financial modeling to cybersecurity and legacy systems modernization. But, a "supply chain risk" designation by the Department of War threatens to end Anthropic's commercial momentum before it can fully capitalize on its technological lead. Anthropic signed their $200 million contract with the Pentagon in July 2025. That's eight months ago. Now it's done and OpenAI is swooping in and filling that void. To say this happened fast is understating it. Additionally, Anthropic and OpenAI have both publicly accused Chinese labs of distilling their models. 
Those stolen, open-source versions, including DeepSeek, are now available to the PLA, to Iran, to every bad actor on the planet with zero guardrails. Do we want to exist in a world where American companies restrict their own military while adversaries train on pirated versions of that same technology with no restrictions whatsoever? The real existential threat is not the $200 million contract loss, but the ripple effect that will rush through AWS, Google, Palantir, Accenture, Deloitte, and the entire defense contractor ecosystem, reaching deep into Anthropic's commercial customer base in the US. The corporate world has shown that it will do whatever it takes to keep the current administration happy. Every company that does business with the federal government now potentially has to certify zero exposure to Anthropic products. AWS, Google Cloud, and Azure all serve the government, and Anthropic says the largest U.S. companies use Claude, many of which are defense contractors. If this comes to pass, Anthropic may not be viable in the United States for much longer. My point of view is that legally, the designation won't survive. There are the limitations of 10 U.S.C. § 3252, due process and First Amendment arguments, and the Luokung and Xiaomi precedents. Then, there is the inherent contradiction that the government says Anthropic is dangerous, but is allowing six months to phase it out. All of that combined, and there is a playbook for Anthropic to win these two suits. They have billions, which means they can afford the best legal team money can buy. They have the ammunition and the will to battle this administration as long as it takes. Winning in court is necessary but not sufficient. To stay viable, Anthropic needs to move on multiple fronts simultaneously. The core question isn't really about lawsuits or contract dollars. 
It's about who decides the boundaries of national defense -- elected officials accountable to voters, or tech executives accountable to their boards. Vinod Khosla put it plainly: he admires Anthropic for standing on principle, but disagrees with the principle itself.
[29]
Exclusive: Anduril's Luckey says Pentagon could have been "more forceful" against Anthropic
Why it matters: The Pentagon's designation of Anthropic as a supply chain risk is a move historically reserved for foreign adversaries. It means that other companies that do business with the Pentagon may have to cut ties with the AI giant. * This puts hundreds of companies that do business with Anthropic and the department in limbo. Microsoft, for one, asked for a pause to avoid potentially hampering soldiers. What he's saying: The Pentagon would face a slippery slope if it allowed every company it does business with to have its own special provisions, Luckey said. * "What you end up with very quickly is a patchwork of different regulations across all these different companies. And the department now has to ask: Is this operation, is this planning element, is this tool, compliant with these five different companies' versions" of their limitations? Catch up quick: The relationship between the Pentagon and Anthropic broke down during contract negotiations over the latter's efforts to ensure its models are not used for domestic mass surveillance or fully autonomous weapons. * Anthropic's rival OpenAI was able to strike a deal with the Pentagon, and says it has the same safeguards accounted for in its contract. Zoom out: The dispute, which has upended the tech industry, is a matter of both policies and personalities, Luckey said. * "Somebody probably could have gotten something equivalent to those policies by talking about it in exactly the right way ... and making sure not to rub the wrong people the wrong way," Luckey said. * "Personalities always enter into it. I'd be naive to say otherwise," he said. The personalities involved probably didn't approach negotiations in the most "collaborative way," he added. * But ultimately, the Pentagon wins because it made sure the chain of command, not a company, is responsible for use of force, he said. 
The intrigue: The Pentagon is continuing to use Anthropic, without violating the company's terms of use, and is offering senior leaders an extension beyond the 6-month phaseout period as needed.
[30]
Anthropic requests emergency stay of supply chain risk designation in DC appeals case
Anthropic has filed a request for an emergency stay of the Pentagon's designation of its products as a supply chain risk, arguing Defense Secretary Pete Hegseth's decision will "inflict escalating and irreparable harm" on the company. Lawyers for Anthropic argued in a Wednesday filing that the action was not a "reasoned agency decision" but came from a social media post by Hegseth designating the company a supply chain risk, a designation typically reserved for foreign adversaries. Hegseth, according to Anthropic, did not cite any statutory authority or provide any legal or factual explanation for the directive, allegedly violating due process protections. The designation came after the Pentagon and Anthropic clashed over the use of artificial intelligence for mass surveillance and autonomous weapons. Negotiations eventually fell apart earlier this month, prompting Hegseth to issue the designation and President Trump to order all federal agencies to stop using Anthropic's products. "This authority has been used -- at most -- only once before, and never against an American company," Anthropic's lawyers said. Anthropic also filed suit in a California federal court earlier this week, along with requesting a review from the D.C. federal appeals court. The AI firm, which makes the popular Claude chatbot, also alleged the federal government retaliated against it, citing Hegseth's criticism of the company's "sanctimonious rhetoric" and perceived "Silicon Valley ideology." Anthropic's lawyers said the comments and alleged retaliation present "exceptional circumstances" favoring a traditional stay ahead of judicial review. The Pentagon's actions "reflect open retaliation for Anthropic's protected speech and petitioning activity concerning the safe and responsible use of AI tools -- matters of urgent public concern," the filing stated.
"The harms flowing from these unlawful executive actions are immediate, irreparable, and significant to Anthropic, and the government's unlawful acts decidedly disserve the public interest," it added. Earlier this week, lawyers in Anthropic's California case warned the Pentagon's actions could cost the AI firm billions of dollars in revenue. Michael Mongan, an attorney for Anthropic, said more than 100 enterprise customers contacted Anthropic in recent days with concerns about working with the AI firm amid the public clash. Anthropic's chief financial officer estimated that harm to its 2026 revenue could range from hundreds of millions of dollars to billions of dollars. Anthropic's case has support from a range of players across the technology sphere, including technology giant Microsoft, dozens of workers from OpenAI and Google, and think tanks including the Foundation for American Innovation.
[31]
Microsoft Just Backed Anthropic's Fight Against the Pentagon -- and Issued a Stark Warning to the Department of War
In an amicus brief shared with The Hill on Tuesday, Microsoft urged a federal judge to temporarily block the designation while the courts review the dispute. The company warned that forcing contractors to immediately cut ties with Anthropic could disrupt existing AI systems used by the U.S. military and ripple across the broader technology sector. "Immediate implementation of the designation could have broad negative ramifications for the entire technology sector and American business community," Microsoft argued in the filing, according to The Hill. The company also warned that U.S. warfighters could be "hampered" if defense contractors are forced to quickly alter AI products and contracts built around Anthropic's models. The legal clash began Monday when Anthropic sued the Trump administration after the Pentagon classified the company as a supply chain risk, a designation typically used for foreign adversaries and companies considered national security threats. The label effectively prevents defense contractors from using the company's technology.
[32]
Microsoft Backs Anthropic, Urging a Judge to Halt Pentagon's Actions Against AI Company
SAN FRANCISCO (AP) -- Microsoft is throwing its weight behind Anthropic in asking a federal court to block the Trump administration's designation of the artificial intelligence company as a supply chain risk. Microsoft, in a legal filing, is challenging Defense Secretary Pete Hegseth's action last week to shut Anthropic out of military work by labeling its AI products as a national security threat. The Pentagon took the action against Anthropic after an unusually public dispute over the company's refusal to allow unrestricted military use of its AI model Claude. President Donald Trump also said he was ordering all federal agencies to stop using Claude. "The use of a supply chain risk designation to address a contract dispute may bring severe economic effects that are not in the public interest," Microsoft, a major government contractor, said in its Tuesday filing in the San Francisco federal court, where Anthropic sued the Trump administration on Monday. The Pentagon's action "forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a U.S. company," Microsoft's legal brief says. It asks a judge to order a temporary lifting of the designation to allow for more "reasoned discussion." The Pentagon declined to comment, saying it does not comment on matters in litigation. Microsoft also sided with Anthropic's two ethical red lines that were a sticking point in the contract negotiations. "Microsoft also believes that American AI should not be used to conduct domestic mass surveillance or start a war without human control," Microsoft said. "This position is consistent with the law and broadly supported by American society, as the government acknowledges."
[33]
Anthropic's Pentagon showdown is drawing Silicon Valley into a larger fight
The dispute between Anthropic and the Department of Defense is quickly becoming a broader test of how far the government can go in policing AI companies' policies -- and how much support those companies can rally from the wider research community. A sizable group of top AI researchers had already signed a public letter backing Anthropic. Now 37 of them have taken a more formal step, signing an amicus brief filed with the court Monday. The filing underscores how the clash is evolving from a narrow contract dispute into something bigger: a test of whether the government can effectively blacklist an American AI company for setting limits on how its technology is used. The outcome could shape how much independence AI companies have to impose safety guardrails, especially when those limits collide with national security priorities. The group behind the amicus brief includes Google chief scientist Jeff Dean, along with 19 researchers from OpenAI and 10 from Google DeepMind. The researchers filed the brief in their personal capacities, not as representatives of their respective companies.
[34]
Trump administration defends Anthropic blacklisting in US court
Defense Secretary Pete Hegseth designated Anthropic, the maker of popular AI assistant Claude, a national security supply chain risk on March 3 after the company refused to remove guardrails against its technology being used for autonomous weapons or domestic surveillance. The Trump administration said in a Tuesday court filing that the Pentagon's blacklisting of Anthropic was justified and lawful, opposing the artificial intelligence lab's high-stakes lawsuit challenging the decision. The Trump administration's filing says Anthropic is unlikely to succeed on its claims that the U.S. action violated speech protections under the U.S. Constitution's First Amendment, asserting the dispute stems from contract negotiations and national security concerns, not retaliation. "It was only when Anthropic refused to release the restrictions on the use of its products - which refusal is conduct, not protected speech - that the President directed all federal agencies to terminate their business relationships with Anthropic," the administration's legal filing said. The filing, from the U.S. Justice Department, said "no one has purported to restrict Anthropic's expressive activity." Anthropic's lawsuit in California federal court asks a judge to block the Pentagon's decision while the case plays out. Some legal experts say the company appears to have a strong case that the government overreached. President Donald Trump backed Hegseth's move, which excludes Anthropic from a limited set of military contracts but could damage the company's reputation and cause billions of dollars in losses this year, according to its executives.
The designation came after months of negotiations between the Pentagon and Anthropic reached an impasse, prompting Trump and Hegseth to denounce the company and accuse it of endangering American lives with its usage restrictions. Anthropic has disputed those claims and said AI is not yet safe enough to be used in autonomous weapons. The company said it opposes domestic surveillance as a matter of principle. In its March 9 lawsuit, Anthropic said the "unprecedented and unlawful" designation violated its free speech and due process rights, while running afoul of a law requiring federal agencies to follow specific procedures when making decisions. The Pentagon separately designated Anthropic a supply chain risk under a different law that could expand the order to the entire government. Anthropic is challenging that move in a second lawsuit in a Washington, D.C. appeals court.
[35]
Defense Dept. Claims Anthropic Would 'Pollute' Pentagon Supply Chain
In an argument that effectively boils down to "you didn't break up with me, I broke up with you," Defense Department CTO Emil Michael told CNBC the Pentagon decided not to use Anthropic -- and ordered DOD suppliers to cease using it as well -- because its AI models would "pollute" the supply chain. "We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection," Michael told the outlet. "That's really where the supply chain risk designation came from." That's a massive messaging U-turn from just two weeks ago, when Defense Secretary Pete Hegseth demanded Anthropic grant the Pentagon access for unrestricted military use or risk losing its government contract. Hegseth reportedly wanted to use the AI for autonomous weapons and domestic surveillance. Anthropic stood firm, and the government designated it a supply chain risk in what looks like a retaliatory act. It's the first American company to earn the designation.
[36]
Microsoft files amicus brief supporting Anthropic in Pentagon fight
The technology giant Microsoft has filed an amicus brief in Anthropic's case against the Trump administration, urging the court to temporarily block the Pentagon's designation of the Claude maker as a supply chain risk. The filing is a significant development for Anthropic, which first filed a complaint against the administration on Monday over the Pentagon's determination and President Trump's recent order for federal agencies to cease use of Anthropic's AI products after negotiations over safety guardrails fell apart. A Microsoft spokesperson told The Hill on Tuesday that the company believes "everyone involved shares common goals and we need time and a process to find common ground." "The Department of War needs reliable access to the country's best technology," the spokesperson said. "And everyone wants to ensure AI is not used for mass domestic surveillance or to start a war without human control. The government, the entire tech sector, and the American public need a path to achieve all these goals together." The filing comes just hours before a federal judge in northern California will take up Anthropic's request for a temporary block of the designation. Anthropic also filed suit in the D.C. federal appeals court, requesting a review of the Pentagon's determination. Microsoft's amicus brief, shared with The Hill Tuesday, argues a pause in the designation will help avoid "disrupting" the U.S. military's ongoing uses of advanced AI. The company, which has partnered with Anthropic on numerous products, warned U.S. warfighters could be "hampered" if companies are forced to immediately alter their existing products and contract configurations with the Pentagon. Any immediate implementation, Microsoft said, could have "broad negative ramifications" for the "entire technology sector" and "American business community."
Anthropic is asking the court to ultimately reverse the Pentagon's designation, which has typically been reserved for foreign adversaries and restricts defense contractors from using the company's products. The AI firm alleges the federal government retaliated against it for its "protected viewpoint" on AI safety and the limitations of its own AI models. While workers from AI firms OpenAI and Google separately filed an amicus brief Monday in Anthropic's complaint, Tuesday's filing makes Microsoft the first standalone company to voice support for such a pause. Despite the Pentagon's position, Anthropic has argued the restrictions cannot prohibit companies that do business with the military from also doing business with the AI firm. Google, Amazon and Apple said last week that Anthropic's AI tools will still be available on their platforms for work that does not involve the Pentagon.
[37]
Palantir uses Anthropic's Claude despite Pentagon's 'supply chain risk' tag: CEO Alex Karp - The Economic Times
Palantir Technologies CEO Alex Karp has said the company still relies on AI models from Anthropic, despite the US Department of War (DoW) recently classifying the firm as a 'supply chain risk', according to a report by CNBC. "The Department of War is planning to phase out Anthropic; currently, it's not phased out... Our products are integrated with Anthropic, and in the future, will probably be integrated with other large language models," Karp told CNBC. Anthropic had partnered with Amazon Web Services and Palantir in 2024 to support the military with AI capabilities. The Pentagon last week formally categorised Anthropic as a 'supply chain risk' -- a label typically applied to companies connected to foreign adversaries. The designation means contractors and vendors working with the Pentagon must confirm they are not using Anthropic's Claude AI models in projects tied to the US military. However, the transition is not immediate. As reported by CNBC, Claude models are still being used in systems supporting US operations in Iran. Anthropic has challenged the move. The company filed a lawsuit against the administration of President Donald Trump, arguing that the designation is "unprecedented and unlawful". It is seeking a court order to pause the Pentagon's action, warning that hundreds of millions of dollars in government contracts could be affected. According to the Pentagon's chief technology officer Emil Michael, replacing Claude will take time because it is deeply integrated with military infrastructure. "You can't just rip out a system that's deeply embedded overnight," Michael said. Trump has said that federal agencies will be given six months to remove Anthropic's products from government systems. However, an internal Pentagon memo suggests exceptions could be granted where the technology is essential to operations and no suitable alternatives are available.
Meanwhile, on Friday, Michael explained the government's concerns during an appearance on CNBC's Squawk Box. He said the issue relates to the "different policy preferences" embedded in the model during training, which could conflict with US military needs.
[38]
Microsoft Backs Anthropic Against Pentagon Ban | PYMNTS.com
The tech giant's comments were delivered in its motion for a proposed amicus brief in Anthropic's lawsuit against the Trump administration that was filed Monday (March 9), according to the report. Microsoft said in its filing that a restraining order halting the designation would enable a more orderly transition, avoid disrupting the military's use of AI and prevent tech firms from having to immediately alter their product and contract configuration, per the report. A Microsoft spokesperson told Reuters: "We believe everyone involved shares common goals, and we need time and a process to find common ground." It was reported in January that Microsoft has become one of Anthropic's top customers and that the tech giant is on track to spend around $500 million per year to use Anthropic's AI in Microsoft products. In addition, Microsoft has increased its focus on selling its cloud customers Anthropic AI models, which could drive more revenue for both firms. In November, Microsoft said it formed a collaboration with Anthropic in which Anthropic will scale its Claude AI model on Microsoft Azure. Anthropic agreed to purchase $30 billion of Azure compute capacity, while Microsoft pledged to invest up to $5 billion in Anthropic. The White House told federal agencies Feb. 27 to stop using Anthropic's AI products after the company refused a Pentagon demand that it agree that the military can use its models in "all lawful use cases." Anthropic wanted contract language that would prohibit the use of its models for autonomous weapons and mass domestic surveillance, while Pentagon officials took the position that the military should retain final authority on use cases, as long as they are lawful. Defense Secretary Pete Hegseth said of Anthropic in a Feb.
27 post on X: "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable." Anthropic sued the U.S. government Monday (March 9) to block it from placing a supply chain risk designation on the AI company. The label also means that anyone wishing to do business with the military would need to end their relationship with Anthropic.
[39]
Pentagon CTO Emil Michael reveals why Anthropic was labelled a 'supply-chain risk' - The Economic Times
The US Defense Department's chief technology officer, Emil Michael, has explained for the first time why the government considers AI models from Anthropic -- particularly its Claude system -- to be a national security concern. Speaking on Squawk Box on CNBC, Michael said the issue lies in the 'different policy preferences' built into the model during training. According to him, these could conflict with the requirements of the US military. "We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul... pollute the supply chain so our fighters are getting ineffective weapons, ineffective body armour, ineffective protection," Michael said.

Not meant as punishment

Michael stressed that the move was not designed to punish Anthropic. He pointed out that most of the company's revenue comes from its commercial operations rather than US government contracts. He also rejected reports suggesting that the Pentagon had warned companies against using Anthropic's technology in general. According to him, such claims are simply "rumours" and the restrictions apply specifically to defence supply chains. At the same time, Michael ruled out the possibility of renewed discussions with the company. "There's no chance," he said. "The (Anthropic) leadership has proven, through the leaking and through sort of bad faith negotiations, that they don't want to reach an agreement."

'Supply-chain risk' designation

Recently, Anthropic was formally classified as a 'supply-chain risk' -- a status usually applied only to organisations linked to foreign adversaries. As a result, defence contractors and suppliers must confirm that they are not using Claude in any work connected to the Pentagon. Anthropic responded by filing a lawsuit against the administration of President Donald Trump. The company described the designation as "unprecedented and unlawful" and warned that it could put hundreds of millions of dollars in contracts at risk.

Dispute over AI use in military systems

The dispute stems from disagreements over how AI should be used in defence programmes. The Pentagon had asked Anthropic to remove strict limits preventing its technology from being used in fully autonomous weapons and in domestic surveillance of American citizens. Anthropic refused, arguing that current AI technology is not dependable enough to control autonomous weapons safely. The company also said that using such systems for domestic surveillance would violate fundamental rights. After the negotiations collapsed, US Defense Secretary Pete Hegseth officially labelled Anthropic a national security 'supply-chain risk'. Trump later instructed federal agencies to stop working with the company, with a six-month transition period set for existing agreements.

Pentagon stands firm

The Defense Department has defended its position, saying decisions about how AI can be used in warfare should be determined by US law rather than the policies of private companies. Officials argue the military must retain complete freedom to apply AI for "any lawful use." They also warned that restrictions imposed by Anthropic could limit critical capabilities and potentially put American lives at risk.
[40]
Exclusive | Pentagon AI chief Emil Michael says it's 'totally bananas' that...
Pentagon AI chief Emil Michael isn't sweating Microsoft's legal support for Anthropic. In fact, he told me the whole situation is "totally bananas." Michael dismissed Microsoft's amicus brief, which the company filed earlier this week in support of Anthropic and against the Department of War, as little more than a move to appease employees. "It's sort of par for the course to prove to your employees that you have certain camaraderie [with other left-wing tech companies]", said Michael, who serves as Under Secretary of Defense for Research & Engineering and oversees AI contracts, on a phone call earlier this week. "If you read their actual briefs, it says everything the Department of War wants to do is legal." While the amicus brief claims Microsoft and Anthropic want to protect warfighters, Michael isn't buying it. It's the government, he told me, that's working to keep service members safe. "[Our move designating Anthropic a supply chain risk] is to protect warfighters -- because Anthropic was trying to insert themselves into the chain of command," Michael explained. "Supply chain risk is to remove that risk, not add another risk." Michael said this is the latest wrinkle in a saga made more bizarre by the fact that Anthropic worked closely with the Department of War for years before the government terminated its contract in February. Following the January 2026 US raid that captured and extradited Nicolás Maduro from Venezuela, an Anthropic executive asked a counterpart at Palantir -- via which Anthropic's Claude AI was integrated -- whether and how its artificial intelligence was used. Anthropic had supplied a specialized version of Claude to some of the military's most sensitive commands -- including Central Command and Indo-Pacific Command -- under a deal worth roughly $200 million. Palantir told Pentagon officials about Anthropic's inquiry, and the Feds interpreted it as potential post-facto disapproval. 
The move raised alarms over dependency risks on an AI provider that might second-guess classified operations, triggering a contract review and failed talks. Anthropic insisted on safeguards barring mass domestic surveillance of US citizens -- which the US denies doing -- and fully autonomous lethal weapons without human oversight. In response, the Pentagon labeled Anthropic a "supply chain risk" on March 4 and replaced Anthropic's technology with OpenAI's. Anthropic sued on March 9, challenging the designation as unlawful and retaliatory. The Pentagon's concern is that its contractors could try to dictate its operations. "I don't want and can't have a model that has a policy bias that could be poisoned because it doesn't want to do certain military activities -- or an insider threat who disagrees with the democratically elected leadership of this country," Michael said. It's a stance Michael finds hard to square with Anthropic's own claims about its technology. "They're saying this technology is so powerful -- more powerful than a nuclear weapon, more powerful than some governments -- [and] they don't want to use that power to help our country or our war department," Michael told me in disbelief. "This would never happen in China ... And yet they don't want to serve the most important part of the government in time of conflict. That seems very off." He said what makes it stranger is that this comes after years of working with the Pentagon. "What's very surprising is for a company that didn't want to do Department of War stuff, why are they selling and deploying their software to the Department of War for three years?" he said. The episode was a reminder that despite companies like Palantir or Anduril enthusiastically supporting America and taking contracts supporting Customs and Border Protection and ICE, a slice of Silicon Valley still resists supporting the military.
"When I took this job, I thought those 2018 days where Google's employees revolted were behind us," he said, referencing the employee uprising that forced Google to abandon Project Maven, a Pentagon contract to use AI for drone imagery analysis. The incident has become a defining symbol of Silicon Valley's resistance to military work. But these days, he believes Anthropic is the exception, not the norm. "I don't think Elon's woke with AI and Grok," he said. And even a company like Google? They "learned the lesson of 2018." As the Pentagon irons out massive contracts with longtime partners, it's also investing in new providers to make sure the US is always ahead. In the coming weeks, Michael said, the department will announce a wave of new partnerships with smaller, non-traditional defense companies -- startups unburdened by the ideological baggage that derailed Anthropic. For Michael, it's the right ending to a strange saga: the future of American military AI, built by companies that actually want the work. He said, "[They're] building stuff for the next generation of warfare."
[41]
How the Anthropic-Pentagon dispute over AI safeguards escalated
A dispute erupted after Anthropic refused to loosen AI safety safeguards for the Pentagon. The U.S. Defense Department labelled the Claude maker a "supply-chain risk", threatening government contracts. Anthropic plans a court challenge, warning the move could cut billions from 2026 revenue, while industry groups and companies urge de-escalation.

A standoff erupted between the U.S. Department of Defense and Anthropic in January after the AI lab refused to loosen safety guardrails on its systems, prompting the Pentagon to label it a "supply-chain risk" and putting the company's government contracts in jeopardy. The Claude maker's executives warned that the designation, which they view as retaliation for opposing the use of their technology in autonomous weapons and domestic surveillance, could slash their 2026 revenue by billions of dollars. Legal experts suggest the government's case may be undermined by a mismatch between the law invoked and Anthropic's conduct, internal contradictions in the Pentagon's behavior and evidence that its decision may have been driven by animus rather than security.

Here is a timeline of the ongoing conflict:

January 29: The Pentagon and Anthropic clash over eliminating safeguards that could allow the government to use its technology to target weapons autonomously and conduct US domestic surveillance.
February 11: The Pentagon pushes AI companies, including Anthropic, to make their AI tools available in classified settings without many of the standard restrictions that the companies apply to other users.
February 14: The Pentagon considers ending its ties with Anthropic over the AI lab's insistence on keeping some limits on how the US military uses its models.
February 23: US Defense Secretary Pete Hegseth summons Anthropic CEO Dario Amodei to the Pentagon for talks on the military use of Claude.
February 24: The Pentagon asks Anthropic to get on board, or risk consequences, including being labeled a supply-chain risk.
February 25: The Pentagon asks defence contractors, including Boeing and Lockheed Martin, to assess their reliance on Anthropic.
February 26: Pentagon spokesperson Sean Parnell asks Anthropic to allow the Pentagon to use its technology for all lawful purposes, giving the company until 5:01 p.m. ET on February 27 to decide.
February 26: Anthropic says it will not accede to the Pentagon's request to eliminate safeguards from its AI systems.
February 27: US President Donald Trump directs every federal agency to immediately cease all use of Anthropic's technology.
February 27: Hegseth directs the US DoD to designate Anthropic a "supply-chain risk to national security."
February 27: Anthropic says it will challenge the Pentagon's decision in court.
February 27: OpenAI announces a deal to deploy its technology in the DoD's classified network.
February 28: OpenAI says its latest agreement with the Pentagon includes three red lines: its technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems or for any high-stakes automated decisions.
March 2: The US Departments of State, Treasury and Health and Human Services move to cease using Anthropic's Claude.
March 3: Lockheed Martin pledges to follow the DoD's direction, signalling a likely exodus of defence contractors removing Anthropic's tools from their supply chains to protect their federal contracts, legal experts say.
March 4: US Treasury Secretary Scott Bessent tells CNBC that the agency will remove Anthropic from its government systems within days.
March 4: A big tech industry group pushes to de-escalate the clash, saying a supply-chain risk designation creates uncertainty for companies and could threaten the military's access to the best products and services.
March 5: The US DoD formally designates Anthropic as a supply-chain risk.
March 6: Amazon says it is helping customers transition DoD workloads to alternative models on its cloud, while customers and partners could continue using Claude for all non-Pentagon workloads.
March 6: The US General Services Administration draws up strict rules for civilian artificial-intelligence contracts and terminates Anthropic's OneGov deal, which made Claude available to the federal government.
March 9: Anthropic sues to block the Pentagon from placing it on a national security blacklist, saying the designation is unlawful and violates its free speech and due process rights.
March 9: Anthropic executives say the US government's blacklisting of the AI firm could cut its 2026 revenue by multiple billions of dollars and cause reputational harm.
March 10: Microsoft files a brief backing Anthropic's lawsuit, saying the DoD designation directly affects it and that a temporary restraining order is needed to avoid costly supplier disruptions and rushed rebuilding of products that depend on Anthropic.
[42]
Anthropic would 'pollute' US military supply chain, Pentagon official...
The Pentagon cut ties with Anthropic because use of its artificial intelligence models would "pollute" the US military's supply chain, a top War Department official claimed Thursday. Emil Michael, the War Department's chief technology officer, said Anthropic's Claude AI chatbot was trained using a fundamentally different ideology from what the Pentagon wants for its systems. "We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection," Michael said in an interview with CNBC. "That's really where the supply chain risk designation came from," he added. Anthropic is currently suing the Pentagon after it became the first US company to be formally labeled a "supply chain risk" - a tag typically reserved for foreign entities that effectively requires defense contractors to stop using its technology. Michael said the designation was "not meant to be punitive" and denied allegations from Anthropic that the Trump administration has been telling companies outside the defense sector not to work with them. Trump officials had long been concerned that Anthropic had wacky ideological leanings - including its ties to the cult-like "Effective Altruism" movement and Democratic megadonors like LinkedIn co-founder Reid Hoffman. Earlier this month, The Post exclusively reported on oddball blog posts penned by Amanda Askell, Anthropic's in-house "philosopher" who helped craft the "soul document" that governs Claude. The Trump administration's rocky relationship with Anthropic reached a peak last month after the company and its CEO Dario Amodei refused to remove safeguards blocking its AI models from being used to power autonomous weapons or mass surveillance of Americans. 
President Trump blasted Anthropic's leaders as "leftwing nut jobs" while ordering all federal agencies to stop working with the firm. He allowed a six-month transition period. At the time, Anthropic's Claude was the only model approved for work on the Pentagon's classified systems. OpenAI has since struck a deal to take over the bulk of that work. Amodei, himself a Democratic donor, responded by blasting Trump in an internal memo, claiming the Pentagon had targeted Anthropic because it hadn't given the president "dictator-style praise." The exec later apologized. Anthropic's new suit against the Trump administration claims that the supply chain risk designation and other actions by the US government are "unprecedented and unlawful." Meanwhile, Palantir, another defense contractor, is still using Claude for the time being - including for operations linked to the conflict with Iran, its CEO Alex Karp told CNBC.
[43]
Microsoft files amicus brief in support of Anthropic's lawsuit with US DOD - The Economic Times
Microsoft filed a proposed brief supporting Anthropic's lawsuit seeking to temporarily block the US Department of Defense designation of the AI startup as a supply-chain risk. The company said the decision directly affects it and could cause costly disruptions for suppliers relying on Anthropic's technology. Microsoft filed on Tuesday a proposed brief in support of Anthropic's lawsuit asking the court to temporarily block the U.S. Department of Defense's designation of the AI startup as a supply-chain risk. Microsoft added that it was directly impacted by the DOD's designation. The Claude maker had filed a lawsuit on Monday to block the Pentagon from placing it on a national security blacklist, escalating a high-stakes battle with the US military over usage restrictions on its technology. Microsoft's filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic's products.
[44]
Pentagon Deems Anthropic's Claude AI a Supply Chain Risk
The US Department of Defense now requires its suppliers to certify that they do not use the start-up's models. The US Department of Defense's Chief Technology Officer, Emil Michael, stated that the Claude artificial intelligence models developed by Anthropic represent a potential risk to the Pentagon's supply chain. He said that certain political preferences are embedded in the very design of the system, which could influence the outputs produced by these tools. He asserted that the military cannot integrate technologies into its processes that could affect the quality or reliability of equipment intended for soldiers. Anthropic thus becomes the first American company to be publicly designated as a supply chain risk by the Pentagon, a classification usually reserved for entities linked to countries considered adversaries. This decision now forces contractors and suppliers working with the Department of Defense to certify that they do not use Claude models in the context of their Pentagon-related activities. The start-up has challenged this measure by filing a lawsuit against the Trump administration, calling it an "unprecedented and illegal" decision. In its complaint, Anthropic claims that this classification could threaten contracts worth several hundred million dollars. However, the Pentagon maintains that the measure is not punitive and emphasizes that Anthropic's government activities represent a limited portion of its revenue, which primarily comes from the private sector.
[45]
Microsoft backs Anthropic in amicus brief to halt US DOD's 'supply-chain risk' designation
March 10 (Reuters) - Microsoft filed on Tuesday a brief in support of Anthropic's lawsuit asking the court to temporarily block the U.S. Department of Defense's designation of the AI startup as a supply-chain risk. In an amicus brief filing in a federal court in San Francisco, Microsoft backed Anthropic's request for a temporary restraining order against the Pentagon order, arguing that its determination should be paused while the court considers the case. Microsoft, which integrates the AI lab's products and services into technology it provides to the U.S. military, said that it was directly impacted by the DOD designation. The Claude maker had filed a lawsuit to block the Pentagon from placing it on a national security blacklist on Monday, escalating a high-stakes battle with the U.S. military over usage restrictions on its technology. Microsoft's filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic's products. While the Pentagon gave itself six months to phase out Anthropic, it did not provide the same transition period for contractors that use Anthropic's products or services to perform under DOD, Microsoft said. "Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning," the company said. Microsoft added that a temporary restraining order would allow time to negotiate a solution while protecting military access to advanced technology and ensuring AI is not used for domestic mass surveillance or to start a war without human control. On Monday, a group of 37 researchers and engineers from OpenAI and Google had also filed an amicus brief in support of Anthropic. (Reporting by Vallari Srivastava and Anhata Rooprai in Bengaluru; Editing by Alan Barona)
The Pentagon is actively building alternatives to replace Anthropic's AI technology after their $200 million contract collapsed over usage restrictions. Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, typically reserved for foreign adversaries, while the Trump administration defends the decision in court as a matter of national security.

The Pentagon is actively developing alternatives to Anthropic's technology following the dramatic collapse of their relationship, signaling that reconciliation appears unlikely. Cameron Stanley, the chief digital and AI officer at the Pentagon, confirmed to Bloomberg that engineering work has already begun on multiple large language models for government-owned environments, with operational deployment expected soon[3]. The Department of Defense is working to deploy AI systems from Google, OpenAI, and xAI as replacements for Claude AI[2]. This shift comes after Anthropic's $200 million contract with the Department of Defense broke down over disagreements about usage restrictions, specifically regarding autonomous weapons and surveillance applications[1].

Defense Secretary Pete Hegseth directed that Anthropic be designated a supply chain risk, a label the DoD formally applied on March 5 that is typically reserved for foreign adversaries and effectively bars companies working with the Pentagon from partnering with Anthropic[1]. President Donald Trump backed this decision, directing all federal agencies to terminate their business relationships with the AI company[4]. Anthropic has filed lawsuits in both California federal court and a Washington, D.C. appeals court, challenging what it calls an "unprecedented and unlawful" designation that violates its First Amendment rights and due process protections[4]. The company argues the government ban could cost it billions of dollars in lost revenue this year, and it is seeking a preliminary injunction to block the designation while litigation proceeds[2].

In a court filing submitted Tuesday, the Department of Justice argued that Anthropic's concerns about losing business are "legally insufficient to constitute irreparable injury" and defended the government's actions as necessary for national security[2]. The Trump administration asserted that the dispute stems from contract negotiations rather than retaliation against protected speech, stating that "no one has purported to restrict Anthropic's expressive activity"[4]. Pentagon officials expressed concerns that Anthropic staff "might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system"[2]. The government's court filing emphasized that AI systems are "acutely vulnerable to manipulation" and that Anthropic could attempt to disable its technology during ongoing warfighting operations if the company felt its ethical red lines were being crossed[5].

The conflict originated when Hegseth incorporated a provision into AI service contracts allowing the Pentagon to use the technologies for any lawful purpose, without restrictions. Anthropic refused to accept these terms, insisting on contractual clauses prohibiting the use of its AI for mass surveillance of Americans and for fully autonomous weapons deployment[1]. The company maintains that AI is not yet safe enough for autonomous weapons applications and opposes domestic surveillance as a matter of principle[4]. During negotiations in early 2026, Anthropic's conduct led the department to question whether it was a trusted partner for highly sensitive military contracts[3]. The Pentagon became concerned that allowing Anthropic continued access to warfighting systems would introduce unacceptable risk into its supply chains[5].

Microsoft, Google, OpenAI, AI researchers, a federal employee labor union, and former military leaders have filed court briefs supporting Anthropic's position, with no briefs filed in support of the government[2]. Despite this backing, Hegseth announced the military would continue using Anthropic's technology for no more than six months to allow a seamless transition to what he called "a better and more patriotic service"[3]. One of the military's top uses of Claude AI has been through Palantir's data analysis software[2]. A hearing is scheduled for next Tuesday in San Francisco federal court to decide whether to grant Anthropic's request for an injunction, with the company required to file a response to the government's arguments by Friday[2].