10 Sources
[1]
US judge sides with Anthropic, calls 'supply chain risk' branding over Pentagon disagreement 'Orwellian' -- Trump slapped AI company with designation after it refused to lower its guardrails for the military
U.S. courts say that the federal government cannot treat companies that disagree with it this way. A U.S. court has sided with Anthropic and is temporarily blocking the Pentagon from calling the company a supply chain risk. The ruling comes after the AI tech company sued the Department of War for designating it as such, after the military's demand to bypass the firm's AI safety policies was refused.

According to the Associated Press, the U.S. government argues that it should be able to use the AI tool in any way it deems lawful, but U.S. District Judge Rita Lin said that her ruling was not about how the Pentagon wanted to use Claude, but about its response when Anthropic refused to give in to the department's demands. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Judge Lin wrote in her decision. She also added, "If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic."

This issue stemmed from Anthropic CEO Dario Amodei's refusal to allow the use of Claude for mass domestic surveillance and fully autonomous weapons. Amodei said that the government can purchase information on the average American from data brokers and then use AI to turn all these data points into one cohesive profile without requiring a warrant, and that he wouldn't allow his company's tool to be party to that operation. Aside from that, he said that AI isn't ready to be deployed in fully autonomous weapons because it cannot make judgments like humans. "We will not knowingly provide a product that puts America's warfighters and civilians at risk," the Anthropic chief said. The company's decision to go against the Pentagon's demand brought the ire of U.S. President Donald Trump.
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS," Trump posted on his platform. He also wrote, "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all of Anthropic's technology." Still, the company is fighting back, filing cases against the administration and arguing that the "supply chain risk" designation violated its First Amendment rights and also did not give Anthropic due process. Judge Rita Lin's decision is a win for the company, but it isn't over for Anthropic, as this is just a temporary block. Aside from that, the company also has another case filed against the government waiting to be heard in the U.S. Court of Appeals for the D.C. Circuit. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in its court filing. But in the meantime, OpenAI has struck a deal with the Pentagon to deploy its AI models on the military's classified network.
[2]
Court temporarily blocks US government from labeling Anthropic as a 'supply chain risk'
The court has granted Anthropic's request for a preliminary injunction, preventing the government from banning its products for federal use and from formally labeling it as a "supply chain risk," at least for now. If you'll recall, things turned sour between the company and the Trump administration when Anthropic refused to change the terms of its contract that would allow the government to use its technology for mass surveillance and the development of autonomous weapons. In response to Anthropic's refusal, the president ordered federal agencies to stop using Claude and the company's other services. The Defense Department also officially labeled it as a supply chain risk, a designation typically reserved for entities based in US adversary nations like China that threaten national security. In addition, department secretary Pete Hegseth warned companies that if they want to work with the government, they must sever ties with Anthropic. The AI company challenged the designation in court, calling it unlawful and in violation of free speech and its rights to due process. It asked the court to put a pause on the ban while the lawsuit is ongoing, as well. In a court filing, the Defense Department said giving Anthropic continued access to its warfighting infrastructure would "introduce unacceptable risk" to its supply chains. But Judge Rita F. Lin of the District Court for the Northern District of California said the measures the government took "appear designed to punish Anthropic." Lin wrote in her decision that it seems Anthropic is being punished for criticizing the government in the press. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," she continued. The judge also said that the supply chain risk designation is contrary to law, arbitrary and capricious.
She added that the government argued that Anthropic showed its subversive tendencies by "questioning" the use of its technology. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," she wrote. Anthropic told The New York Times that it's "grateful to the court for moving swiftly" and that it's now focused on "working productively with the government to ensure all Americans benefit from safe, reliable AI." The company's lawsuit is still ongoing, and the court has yet to issue its final decision. Judge Lin said, however, that Anthropic "has shown a likelihood of success on its First Amendment claim."
[3]
Judge blocks Pentagon from labeling Anthropic a national security risk
What just happened? A federal judge in California has blocked the Department of Defense from designating Anthropic as a national security risk. The ruling provides the company with temporary relief as it continues its legal battle with the Pentagon over whether private technology firms can object to how their AI is used in military programs. In a sharply worded 43-page order, US District Judge Rita F. Lin said the government's actions appeared to be driven by retaliation rather than legitimate security concerns. "The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," Judge Lin wrote. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the United States for expressing disagreement with the government." The injunction halts enforcement of a February 27 order from Defense Secretary Pete Hegseth labeling Anthropic a "supply chain risk," a designation typically reserved for foreign entities viewed as threats to US national security. The label would have barred Anthropic from selling its systems to federal agencies and potentially from working with other defense contractors. The ruling also prevents agencies from carrying out a related directive issued after President Trump amplified the designation on social media. The dispute began during negotiations over a $200 million Pentagon contract focused on artificial intelligence capabilities for defense operations. Anthropic, which has long advocated for guardrails on AI deployment, sought restrictions on how its models could be used in surveillance or autonomous weapons systems. Defense officials resisted, arguing that no private company should dictate how the military applies the technology it acquires.
Shortly after those discussions broke down, the Pentagon labeled Anthropic a supply chain risk. The company responded by filing lawsuits in two federal courts, alleging that the government's actions were both punitive and unconstitutional. It argued that the designation violated its First Amendment rights and inflicted "irreparable harm" on its reputation and business. Judge Lin agreed that the company had demonstrated a strong likelihood of success on the merits, stating that "government officials cannot use the power of the state to punish or suppress disfavored expression." She added that the Defense Department's decision appeared "arbitrary and capricious" and that "the balance of equities and public interest favor Anthropic." The temporary injunction, which will take effect unless the Pentagon appeals within seven days, is being closely watched across the technology industry. Microsoft, along with employees of OpenAI and Google, submitted amicus briefs supporting Anthropic's position. Many in the sector view the case as a potential precedent for how the government may treat companies that challenge its use of AI systems in sensitive or military contexts. At a hearing earlier this week, Judge Lin signaled skepticism toward the Pentagon's assertion that Anthropic could manipulate its own technology "to suit its own interests" in wartime scenarios. "It looks like an attempt to cripple Anthropic," she said. In a statement following the ruling, Anthropic said it was "grateful to the court for moving swiftly" and reaffirmed that its "focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI." The Defense Department has not yet publicly commented on the decision.
[4]
Judge blocks Trump administration's 'Orwellian' branding of Anthropic
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," Judge Lin wrote. Anthropic has won an early round in its legal battle against the Trump administration, after a federal judge awarded the artificial intelligence company an injunction against the government's order that labelled it a "supply chain risk". US President Donald Trump and Defense Secretary Pete Hegseth publicly declared in February that they were cutting ties with Anthropic after it refused to allow unrestricted military use of its Claude AI model. The restrictions bar the use of lethal autonomous weapons without human oversight and mass surveillance of Americans. In response, the US government labelled Anthropic a "supply chain risk to national security" and ordered federal agencies to stop using Claude. Judge Rita F. Lin of the Northern District of California said at the outset of the hearing that "it looks like an attempt to cripple Anthropic" and "chill public debate" because the company was concerned over how the US Department of Defence was using its technology. "This appears to be classic First Amendment retaliation," she added. Lin said the "broad punitive measures" taken against the AI company by the Trump administration and Hegseth appeared arbitrary, capricious and could "cripple Anthropic," particularly Hegseth's use of a rare military authority that's previously been directed at foreign adversaries. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," Lin wrote. Anthropic filed two lawsuits against the government over its designation as a supply chain risk.
One is a case for reconsideration of the supply chain risk, and the other alleges the Trump administration violated the company's First Amendment right to speech. The order now means that Anthropic's technology will continue to be used in the government and by outside companies working with the Department of War until the lawsuit is resolved. Euronews Next has reached out to Anthropic for comment.
[5]
First blood to Anthropic as US judge slams Trump 2.0's 'Orwellian' attempt to cripple the firm as unconstitutional and illegal
The war goes on, but Anthropic has won a significant first battle as a US Judge grants a stay of execution against the Department of War's (DoW) "Orwellian" attempt to brand the US AI champion a national security risk, and ban public sector buyers from using it or anyone who associates with its tech. The background to the ruling is well-known by now. Anthropic went overnight - literally - from being the only AI provider with security clearance for the Pentagon's most sensitive systems to being deemed a threat to national security as a result of its refusal to remove red line clauses from its contract with the US Government. These centered on not allowing its tech to be used to spy on US citizens, and not allowing AI to launch missiles on its own! As well as ditching Anthropic directly, which all parties appear to agree is within the law, Secretary of War Pete Hegseth and President Donald Trump went further, telling all Federal agencies that they need to remove all trace of Anthropic within six months, and warning third parties who want to pick up business with the Government to break ties with the firm. That would cost Anthropic multi-billions of dollars, according to the vendor, which legalled up and applied for an injunction to prevent what it called over-reach by the authorities being put into action. US District Court Judge Rita Lin heard evidence from both sides earlier this week, which included an admission from DoW lawyers that Hegseth mis-spoke in his pronouncements against Anthropic, and has now come down pretty firmly on the side of Anthropic. As per her ruling on Thursday, Secretary Hegseth acted without following due procedure when he announced his 'final decision' via X, rather than seeking appropriate Congressional approvals.
He and the Administration are also accused of being excessive in their actions: The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press...Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation. [The Pentagon's] designation of Anthropic as a 'supply chain risk' is likely both contrary to law and arbitrary and capricious...Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government. Lin made a point of picking out intemperate language used by both Hegseth and Trump in their public attacks on Anthropic, which called the firm "woke" and made up of "left-wing nut jobs", as being indicative of their true intent in terms of the actions they took: If this were merely a contracting impasse, DoW would presumably have just stopped using Claude. The challenged actions, however, far exceed the scope of what could reasonably address such a national security interest...The government's actions raise serious constitutional questions. Labelling a domestic company a 'supply chain risk' without meaningful process undermines the very principles the government claims to protect. She added in her 28-page opinion that the Government made no finding of an actual or imminent threat to national security, but simply disagreed with the company's terms of service: If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic. The preliminary injunction means that Anthropic can resume work on existing Federal Government contracts as per the status quo on 27 February, while the case proceeds through the courts.
The company said in a statement: While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI. Meanwhile it's been all quiet from leading US Government sources on X since the ruling, although the Pentagon's official line is that it is reviewing the decision and will evaluate its options. It has a week to appeal Lin's ruling before the injunction takes effect. I don't like it - it's too quiet out there! The Pete Hegseth who was quick to take to X to denounce Anthropic's "master class in arrogance and betrayal" has held his fire so far since the Judge's ruling came in.
[6]
Anthropic wins first round in case against US administration's Claude ban
A US judge did not mince her words in a ruling that described the US administration's designation of Anthropic as a 'supply chain risk' as 'arbitrary and capricious'. Anthropic has won its first round in the court case it has taken against the US administration's ban on the use of its products in government, with US district judge Rita F Lin issuing a preliminary injunction yesterday (26 March) pausing the US administration's plan to ban all use of Claude products. The administration now has seven days to appeal the judgement. Anthropic drew the ire of the US administration after a standoff with the Pentagon, where Anthropic refused to change its safeguards related to using its AI for fully autonomous weapons, or for mass surveillance of US citizens. Anthropic confirmed on March 5 that it received a letter from the Department of [Defense] saying it had been designated a 'supply chain risk' by the US administration, and said it had no choice but to challenge the decision in the courts. Many in Silicon Valley supported Anthropic's relatively principled stand, and general users sent it to the top of the US Apple charts for free downloads at the time - beating OpenAI's ChatGPT for the first time. The US 'supply chain risk' designation was seen by most as a way of punishing Anthropic for not bowing to government pressure, and now a district judge has backed that premise and granted a temporary injunction on the ban. "These broad measures do not appear to be directed at the government's stated national security interests," the judge said in her ruling. "If the concern is the integrity of the operational chain of command, the Department of War [sic] could just stop using Claude. Instead, these measures appear designed to punish Anthropic." "One of the amicus briefs described these measures as 'attempted corporate murder'. They might not be murder, but the evidence shows that they would cripple Anthropic.
The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press." "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," she continued. "Moreover, Defendants' [US Government] designation of Anthropic as a 'supply chain risk' is likely both contrary to law and arbitrary and capricious." Siliconrepublic.com has reached out to Anthropic for comment on the decision.
[7]
Anthropic wins major injunction against Pentagon's AI blacklist
A federal judge granted Anthropic an injunction against the U.S. government's "supply chain risk" designation, ordering the administration to rescind the label. The ruling impacts the ability of federal agencies to continue working with the artificial intelligence company and follows a period of dispute over the Pentagon's use of Anthropic's software. Judge Rita F. Lin of the Northern District of California stated the government's orders had ignored free speech protections, according to the Wall Street Journal. "It looks like an attempt to cripple Anthropic," Lin reportedly said during court proceedings. The dispute began last month over guidelines for the government's use of Anthropic's AI software. Anthropic sought to restrict the government from using its AI models in autonomous weapons systems or for mass surveillance. The government disagreed with these limitations, subsequently labeling the company a "supply chain risk." President Trump ordered federal agencies to cease ties with Anthropic. Anthropic CEO Dario Amodei called the Defense Department's actions "retaliatory and punitive." The White House characterized Anthropic as "a radical-left, woke company" that is endangering U.S. national security. Anthropic stated: "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits." The company added its focus remains on "working productively with the government to ensure all Americans benefit from safe, reliable AI," as reported by TechCrunch.
[8]
US Judge Blocks Pentagon's Anthropic Blacklisting for Now
March 26 (Reuters) - A U.S. judge on Thursday temporarily blocked the Pentagon's blacklisting of Anthropic, the latest turn in the Claude maker's high-stakes fight with the military over AI safety on the battlefield. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries. Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons, blocked Anthropic from certain military contracts. Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm. Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights, but the Pentagon says private companies should not be able to constrain military action. U.S. District Judge Rita Lin, an appointee of former Democratic President Joe Biden, handed down the ruling at a hearing in San Francisco after Anthropic asked for a temporary order blocking the designation while the litigation plays out. Lin's ruling is not final, and the case is still pending. Anthropic's designation was the first time a U.S. company has been publicly designated a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. 
The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts. (Reporting by Jack Queen in New York and Kanishka Singh in Washington, editing by Noeleen Walder and Matthew Lewis)
[9]
US judge blocks Pentagon's Anthropic blacklisting for now
OpenAI and Anthropic logos are seen in this illustration created on September 12, 2025. A US judge on Thursday temporarily blocked the Pentagon's blacklisting of Anthropic, the latest turn in the Claude maker's high-stakes fight with the military over AI safety on the battlefield. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries. Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use AI chatbot Claude for US surveillance or autonomous weapons, blocked Anthropic from certain military contracts. Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm. Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights, but the Pentagon says private companies should not be able to constrain military action. US District Judge Rita Lin, an appointee of former Democratic President Joe Biden, handed down the ruling at a hearing in San Francisco after Anthropic asked for a temporary order blocking the designation while the litigation plays out. Lin's ruling is not final, and the case is still pending. Anthropic: Government violated First Amendment rights Anthropic's designation was the first time a US company has been publicly designated a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety.
The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. The lawsuit says the decision was unlawful, unsupported by facts, and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, DC, over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.
[10]
US court halts the Pentagon's 'supply chain risk' tag on Anthropic
A U.S. federal judge has temporarily blocked the Pentagon from blacklisting AI company Anthropic, marking a key moment in an ongoing dispute over how the military can use artificial intelligence, according to a Reuters report. U.S. District Judge Rita Lin ruled that the government's action to label the company a "national security supply-chain risk" may be unlawful. The order will take effect after seven days, giving the administration time to appeal. The case is still ongoing. Why the Pentagon acted: The dispute began after the Pentagon, under Defense Secretary Pete Hegseth, imposed the designation. This move followed Anthropic's refusal to allow its AI chatbot Claude to be used for domestic surveillance or fully autonomous weapons. The label could block the AI company from key military contracts and, according to Anthropic, cause major financial and reputational damage. Anthropic's legal challenge: In its lawsuit, Anthropic argued that the decision violated its constitutional rights. Anthropic claimed the move retaliated against its public stance on AI safety and violated the First Amendment (free speech) and the Fifth Amendment (due process), as the government did not give it a chance to challenge the designation. The U.S. government defended its action, saying the designation was based on contractual disagreements and concerns that Anthropic's restrictions could limit how the military uses AI during operations. What the court said: Judge Lin, however, questioned the intent behind the move. In her ruling, she wrote, "The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press." She added, "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation." How the dispute escalated: The conflict has deeper roots in a breakdown of negotiations between the Pentagon and Anthropic in February 2026. The U.S. 
military had asked the company to allow its AI systems to be used for all "lawful" purposes inside classified networks. Anthropic refused to remove two safeguards: a ban on mass surveillance of American citizens and a restriction on fully autonomous lethal decision-making. The Pentagon then used procurement and national security tools, rather than new laws, to pressure the company. Officials considered invoking the Defense Production Act and began steps toward the same supply-chain risk designation now under challenge. This reflects how governments can shape AI use through contracts rather than formal regulation. Competitive and legal stakes: The dispute also unfolded amid rising competition. Rival firms like OpenAI and others entered talks with the Pentagon under broader "all lawful use" terms, reducing Anthropic's leverage as the only provider inside certain classified systems. This marks the first known instance of the U.S. government publicly labelling a domestic company as a supply-chain risk under a law typically used to guard against foreign threats. Court filings also allege that officials lacked evidence for the designation and used it as pressure after Anthropic refused to change its policies, a claim the government disputes. Anthropic has also filed a separate case in Washington, D.C., challenging a related designation that could affect its access to civilian government contracts. For now, the court's order does not require the Pentagon to use Anthropic's technology. It only pauses enforcement of the blacklisting, leaving the larger legal and policy questions unresolved.
A federal judge has temporarily blocked the Pentagon from designating Anthropic as a supply chain risk after the AI company refused to remove safety restrictions on mass surveillance and autonomous weapons. The ruling calls the government's retaliatory actions unconstitutional and a violation of First Amendment rights, setting a potential precedent for how tech companies can challenge military AI applications.
A federal court has delivered a significant blow to the Trump administration's attempt to punish Anthropic for refusing to compromise on AI safety guardrails. US District Judge Rita Lin granted a preliminary injunction that temporarily blocks the Pentagon from designating Anthropic as a supply chain risk, calling the government's actions "Orwellian" and unconstitutional [1]. The ruling prevents federal agencies from banning Anthropic's products and halts enforcement of Defense Secretary Pete Hegseth's order that labeled the company a national security risk [2].
Source: Engadget
The conflict erupted during negotiations over a $200 million Pentagon contract when Anthropic CEO Dario Amodei refused to allow the use of Claude for mass surveillance and fully autonomous weapons. Amodei explained that the government could purchase data on average Americans from data brokers and use AI to create comprehensive profiles without warrants, an operation he would not support with his company's technology. He also argued that AI isn't ready for deployment in fully autonomous weapons because it cannot make judgments like humans, stating "We will not knowingly provide a product that puts America's warfighters and civilians at risk" [1].

In her sharply worded 43-page decision, Judge Rita Lin found that the government's response appeared designed to punish Anthropic rather than address legitimate security concerns. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," Lin wrote [4]. The judge emphasized that Anthropic's First Amendment rights were violated, stating that "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation" [2]. She noted that the supply chain risk designation is typically reserved for foreign entities in adversarial nations like China, not domestic companies expressing ethical concerns [5].

Source: TechSpot
The dispute escalated when Donald Trump took to social media, posting "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS," and directing every federal agency to immediately cease using Anthropic's technology [1]. Pete Hegseth went further, warning companies wanting federal government contracts to sever ties with Anthropic [2]. Judge Lin specifically cited this intemperate language from both Hegseth and Trump, who called the firm "woke" and made up of "left-wing nut jobs," as evidence of their retaliatory intent [5].
The case is being closely watched across the technology industry as a potential precedent for how the government may treat companies that challenge military applications of their AI systems. Microsoft, along with employees of OpenAI and Google, submitted amicus briefs supporting Anthropic's position. The company argued that the designation violated its rights to due process, with Judge Lin agreeing that Secretary Hegseth acted without following proper procedure when he announced his decision via social media rather than seeking appropriate Congressional approvals [5]. The Department of Defense argued that giving Anthropic continued access to warfighting infrastructure would "introduce unacceptable risk" to supply chains, but Lin countered that "If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic" [1].

Source: diginomica
While this preliminary injunction represents a significant victory, Anthropic's legal battle continues. The Pentagon has seven days to appeal Lin's ruling before the injunction takes effect, and the company has another case pending in the US Court of Appeals for the DC Circuit [1]. Judge Lin indicated that Anthropic "has shown a likelihood of success on its First Amendment claim" [2]. Meanwhile, OpenAI has struck a deal with the Pentagon to deploy its AI models on the military's classified network, highlighting the divergent approaches companies are taking toward military partnerships [1]. Anthropic stated it remains "focused on working productively with the government to ensure all Americans benefit from safe, reliable AI" [2].

Summarized by Navi