21 Sources
[1]
Hegseth, Trump had no authority to order Anthropic to be blacklisted, judge says
"Classic First Amendment retaliation." That's how US District Judge Rita Lin described the Department of War's effort to blacklist Anthropic and designate it a supply-chain risk. By all appearances, "these measures appear designed to punish Anthropic," Lin wrote in an order granting Anthropic's request for a preliminary injunction. Officials seemingly had no authority to take such extreme actions without considering less restrictive alternatives or offering any evidence that Anthropic posed an urgent risk to national security, Lin said. Instead, "the Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press.'" "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin said. Anthropic's spokesperson told Ars the firm is "grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits." But Anthropic remains in a difficult position, still afraid that the fight will block it from competing for lucrative government contracts. In a blog earlier this month, Anthropic maintained that "Anthropic has much more in common with the Department of War than we have differences" and should be working together to deploy AI safely across government. Anthropic is still walking the same line in the aftermath of Lin's order. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Anthropic's spokesperson said. DoW official calls order a "disgrace" For Anthropic this fight could be existential. After the DoW's actions, three trade deals were promptly cancelled, while other potential partners delayed talks. The company showed it was already suffering irreparable harms that would only worsen the longer the blacklisting was upheld -- including losing potentially billions in private and government contracts the company expected to sign over the next five years, Lin noted. To prevent ongoing harms, Lin ordered a preliminary injunction blocking US agencies from complying with directives from Donald Trump and Secretary of War Pete Hegseth. However, she also granted the government's request for an administrative stay, which delays the injunction from taking effect for seven days. That gives the government a brief window to seek an emergency stay from an appeals court. Asked for comment, DoW pointed to statements on X from Under Secretary of War Emil Michael, who emphasized that the supply-chain risk designation still applies over the next week. Showing that Trump officials don't plan to back off the fight, Michael claimed that Lin's order was "a disgrace" and contained "factual errors" due to the judge's supposed rush to order the injunction. According to Michael, Lin did not fully consider how disrupting Hegseth's directive could "disrupt" how US military operations are conducted. However, Lin cited a brief filed in support of Anthropic from military leaders, who warned that letting the directive stand "will materially detract from military readiness and operational safety." Anthropic has argued that its technology is not ready to be used for mass surveillance of Americans or in fully autonomous lethal weapons, potentially posing civil rights risks if leveraged now. 
Lin noted that the case touched on "an important public debate," which is whether an AI company can dictate how the government uses its models. But it was not up to her to decide whether AI firms or the government should be in charge of determining what AI uses are safe for the public. Instead, she had to rule on whether government officials violated Anthropic's First Amendment rights, denied Anthropic due process, or acted arbitrarily or capriciously. And at this stage of the case, Anthropic has shown enough to prove it's likely to succeed on all claims, she said. DoW is not authorized to "designate a domestic vendor a supply chain risk simply because a vendor publicly criticized DoW's views about the safe uses of its system," Lin wrote. In fact, "that designation has never been applied to a domestic company and is directed principally at foreign intelligence agencies, terrorists, and other hostile actors," she said.
"I don't know": Lawyer has no defense of Hegseth
The DoW began using Anthropic's Claude in March 2025 and had been using it for the past year without ever raising any concerns that Anthropic's terms limiting certain uses posed a national security risk, Lin said. Rather, the government thoroughly vetted Claude before implementing it, praised Anthropic publicly, and planned to expand the partnership. The amicable nature of the partnership only changed, the judge said, after DoW sought to deploy Claude on a military platform and Anthropic ultimately agreed to do so with "two critical exceptions: mass surveillance of Americans and lethal autonomous warfare." Based on its testing, the company could not guarantee that Americans' civil rights would not be infringed if Claude were used for these purposes, Anthropic said. If the government disliked the terms, Anthropic repeatedly said it would understand if another vendor was selected, simply bowing out to avoid compromising on AI safety principles that might "undercut Anthropic's core identity," Lin wrote. Calling out Anthropic for "utopian idealism," DoW officials blasted the company for supposedly trying to get the government to let a private company decide how military operations are conducted. "You can't have an AI company sell AI to the Department of War and [then say] don't let it do Department of War things," Michael told the press. Officials accused Anthropic of trying to use the DoW dispute to spin up positive press, and Trump joined the chorus. On Truth Social, he labeled Anthropic a "radical left, woke company," allegedly putting its "selfishness" above national security. Following Trump's post, Hegseth took to X, writing that "Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." In both posts, officials claimed that orders to blacklist Anthropic were effective immediately, but neither cited what authority they had to do so. During oral arguments, a government lawyer later admitted that "he was not aware of any statute that gave Secretary Hegseth the authority to issue such a prohibition and agreed that the statement had 'absolutely no legal effect at all,'" Lin wrote.
Further, "when asked why Hegseth made a public statement that had no legal effect and that did not reflect the immediate intent of DoW, counsel stated, 'I don't know.'" Perhaps most glaringly, Hegseth seemed to contradict himself when arguing both that Anthropic "presented a grave threat to national security" requiring a supply-chain risk designation and that "Anthropic was essential to national security" and could be compelled to provide services under the Defense Production Act. The only reason that the government gave for labeling Anthropic a national security risk was that the company could supposedly update its products and compromise systems. Officials claimed that Anthropic would be motivated to sabotage the military as retaliation for the directives. But Lin didn't find that likely, either, since any other IT provider could potentially introduce the same risks. More importantly, Anthropic showed unrebutted evidence that it would be impossible to force updates or otherwise control the government's systems. To the judge, any national security risk could be foreclosed by simply ending the military's contract with Anthropic, which Anthropic had already agreed would be understandable. Absent any statutory basis, it seemed clear from officials' statements that Anthropic was being punished for publicly criticizing the military's plans, the judge concluded. As Anthropic alleged, "defendants set out to publicly punish Anthropic for its 'ideology' and 'rhetoric,' as well as its 'arrogance' for being unwilling to compromise those beliefs," the judge said. "Secretary Hegseth expressly tied Anthropic's punishment to its attitude and rhetoric in the press."
Anthropic retaliation "deeply troubling," judge says
On top of rushing to shut down government contracts with Anthropic and influence its commercial deals with any business that also hoped to work with DoW, Hegseth also failed to give Anthropic an opportunity to respond to the claims against it before taking action that, the record shows, was not urgent. Civil rights and public safety advocates had urged the court to block the government's actions or else risk a chilling effect preventing any AI firm from speaking up about unsafe government AI uses. Ultimately, Lin agreed that any time the government raised a red flag branding a vendor an "adversary," it was "deeply troubling." That could "chill open deliberation" and "professional debate" among those "best positioned to understand AI technology" and its potential for "catastrophic misuse," the judge wrote. As the government likely moves to block the preliminary injunction, it argues that Lin's order could force it to pay for Anthropic products and never recover that money. The government also claimed it was conducting an audit to see if any security risks currently exist that could justify the supply-chain risk designation. Lin doesn't see it that way, though. She wrote that "the preliminary injunctive relief that the Court authorizes does not require the government to continue to use Claude on its national security systems." She also noted that "the government 'cannot suffer harm from an injunction that merely ends an unlawful practice.'"
[2]
The Pentagon's culture war tactic against Anthropic has backfired
Last Thursday, a California judge temporarily blocked the Pentagon from labeling Anthropic a supply chain risk and ordering government agencies to stop using its AI. It's the latest development in the month-long feud. And the matter still isn't settled: The government was given seven days to appeal, and Anthropic has a second case against the designation that has yet to be decided. Until then, the company remains persona non grata with the government. The stakes in the case -- how much the government can punish a company for not playing ball -- were apparent from the start. Anthropic drew lots of senior supporters, with unlikely bedfellows among them, including former authors of President Trump's AI policy. But Judge Rita Lin's 43-page opinion suggests that what is really a contract dispute never needed to reach such a frenzy. It did so because the government disregarded the existing process for how such disputes are governed and fueled the fire with social media posts from officials that would eventually contradict the positions it took in court. The Pentagon, in other words, wanted a culture war (on top of the actual war in Iran that began hours later). The government used Anthropic's Claude for much of 2025 without complaint, according to court documents, while the company walked a branding tightrope as a safety-focused AI company that also won defense contracts. Defense employees accessing it through Palantir were required to accept terms of a government-specific usage policy that Anthropic cofounder Jared Kaplan said "prohibited mass surveillance of Americans and lethal autonomous warfare" (Kaplan's declaration to the court didn't include details of the policy). Only when the government aimed to contract with Anthropic directly did the disagreements begin. What drew the ire of the judge is that when these disagreements became public, the government's response had more to do with punishment than with simply cutting ties with Anthropic. And it followed a pattern: Tweet first, lawyer later. President Trump's post on Truth Social on February 27 referenced "Leftwing nutjobs" at Anthropic and directed every federal agency to stop using the company's AI. This was echoed soon after by Defense Secretary Pete Hegseth, who said he'd direct the Pentagon to label Anthropic a supply chain risk. Doing so necessitates that the secretary take a specific set of actions, which the judge found Hegseth did not complete. Letters sent to congressional committees, for example, said that less drastic steps were evaluated and deemed not possible, without providing any further details. The government also said the designation as a supply chain risk was necessary because Anthropic could implement a "kill switch," but its lawyers later had to admit it had no evidence of that, the judge wrote. Hegseth's post also stated that "No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." But the government's own lawyers admitted on Tuesday that the Secretary doesn't have the power to do that, and agreed with the judge that the statement had "absolutely no legal effect at all." The aggressive posts also led the judge to conclude that Anthropic was on solid ground in complaining that its First Amendment rights were violated. The government, the judge wrote while citing the posts, "set out to publicly punish Anthropic for its 'ideology' and 'rhetoric,' as well as its 'arrogance' for being unwilling to compromise those beliefs."
[3]
US judge sides with Anthropic, says company supply chain risk branding over Pentagon disagreement 'Orwellian' -- Trump slapped AI company with designation after it refused to lower its guardrails for the military
U.S. courts say that the federal government cannot treat companies that disagree with it this way. A U.S. court has sided with Anthropic and is temporarily blocking the Pentagon from calling the company a supply chain risk. The ruling comes after the AI tech company sued the Department of War for designating it as such, after the military's demand to bypass the firm's AI safety policies was refused. According to the Associated Press, the U.S. government argues that it should be able to use the AI tool in any way it deems lawful, but U.S. District Judge Rita Lin said that her ruling was not about how the Pentagon wanted to use Claude, but its response when Anthropic refused to give in to the department's demands. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Judge Lin wrote in her decision. She also added, "If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic." This issue stemmed from Anthropic CEO Dario Amodei's refusal to allow the use of Claude for mass domestic surveillance and fully autonomous weapons. Amodei said that the government can purchase information on the average American from data brokers and then use AI to turn all these data points into one cohesive profile without requiring a warrant, and that he wouldn't allow his company's tool to be party to that operation. Aside from that, he said that AI isn't ready to be deployed in fully autonomous weapons because it cannot make judgments like humans. "We will not knowingly provide a product that puts America's warfighters and civilians at risk," the Anthropic chief said. The company's decision to go against the Pentagon's demand brought the ire of U.S. President Donald Trump. "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS," Trump posted on his platform. He also wrote, "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all of Anthropic's technology." Still, the company is fighting back, filing cases against the administration and arguing that the "supply chain risk" designation violated its First Amendment rights and also did not give Anthropic due process. Judge Rita Lin's decision is a win for the company, but it isn't over for Anthropic, as this is just a temporary block. Aside from that, the company also has another case filed against the government waiting to be heard in the U.S. Court of Appeals for the D.C. Circuit. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in its court filing. But in the meantime, OpenAI has struck a deal with the Pentagon to deploy its AI models on the military's classified network.
[4]
Trump administration appeals ruling that blocked Pentagon action against Anthropic over AI dispute
SAN FRANCISCO (AP) -- The Trump administration is appealing a judge's order blocking the federal government from taking punitive measures against artificial intelligence company Anthropic after a dispute with the Pentagon over military use of AI. Department of Justice attorneys filed a notice in San Francisco federal court on Thursday of their intention to appeal last week's ruling by U.S. District Judge Rita Lin. Lin last week said she was blocking the Pentagon from labeling Anthropic a supply chain risk. She also said she was blocking enforcement of President Donald Trump's social media directive ordering all federal agencies to stop using Anthropic and its chatbot Claude. Lin said the "broad punitive measures" taken against the AI company by the Trump administration and Defense Secretary Pete Hegseth appeared arbitrary, capricious and could "cripple Anthropic," particularly Hegseth's use of a rare military authority that's previously been directed at foreign adversaries. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Lin wrote. A top Pentagon official last week called Lin's order a "disgrace." U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, said on social media it would disrupt Hegseth's "full ability to conduct military operations with the partners it chooses." Lin had stayed her order for a week, which gave time for the Pentagon to take the case to the Ninth Circuit Court of Appeals. She had also said her order doesn't require the Pentagon to use Anthropic's products or prevent it from transitioning to other AI providers. Anthropic has also filed a separate and more narrow case that is still pending in the federal appeals court in Washington, D.C. That case involves a different rule the Pentagon is using to try to declare Anthropic a supply chain risk. Trump and Hegseth publicly announced their actions against Anthropic on Feb. 27 after negotiations over a defense contract went sour over the company's attempt to prevent its AI technology from being deployed in fully autonomous weapons or surveillance of Americans. The Pentagon had argued that it should be able to use Claude in any way it deems lawful. A number of third parties had filed legal briefs supporting Anthropic's case, including Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders and a group of Catholic theologians.
[5]
Court temporarily blocks US government from labeling Anthropic as a 'supply chain risk'
The court has granted Anthropic's request for a preliminary injunction, preventing the government from banning its products for federal use and from formally labeling it as a "supply chain risk," at least for now. If you'll recall, things turned sour between the company and the Trump administration when Anthropic refused to change the terms of its contract in a way that would allow the government to use its technology for mass surveillance and the development of autonomous weapons. In response to Anthropic's refusal, the president ordered federal agencies to stop using Claude and the company's other services. The Defense Department also officially labeled it as a supply chain risk, a designation typically reserved for entities based in US adversary nations like China that threaten national security. In addition, Defense Secretary Pete Hegseth warned companies that if they want to work with the government, they must sever ties with Anthropic. The AI company challenged the designation in court, calling it unlawful and in violation of free speech and its rights to due process. It asked the court to put a pause on the ban while the lawsuit is ongoing, as well. In a court filing, the Defense Department said giving Anthropic continued access to its warfighting infrastructure would "introduce unacceptable risk" to its supply chains. But Judge Rita F. Lin of the District Court for the Northern District of California said the measures the government took "appear designed to punish Anthropic." Lin wrote in her decision that it seems Anthropic is being punished for criticizing the government in the press. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," she continued. The judge also said that the supply chain risk designation is contrary to law, arbitrary and capricious. She added that the government argued that Anthropic showed its subversive tendencies by "questioning" the use of its technology. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," she wrote. Anthropic told The New York Times that it's "grateful to the court for moving swiftly" and that it's now focused on "working productively with the government to ensure all Americans benefit from safe, reliable AI." The company's lawsuit is still ongoing, and the court has yet to issue its final decision. Judge Lin said, however, that Anthropic "has shown a likelihood of success on its First Amendment claim."
[6]
Judge blocks Pentagon from labeling Anthropic a national security risk
What just happened? A federal judge in California has blocked the Department of Defense from designating Anthropic as a national security risk. The ruling provides the company with temporary relief as it continues its legal battle with the Pentagon over whether private technology firms can object to how their AI is used in military programs. In a sharply worded 43-page order, US District Judge Rita F. Lin said the government's actions appeared to be driven by retaliation rather than legitimate security concerns. "The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," Judge Lin wrote. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the United States for expressing disagreement with the government." The injunction halts enforcement of a February 27 order from Defense Secretary Pete Hegseth labeling Anthropic a "supply chain risk," a designation typically reserved for foreign entities viewed as threats to US national security. The label would have barred Anthropic from selling its systems to federal agencies and potentially from working with other defense contractors. The ruling also prevents agencies from carrying out a related directive issued after President Trump amplified the designation on social media. The dispute began during negotiations over a $200 million Pentagon contract focused on artificial intelligence capabilities for defense operations. Anthropic, which has long advocated for guardrails on AI deployment, sought restrictions on how its models could be used in surveillance or autonomous weapons systems. Defense officials resisted, arguing that no private company should dictate how the military applies the technology it acquires. Shortly after those discussions broke down, the Pentagon labeled Anthropic a supply chain risk. The company responded by filing lawsuits in two federal courts, alleging that the government's actions were both punitive and unconstitutional. It argued that the designation violated its First Amendment rights and inflicted "irreparable harm" on its reputation and business. Judge Lin agreed that the company had demonstrated a strong likelihood of success on the merits, stating that "government officials cannot use the power of the state to punish or suppress disfavored expression." She added that the Defense Department's decision appeared "arbitrary and capricious" and that "the balance of equities and public interest favor Anthropic." The temporary injunction, which will take effect unless the Pentagon appeals within seven days, is being closely watched across the technology industry. Microsoft, along with employees of OpenAI and Google, submitted amicus briefs supporting Anthropic's position. Many in the sector view the case as a potential precedent for how the government may treat companies that challenge its use of AI systems in sensitive or military contexts. At a hearing earlier this week, Judge Lin signaled skepticism toward the Pentagon's assertion that Anthropic could manipulate its own technology "to suit its own interests" in wartime scenarios. "It looks like an attempt to cripple Anthropic," she said.
In a statement following the ruling, Anthropic said it was "grateful to the court for moving swiftly" and reaffirmed that its "focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI." The Defense Department has not yet publicly commented on the decision.
[7]
Trump administration appeals Anthropic ruling
Why it matters: The appeal escalates a high-stakes fight that could reshape how the government works with AI companies.
* The Trump administration isn't giving up its fight against Anthropic any time soon, even as there's been chatter that the deal could get revived.
Driving the news: Government lawyers for the Justice Department filed a notice of appeal to the U.S. Court of Appeals for the Ninth Circuit on behalf of the Pentagon on Thursday.
* Anthropic and the Pentagon have been at odds for months after a deal for the government to use Claude collapsed over the company's red lines for the technology.
Context: A federal judge last week temporarily paused the Trump administration's designation of Anthropic as a supply chain risk in an early legal win for the company.
* Anthropic said the designation was causing immediate and irreparable harm as business partners rethink their contracts and federal agencies remove Claude.
* A parallel case is ongoing in a D.C. court.
* Anthropic is arguing in both proceedings that the Pentagon is violating the First Amendment and procurement law, while the Defense Department says that the dispute is about the military's ability to use the technology, not speech.
The supply chain risk designation is typically reserved for foreign adversaries that pose a national security risk and effectively bars companies from using Claude in work tied to the Defense Department.
[8]
Judge blocks Trump administration's 'Orwellian' branding of Anthropic
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," judge Lin wrote. Anthropic has won an early round in its legal battle against the Trump administration, after a federal judge awarded the artificial intelligence company an injunction against the government's order that labelled it a "supply chain risk". US President Donald Trump and Defense Secretary Pete Hegseth publicly declared in February that it was cutting ties with Anthropic after it refused to allow unrestricted military use of its Claude AI model. The restrictions include the use of lethal autonomous weapons without human oversight and mass surveillance of Americans. In response, the US government labelled Anthropic a "supply chain risk to national security" and ordered federal agents to stop using Claude. Judge Rita F. Lin of the Northern District of California said at the outset of the hearing that "it looks like an attempt to cripple Anthropic," "cripple Anthropic" and "chill public debate" because the company was concerned over how the US Department of Defence was using its technology. "This appears to be classic First Amendment retaliation," she added. Lin said the "broad punitive measures" taken against the AI company by the Trump administration and Hegseth appeared arbitrary, capricious and could "cripple Anthropic," particularly Hegseth's use of a rare military authority that's previously been directed at foreign adversaries. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," Lin wrote. Anthropic filed two lawsuits against the government over its designation as a supply chain risk. One is a case for reconsideration of the supply chain risk, and the other alleges the Trump administration violated the company's First Amendment right to speech. The order now means that Anthropic's technology will continue to be used in the government and by outside companies working with the Department of War until the lawsuit is resolved. Euronews Next has reached out to Anthropic for comment.
[9]
First blood to Anthropic as US judge slams Trump 2.0's 'Orwellian' attempt to cripple the firm as unconstitutional and illegal
The war goes on, but Anthropic has won a significant first battle as a US Judge grants a stay of execution against the Department of War's (DoW) "Orwellian" attempt to brand the US AI champion a national security risk, and ban public sector buyers from using it or anyone who associates with its tech. The background to the ruling is well-known by now. Anthropic went overnight - literally - from being the only AI provider with security clearance for the Pentagon's most sensitive systems to being deemed a threat to national security as a result of its refusal to remove red line clauses from its contract with the US Government. These centered on not allowing its tech to be used to spy on US citizens, and not allowing AI to launch missiles on its own! As well as ditching Anthropic directly, which all parties appear to agree is within the law, Secretary of War Pete Hegseth and President Donald Trump went further, telling all Federal agencies that they need to remove all trace of Anthropic within six months, and warning third parties who want to pick up business with the Government to break ties with the firm. That would cost Anthropic multi-billions of dollars, according to the vendor, which legalled up and applied for an injunction to prevent what it called over-reach by the authorities being put into action. US District Court Judge Rita Lin heard evidence from both sides earlier this week, which included an admission from DoW lawyers that Hegseth misspoke in his pronouncements against Anthropic, and has now come down pretty firmly on the side of Anthropic. As per her ruling on Thursday, Secretary Hegseth acted without following due procedure when he announced his 'final decision' via X, rather than seeking appropriate Congressional approvals. He and the Administration are also accused of being excessive in their actions:
The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press...Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation. [The Pentagon's] designation of Anthropic as a 'supply chain risk' is likely both contrary to law and arbitrary and capricious...Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.
Lin made a point of picking out intemperate language used by both Hegseth and Trump in their public attacks on Anthropic, which called the firm "woke" and made up of "left-wing nut jobs", as being indicative of their true intent in terms of the actions they took:
If this were merely a contracting impasse, DoW would presumably have just stopped using Claude. The challenged actions, however, far exceed the scope of what could reasonably address such a national security interest...The government's actions raise serious constitutional questions. Labelling a domestic company a 'supply chain risk' without meaningful process undermines the very principles the government claims to protect.
She added in her 28-page opinion that the Government made no finding of an actual or imminent threat to national security, but simply disagreed with the company's terms of service:
If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic.
The preliminary injunction means that Anthropic can resume work on existing Federal Government contracts as per the status quo on 27 February, while the case proceeds through the courts. The company said in a statement: While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI. Meanwhile it's been all quiet from leading US Government sources on X since the ruling, although the Pentagon's official line is that it is reviewing the decision and will evaluate its options. It has a week to appeal Lin's ruling before the injunction takes effect. I don't like it - it's too quiet out there! The Pete Hegseth who was quick to take to X to denounce Anthropic's "master class in arrogance and betrayal" has held his fire so far since the Judge's ruling came in.
[10]
Anthropic wins first round in case against US administration's Claude ban
A US judge did not mince her words in a ruling that described the US administration's designation of Anthropic as a 'supply chain risk' as 'arbitrary and capricious'. Anthropic has won its first round in the court case it has taken against the US administration's ban on the use of its products in government, with US district judge Rita F Lin issuing a preliminary injunction yesterday (26 March) pausing the US administration's plan to ban all use of Claude products. The administration now has seven days to appeal the judgement. Anthropic drew the ire of the US administration after a standoff with the Pentagon, where Anthropic refused to change its safeguards related to using its AI for fully autonomous weapons, or for mass surveillance of US citizens. Anthropic confirmed on March 5 that it received a letter from the Department of [Defense] saying it had been designated a 'supply chain risk' by the US administration, and said it had no choice but to challenge the decision in the courts. Many in Silicon Valley supported Anthropic's relatively principled stand, and general users sent it to the top of the US Apple charts for free downloads at the time - beating OpenAI's ChatGPT for the first time. The US 'supply chain risk' designation was seen by most as a way of punishing Anthropic for not bowing to government pressure, and now a district judge has backed that premise and granted a temporary injunction on the ban. "These broad measures do not appear to be directed at the government's stated national security interests," the judge said in her ruling. "If the concern is the integrity of the operational chain of command, the Department of War [sic] could just stop using Claude. Instead, these measures appear designed to punish Anthropic." "One of the amicus briefs described these measures as 'attempted corporate murder'. They might not be murder, but the evidence shows that they would cripple Anthropic. The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press." "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," she continued. "Moreover, Defendants' [US Government] designation of Anthropic as a 'supply chain risk' is likely both contrary to law and arbitrary and capricious." Siliconrepublic.com has reached out to Anthropic for comment on the decision.
[11]
Trump administration appeals ruling that blocked Pentagon action against Anthropic over AI dispute
SAN FRANCISCO (AP) -- The Trump administration is appealing a judge's order blocking the federal government from taking punitive measures against artificial intelligence company Anthropic after a dispute with the Pentagon over military use of AI. Department of Justice attorneys filed a notice in San Francisco federal court on Thursday of their intention to appeal last week's ruling by U.S. District Judge Rita Lin. Lin last week said she was blocking the Pentagon from labeling Anthropic a supply chain risk. She also said she was blocking enforcement of President Donald Trump's social media directive ordering all federal agencies to stop using Anthropic and its chatbot Claude. Lin said the "broad punitive measures" taken against the AI company by the Trump administration and Defense Secretary Pete Hegseth appeared arbitrary, capricious and could "cripple Anthropic," particularly Hegseth's use of a rare military authority that's previously been directed at foreign adversaries. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Lin wrote. A top Pentagon official last week called Lin's order a "disgrace." U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, said on social media it would disrupt Hegseth's "full ability to conduct military operations with the partners it chooses." Lin had stayed her order for a week, which gave time for the Pentagon to take the case to the Ninth Circuit Court of Appeals. She had also said her order doesn't require the Pentagon to use Anthropic's products or prevent it from transitioning to other AI providers. Anthropic has also filed a separate and more narrow case that is still pending in the federal appeals court in Washington, D.C. That case involves a different rule the Pentagon is using to try to declare Anthropic a supply chain risk. Trump and Hegseth publicly announced their actions against Anthropic on Feb. 27 after negotiations over a defense contract went sour over the company's attempt to prevent its AI technology from being deployed in fully autonomous weapons or surveillance of Americans. The Pentagon had argued that it should be able to use Claude in any way it deems lawful. A number of third parties had filed legal briefs supporting Anthropic's case, including Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders and a group of Catholic theologians.
[12]
Anthropic wins major injunction against Pentagon's AI blacklist
A federal judge granted Anthropic an injunction against the U.S. government's "supply chain risk" designation, ordering the administration to rescind the label. The ruling impacts the ability of federal agencies to continue working with the artificial intelligence company and follows a period of dispute over the Pentagon's use of Anthropic's software. Judge Rita F. Lin of the Northern District of California stated the government's orders had ignored free speech protections, according to the Wall Street Journal. "It looks like an attempt to cripple Anthropic," Lin reportedly said during court proceedings. The dispute began last month over guidelines for the government's use of Anthropic's AI software. Anthropic sought to restrict the government from using its AI models in autonomous weapons systems or for mass surveillance. The government disagreed with these limitations, subsequently labeling the company a "supply chain risk." President Trump ordered federal agencies to cease ties with Anthropic. Anthropic CEO Dario Amodei called the Defense Department's actions "retaliatory and punitive." The White House characterized Anthropic as "a radical-left, woke company" that is endangering U.S. national security. Anthropic stated: "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits." The company added its focus remains on "working productively with the government to ensure all Americans benefit from safe, reliable AI," as reported by TechCrunch.
[13]
Trump Administration Appeals Ruling That Blocked Pentagon Action Against Anthropic Over AI Dispute
SAN FRANCISCO (AP) -- The Trump administration is appealing a judge's order blocking the federal government from taking punitive measures against artificial intelligence company Anthropic after a dispute with the Pentagon over military use of AI. Department of Justice attorneys filed a notice in San Francisco federal court on Thursday of their intention to appeal last week's ruling by U.S. District Judge Rita Lin. Lin last week said she was blocking the Pentagon from labeling Anthropic a supply chain risk. She also said she was blocking enforcement of President Donald Trump's social media directive ordering all federal agencies to stop using Anthropic and its chatbot Claude. Lin said the "broad punitive measures" taken against the AI company by the Trump administration and Defense Secretary Pete Hegseth appeared arbitrary, capricious and could "cripple Anthropic," particularly Hegseth's use of a rare military authority that's previously been directed at foreign adversaries. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Lin wrote. A top Pentagon official last week called Lin's order a "disgrace." U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, said on social media it would disrupt Hegseth's "full ability to conduct military operations with the partners it chooses." Lin had stayed her order for a week, which gave time for the Pentagon to take the case to the Ninth Circuit Court of Appeals. She had also said her order doesn't require the Pentagon to use Anthropic's products or prevent it from transitioning to other AI providers. Anthropic has also filed a separate and more narrow case that is still pending in the federal appeals court in Washington, D.C. That case involves a different rule the Pentagon is using to try to declare Anthropic a supply chain risk. Trump and Hegseth publicly announced their actions against Anthropic on Feb. 27 after negotiations over a defense contract went sour over the company's attempt to prevent its AI technology from being deployed in fully autonomous weapons or surveillance of Americans. The Pentagon had argued that it should be able to use Claude in any way it deems lawful. A number of third parties had filed legal briefs supporting Anthropic's case, including Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders and a group of Catholic theologians.
[14]
Trump admin asks court to reimpose Anthropic supply chain risk designation
The Trump administration has appealed a federal judge's temporary suspension of the Pentagon's supply chain risk designation for artificial intelligence firm Anthropic. The notice of appeal, filed in a California federal court Thursday, was largely expected, coming just one week after U.S. District Judge Rita Lin sided with Anthropic. Lin temporarily blocked the supply chain risk designation along with President Trump's order directing all federal agencies to cut ties with Anthropic. This directive would ban Anthropic from being awarded government contracts and prevent anyone that does business with the military from contracting with the company. Lin stayed her ruling for one week to give the Trump administration a chance to first seek relief from an appeals court. Anthropic sued the Pentagon last month after the defense agency labelled the AI company a supply chain risk when the two sides failed to agree on safety guardrails. The company, which has partnered with the Pentagon since 2024, demanded its technology not be used in fully autonomous lethal weapons or for the mass surveillance of Americans. Anthropic contends the Pentagon retaliated against the AI firm over what it believes is a "protected viewpoint" of the company. Lin said in last week's ruling that Anthropic was likely to succeed in its claims that its constitutional due process rights were violated and that Defense Secretary Pete Hegseth did not follow proper procedures. Lin's ruling was celebrated by various factions of the technology industry, which expressed support for a halt to the designation in court filings and behind closed doors last month. Anthropic did not immediately respond to a request for comment.
[15]
US Judge Blocks Pentagon's Anthropic Blacklisting for Now
March 26 (Reuters) - A U.S. judge on Thursday temporarily blocked the Pentagon's blacklisting of Anthropic, the latest turn in the Claude maker's high-stakes fight with the military over AI safety on the battlefield. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries. Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons, blocked Anthropic from certain military contracts. Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm. Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights, but the Pentagon says private companies should not be able to constrain military action. U.S. District Judge Rita Lin, an appointee of former Democratic President Joe Biden, handed down the ruling at a hearing in San Francisco after Anthropic asked for a temporary order blocking the designation while the litigation plays out. Lin's ruling is not final, and the case is still pending. Anthropic's designation was the first time a U.S. company has been publicly designated a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts. (Reporting by Jack Queen in New York and Kanishka Singh in Washington, editing by Noeleen Walder and Matthew Lewis)
[16]
Anthropic Prevails In Initial Ruling, Spotlights 'First Amendment Retaliation'
US District Judge Rita Lin of the San Francisco federal court sided with Anthropic in its request for a preliminary injunction in its legal battle against the Trump administration, calling the government's actions "illegal First Amendment retaliation." This decision temporarily halts the government's actions to blacklist the AI company and prevents the enforcement of a directive from President Donald Trump that bans federal agencies from using Anthropic's Claude models. The judge noted that the defendants' designation of Anthropic as a "supply chain risk" is both contrary to the law and arbitrary and capricious. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," the ruling stated. The court held a virtual hearing earlier this week where Anthropic and the Department of Defense each brought their cases in front of the judge. During the hearing, Lin questioned the U.S. government about its motives for labeling Anthropic a national security threat. While the preliminary injunction provides temporary relief, the final resolution of the case may take several months. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," a spokesperson for Anthropic emailed Benzinga. The Department of Defense did not provide a direct statement but referred to the following comment from Under Secretary of War Emil Michael's post on X. "There are dozens of factual errors in the 42 page judgment rushed out in 48 hours DURING A TIME OF CONFLICT that seeks to upend @POTUS' role as Commander In Chief and disrupt the @SecWar's full ability to conduct military operations with the partners it chooses. A disgrace," Michael wrote.
[17]
US judge blocks Pentagon's Anthropic blacklisting for now
A US judge on Thursday temporarily blocked the Pentagon's blacklisting of Anthropic, the latest turn in the Claude maker's high-stakes fight with the military over AI safety on the battlefield. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries. Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use AI chatbot Claude for US surveillance or autonomous weapons, blocked Anthropic from certain military contracts. Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm. Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights, but the Pentagon says private companies should not be able to constrain military action. US District Judge Rita Lin, an appointee of former Democratic President Joe Biden, handed down the ruling at a hearing in San Francisco after Anthropic asked for a temporary order blocking the designation while the litigation plays out. Lin's ruling is not final, and the case is still pending.
Anthropic: Government violated First Amendment rights
Anthropic's designation was the first time a US company has been publicly designated a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. The lawsuit says the decision was unlawful, unsupported by facts, and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, DC, over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.
[18]
DOJ Appeals to Restore Anthropic Ban
In the case, Anthropic sued to challenge the Defense Department's designation of the company as a supply chain risk, and the judge paused the government's plan while also putting her order on hold for a week so that the government could appeal, according to the report. It was reported Feb. 27 that the White House told federal agencies that the federal government will no longer work with Anthropic and that agencies using the company's AI models will get a six-month phaseout period. The move marked an escalation in a dispute that started inside the Defense Department but expanded to touch the broader government. The decision came just ahead of a Pentagon deadline for Anthropic to agree that the military can use its models in "all lawful use cases." The company refused that concession and sought contract language that would prohibit use of its models for autonomous weapons and mass domestic surveillance. After the Pentagon designated the company a security risk, Anthropic said Feb. 27 that it planned to challenge the government's decision in court. "Legally, a supply chain risk designation ... can only extend to the use of [Anthropic's AI model] Claude as part of Department of War contracts -- it cannot affect how contractors use Claude to serve other customers," Anthropic wrote on its blog. Anthropic then sued the Department of Defense on March 9 to block it from placing a supply chain risk designation on the company. The company argued that the designation violates its rights to free speech and due process and asked the court to reverse the designation and block the federal government from enforcing it. It was reported March 10 that during a court hearing, Anthropic said it could lose billions of dollars due to the government ban on its services. An Anthropic attorney told the court that the government's actions had caused more than 100 customers to express concerns about continuing to engage with the company.
[19]
Anthropic Gets Injunction Against Trump Administration over Pentagon Mess
Court said classifying it as a supply-chain risk and debarring Anthropic from government use was a case of trampling free-speech protection
A US federal judge has supported AI startup Anthropic in its legal battle with the Trump administration over the recent Pentagon fracas that saw it being summarily replaced by OpenAI. The judge awarded the tech company an injunction against the federal government's recent order labelling it a "supply-chain risk". Judge Rita F. Lin of the Northern District of California ruled that such a classification as well as the debarring of Anthropic models from government use by the Trump administration was a case of trampling free-speech protection. The judge also ordered the administration to desist from applying the president's directive that federal agencies stop using Anthropic's technology. "It looks like an attempt to cripple Anthropic," Lin noted during the court proceedings before arguing that the government's orders had flouted free speech protections for the company. Readers would recall that the current saga began in February when the Pentagon and Anthropic went head-to-head over guidelines for the government's usage of the latter's software. Anthropic said it wanted to enforce certain limits on how the government could use its AI models, including banning their use in autonomous weapons systems or for mass surveillance of its own people without judicial approval, but the Trump administration disagreed with the limitations and finally ended up pulling the plug on Dario Amodei's AI startup. Anthropic had sought the injunction to pause the actions that included removing it from all federal business as well as classifying the company as a supply-chain risk. The company said these actions would cause monetary and reputational harm as the case unfolds. Judge Lin's order now bars the administration from implementing, applying or enforcing the president's directive, and hampers the Pentagon's efforts to designate Anthropic as a threat to U.S. national security. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote in the order. A final verdict in the case could still be months away. Judge Lin asked the government attorneys why Anthropic was blacklisted, and in the order she noted that "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government." In fact, Anthropic's lawsuit earlier this month followed several weeks of hard negotiations between officials of the Department of Defence and the company. Late in February, Defence Secretary Pete Hegseth took to X to declare Anthropic a supply-chain risk, meaning its tech "threatens US national security". A few days later, the Department notified Anthropic about the designation via a letter. Several tech giants rallied behind Anthropic as it became the first American company to be publicly named a supply-chain risk, given that the designation has historically been reserved for foreign adversaries. Such a label requires that other defence contractors such as Amazon, Microsoft and Palantir certify that they do not use Anthropic's Claude AI for their work. In fact, the mess was started off by none other than President Donald Trump, who took to Truth Social to order federal agencies to "immediately cease" all use of Anthropic's tech.
"WE will decide the fate of our Country -- NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about," Trump said. The move shocked officials in Washington, including many in the Trump administration itself as they had been relying on Anthropic technology. The company was the first to deploy their models across the Defence department's classified networks. Towards this end, Anthropic had signed a $200 million contract with the Pentagon in July last year. "Everyone, including Anthropic, agrees that the Department of [Defence] is free to stop using Claude and look for a more permissive AI vendor," Lin said earlier last week during a hearing. "I don't see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law," she noted. Looks like there's some more egg on the Trump administration's face after the walloping they got over the tariff reversals. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Anthropic said in a statement.
[20]
US court halts the Pentagon's 'supply chain risk' tag on Anthropic
A U.S. federal judge has temporarily blocked the Pentagon from blacklisting AI company Anthropic, marking a key moment in an ongoing dispute over how the military can use artificial intelligence, according to a Reuters report. U.S. District Judge Rita Lin ruled that the government's action to label the company a "national security supply-chain risk" may be unlawful. The order will take effect after seven days, giving the administration time to appeal. The case is still ongoing.

Why the Pentagon acted: The dispute began after the Pentagon, under Defense Secretary Pete Hegseth, imposed the designation. This move followed Anthropic's refusal to allow its AI chatbot Claude to be used for domestic surveillance or fully autonomous weapons. The label could block the AI company from key military contracts and, according to Anthropic, cause major financial and reputational damage.

Anthropic's legal challenge: In its lawsuit, Anthropic argued that the decision violated its constitutional rights. Anthropic claimed the move retaliated against its public stance on AI safety and violated the First Amendment (free speech) and the Fifth Amendment (due process), as the government did not give it a chance to challenge the designation. The U.S. government defended its action, saying the designation was based on contractual disagreements and concerns that Anthropic's restrictions could limit how the military uses AI during operations.

What the court said: Judge Lin, however, questioned the intent behind the move. In her ruling, she wrote, "The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press." She added, "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

How the dispute escalated: The conflict has deeper roots in a breakdown of negotiations between the Pentagon and Anthropic in February 2026. The U.S. military had asked the company to allow its AI systems to be used for all "lawful" purposes inside classified networks. Anthropic refused to remove two safeguards: a ban on mass surveillance of American citizens and a restriction on fully autonomous lethal decision-making. The Pentagon then used procurement and national security tools, rather than new laws, to pressure the company. Officials considered invoking the Defense Production Act and began steps toward the same supply-chain risk designation now under challenge. This reflects how governments can shape AI use through contracts rather than formal regulation.

Competitive and legal stakes: The dispute also unfolded amid rising competition. Rival firms like OpenAI and others entered talks with the Pentagon under broader "all lawful use" terms, reducing Anthropic's leverage as the only provider inside certain classified systems. This marks the first known instance of the U.S. government publicly labelling a domestic company as a supply-chain risk under a law typically used to guard against foreign threats. Court filings also allege that officials lacked evidence for the designation and used it as pressure after Anthropic refused to change its policies, a claim the government disputes. Anthropic has also filed a separate case in Washington, D.C., challenging a related designation that could affect its access to civilian government contracts. For now, the court's order does not require the Pentagon to use Anthropic's technology.
It only pauses enforcement of the blacklisting, leaving the larger legal and policy questions unresolved.
[21]
Federal Judge Blocks Trump's Move to Cut Anthropic From Government Work
Anthropic has won an early legal victory in its fight with the Trump administration after a federal judge blocked a ban on government use of the company's artificial intelligence tools. The order keeps Anthropic products, including Claude, available to US agencies and contractors for now while the case continues in court. The dispute began after Anthropic resisted broader Pentagon contract terms that tied the use of its AI systems to military applications.

US District Judge Rita F. Lin issued a preliminary injunction that stops the administration from enforcing its plan to cut ties with Anthropic. However, she paused the order for seven days to give the government time to appeal. For now, the ruling grants Anthropic temporary relief in a case that could shape how the government regulates private AI providers.

In her written order, Lin questioned the basis for the administration's move. She said the action did not appear aimed at national security concerns. Instead, she wrote that it appeared designed to punish the company. The judge said this looked like "classic illegal First Amendment retaliation" after Anthropic raised concerns about how its technology could be used.

Anthropic had argued that the government's move could cost it billions of dollars in lost revenue. The company also said the case goes beyond a single contract dispute and touches on whether the government can punish a contractor for views it does not like. As a result, the court's early order gives Anthropic room to continue serving public-sector customers while the case proceeds.
A federal judge temporarily blocked the Pentagon from designating Anthropic as a supply chain risk after the AI company refused to allow its Claude chatbot to be used for mass surveillance of Americans and fully autonomous weapons. Judge Rita Lin called the government's actions "classic First Amendment retaliation" for Anthropic's public criticism of military AI uses.
US District Judge Rita Lin granted Anthropic a preliminary injunction blocking federal agencies from complying with directives issued by Donald Trump and Secretary of War Pete Hegseth that sought to blacklist the AI company [1]. The dispute erupted when the Pentagon demanded Anthropic remove AI safety guardrails preventing the military application of AI for mass surveillance and autonomous weapons [3].

In her 43-page opinion, Lin described the Department of War's effort to designate Anthropic as a supply chain risk as punishment for the company's "hostile manner through the press" [1]. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote [1]. The judge found that officials had no authority to take such extreme actions without considering less restrictive alternatives or offering evidence that Anthropic posed an urgent risk to national security.

The Department of War began using Anthropic's Claude chatbot in March 2025 and continued for a year without ever raising concerns that the company's terms limiting certain uses posed a national security risk [1]. The government thoroughly vetted Claude before implementation, praised Anthropic publicly, and planned to expand the partnership [1]. Defense employees accessing Claude through Palantir were required to accept the terms of a government-specific usage policy that Anthropic cofounder Jared Kaplan said "prohibited mass surveillance of Americans and lethal autonomous warfare" [2]. The amicable partnership only changed when the Department of War sought to contract directly with Anthropic and deploy Claude on a military platform with different terms [1].

On February 27, Donald Trump posted on Truth Social referencing "Leftwing nutjobs" at Anthropic and directed every federal agency to stop using the company's AI [2]. Pete Hegseth echoed this directive, announcing he would label Anthropic a supply chain risk, a designation previously reserved for foreign intelligence agencies, terrorists, and hostile actors, and never before applied to a domestic company [1].

Hegseth's post also stated that "No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" [2]. However, the government's own lawyers later admitted that the Secretary does not have the power to do that, agreeing with the judge that the statement had "absolutely no legal effect at all" [2].

Lin determined that what amounted to blacklisting Anthropic violated the company's First Amendment rights and denied it due process [3]. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Judge Lin wrote [3]. The judge noted that letters sent to congressional committees claimed less drastic steps were evaluated and deemed not possible, without providing any further details [2]. The government also claimed the supply chain risk designation was necessary because Anthropic could implement a "kill switch," but its lawyers later had to admit it had no evidence of that [2].

After the Department of War's actions, three trade deals were promptly cancelled, while other potential partners delayed talks [1]. Anthropic demonstrated it was already suffering irreparable harms that would only worsen, including losing potentially billions in private and government contracts the company expected to sign over the next five years [1].

Despite winning the preliminary injunction, Anthropic remains in a difficult position. The judge granted the government's request for an administrative stay, which delays the injunction from taking effect for seven days, giving the government time to seek an emergency stay from an appeals court [1]. The Trump administration filed a notice of appeal on Thursday [4]. Anthropic also has a second case against the designation pending in the U.S. Court of Appeals for the D.C. Circuit [4].

Under Secretary of War Emil Michael called Lin's order "a disgrace" and claimed it contained "factual errors" due to the judge's supposed rush to order the injunction [1]. Michael argued that disrupting Hegseth's directive could "disrupt" how US military operations are conducted [1]. However, Lin cited a brief filed in support of Anthropic from military leaders who warned that letting the directive stand "will materially detract from military readiness and operational safety" [1]. The case attracted support from unlikely bedfellows, including Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders, and a group of Catholic theologians [4].

What began as a contract dispute escalated into what observers describe as a culture war because the government disregarded existing processes and fueled conflict with social media posts that contradicted positions taken in court [2]. Meanwhile, OpenAI has struck a deal with the Pentagon to deploy its AI models on the military's classified network [3], positioning itself as an alternative partner willing to work within the government's preferred terms.