21 Sources
[1]
Anthropic Supply-Chain Risk Label Should Stay In Place, Appeals Court Says
Anthropic "has not satisfied the stringent requirements" to temporarily lose the supply-chain risk designation imposed by the Pentagon, a US appeals court in Washington, DC ruled on Wednesday. The decision is at odds with one issued last month by a lower court judge in San Francisco, and it wasn't immediately clear how the conflicting preliminary judgements would be resolved. The government sanctioned Anthropic under two different supply-chain laws with similar effects, and the San Francisco and Washington, DC courts are each ruling on only one of them. Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security. "Granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict," the three-judge appellate panel wrote on Wednesday in what they described as an unprecedented case. The panel said that while Anthropic may suffer financial harm from the ongoing designation, they did not want to risk "a substantial judicial imposition on military operations" or "lightly override" the military's judgements on national security. The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company's proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply-chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government. Anthropic spokesperson Danielle Cohen says the company is grateful the Washington, DC court "recognized these issues need to be resolved quickly" and remains confident "the courts will ultimately agree that these supply chain designations were unlawful." 
The Department of Defense did not immediately respond to a request for comment. The cases are testing how much power the executive branch has over the conduct of tech companies. The battle between Anthropic and the Trump administration is also playing out as the Pentagon deploys AI in its war against Iran. The company has argued it is being illegally punished for insisting that its AI tool Claude lacks the accuracy needed for certain sensitive operations such as carrying out deadly drone strikes without human supervision. Several experts in government contracting and corporate rights have told WIRED that Anthropic has a strong case against the government, but the courts sometimes refuse to overrule the White House on matters related to national security. Some AI researchers have said the Pentagon's campaign against Anthropic "chills professional debate" about the performance of AI systems. Anthropic has claimed in court that it lost business because of the designation, which government lawyers contend bars the Pentagon and its contractors from using the company's Claude AI as part of military projects. And as long as Trump remains in power, Anthropic may not be able to regain the significant foothold it held in the federal government. Final decisions in the company's two lawsuits could be months away. The court in Washington, DC is scheduled to hear oral arguments on May 19. The parties have revealed minimal details so far about how exactly the Department of Defense has used Claude or how much progress it has made in transitioning staff to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Donald Trump calls itself the Department of War, has said it has taken steps to ensure Anthropic can't purposely sabotage its AI tools during the transition.
[2]
Anthropic Fails for Now to Halt US Label as a Supply-Chain Risk
A federal appeals court declined, for now, to grant Anthropic PBC's request to pause a declaration by the Pentagon that the artificial intelligence company poses a risk to the US supply chain, even as plans for a broader government ban on its technology remain blocked by a California judge. In its decision Wednesday, the three-judge panel in Washington, DC, said it would work quickly to review the case because Anthropic is likely to "suffer some irreparable harm" as the legal action plays out. The court set oral arguments in the case for May 19. The ruling leaves in place a US Defense Department declaration that was the legal basis for the Trump administration's plan to sever all ties to the company, which claims it could face billions of dollars in lost revenue. The government announced the ban after a dispute with Anthropic over how its AI technology would be used by the military. "On one side is a relatively contained risk of financial harm to a single private company," the court wrote in explaining why it rejected the request for an immediate pause. "On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." Anthropic welcomed an expedited schedule for its lawsuit. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," the company said in a statement. The Claude chatbot maker has been fighting back on two fronts. It asked the appeals court on March 9 to review the Pentagon's declaration, calling it "arbitrary, capricious and an abuse of discretion." The same day, Anthropic sued in San Francisco federal court to challenge the broader ban. On March 27, a judge in that case blocked the government from following through while the lawsuit proceeds. The Trump administration is appealing. 
The Defense Department's declaration that Anthropic posed a supply-chain risk escalated a high-stakes dispute. The company demanded assurances that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment. The government rejected any restrictions, citing national security. In the San Francisco lawsuit, Anthropic claims it is being shut out for disagreeing with the administration. The company said the legal principles at stake affect every federal contractor whose views the government dislikes. The case is Anthropic v. US Department of War, 26-01049, US Court of Appeals, District of Columbia Circuit (Washington).
[3]
US court declines to block Pentagon's Anthropic blacklisting for now
NEW YORK, April 8 (Reuters) - A Washington, D.C., federal appeals court on Wednesday declined to block the Pentagon's national security blacklisting of Anthropic for now, a win for the Trump administration that comes after a federal court in California came to the opposite conclusion in a separate legal challenge by Anthropic. Anthropic, developer of the popular Claude AI assistant, alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated the company a national security supply-chain risk, a label that blocks Anthropic from Pentagon contracts and could trigger a government-wide blacklisting. Anthropic executives have said the designation could cost the company billions of dollars in lost business and reputational harm. A panel of judges of the U.S. Court of Appeals for the District of Columbia Circuit denied Anthropic's bid to pause the designation while the case plays out. The decision is not a final ruling. The lawsuit is one of two Anthropic filed over Hegseth's unprecedented move, which came after Anthropic refused to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons due to safety and ethics concerns. Hegseth issued orders designating Anthropic under two different laws, and Anthropic is challenging each of them separately. A California federal judge blocked one of the orders on March 26, saying the Pentagon appeared to have unlawfully retaliated against Anthropic for its views on AI safety. Anthropic's designation was the first time a U.S. company has been publicly designated a supply-chain risk under obscure government-procurement statutes aimed at protecting military systems from enemy sabotage or infiltration. In its lawsuits, Anthropic says the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. 
The company said it was not given a chance to dispute its designation, in violation of its Fifth Amendment right to due process. The lawsuits say the designations were unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department says that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said its decision stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. The D.C. case concerns a law that could lead to the blacklist widening to the broader civilian government following an interagency review process. The California case deals with a narrower statute that excludes Anthropic from Pentagon contracts related to military information systems. Reporting by Jack Queen in New York; Editing by Noeleen Walder and Cynthia Osterman
[4]
Anthropic loses appeals court bid to block Pentagon blacklisting temporarily
A federal appeals court in Washington, D.C., on Wednesday denied Anthropic's request for a stay in its lawsuit against the Department of Defense. The artificial intelligence startup sought the action to pause its blacklisting by the Pentagon and prevent further monetary and reputational harm as the case unfolds. The ruling comes after a judge in San Francisco federal court late last month, in a separate case, granted Anthropic a preliminary injunction that bars the Trump administration from enforcing a ban on the use of Claude. The DOD declared Anthropic a supply chain risk in early March, meaning that use of the company's technology purportedly threatens U.S. national security. The label requires defense contractors to certify that they don't use Anthropic's Claude artificial intelligence models in their work with the military. "In our view, the equitable balance here cuts in favor of the government," the appeals court said in its decision. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." Anthropic had asked the appeals court to review the Pentagon's determination and argued that it's a form of retaliation that is unconstitutional, arbitrary, capricious and not in accord with procedures required by law, according to a filing. The DOD relied on two distinct designations - 10 U.S.C. § 3252 and 41 U.S.C. § 4713 - to justify the supply chain risk action, and they have to be challenged in two separate courts. The 41 U.S.C. § 4713 designation falls under the purview of the appeals court in Washington, D.C. Anthropic has filed a separate, more wide-ranging lawsuit in San Francisco federal court, which oversees the 10 U.S.C. § 3252 designation. 
A judge on Thursday granted Anthropic a preliminary injunction in that case, writing that the government's position is "classic illegal First Amendment retaliation." -- CNBC's Dan Mangan contributed to this report.
[5]
Appeals court rebuffs Anthropic in latest round of its AI battle with the Trump administration
WASHINGTON (AP) -- A federal appeals court on Wednesday refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic in a decision that differed from the conclusions reached in another judge's ruling on the same issues. The U.S. Court of Appeals in Washington, D.C., rejected Anthropic's request for an order that, while the panel is still collecting evidence about the case, would shield the San Francisco company from the fallout stemming from a dispute over how the Pentagon could deploy its Claude chatbot in fully autonomous weapons and potential surveillance of Americans. But the setback in Washington came after Anthropic already had prevailed in a separate case focused on the same issues in San Francisco federal court. In that case, a judge forced President Donald Trump's administration to remove a label tainting the company as a national security risk. Anthropic filed the two separate lawsuits in San Francisco and the Washington appeals court last month, asserting the Trump administration was engaging in an "unlawful campaign of retaliation" because of its attempt to impose limits on how its AI technology can be deployed. The Trump administration blasted Anthropic as a liberal-leaning company trying to dictate U.S. military policy. In the San Francisco case, U.S. District Judge Rita Lin ruled that the Trump administration had overstepped its bounds by labeling Anthropic a supply chain risk unqualified to work with military contractors and issuing other directives that could cripple a company locked in a race for AI supremacy against rivals such as ChatGPT maker OpenAI and Google. That decision prompted the Trump administration to remove the stigmatizing labels from Anthropic and take other steps clearing the way for government employees and contractors to continue using Claude and other chatbots, according to a court filing made in San Francisco earlier this week. 
The appeals court in Washington didn't see things the same way, even though it conceded the company would "likely suffer some degree of irreparable harm" if it's deemed a supply chain risk. But the appeals court didn't see sufficient reason to issue its own order revoking the Trump administration's actions, partly because "the precise amount of Anthropic's financial harm is not fully clear." Further evidence is scheduled to be presented before the appeals court at a hearing on May 19. "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," Anthropic said in a statement. Matt Schruers, the CEO of the technology trade group Computer & Communications Industry Association, expressed worries that the conflicting court decisions issued so far in the standoff between Anthropic and the Trump administration will muddle the business landscape at a pivotal time. "The Pentagon's actions and the DC Circuit's ruling create substantial business uncertainty at a time when U.S. companies are competing with global counterparts to lead in AI," Schruers said.
[6]
Anthropic loses bid to pause Pentagon blacklisting as AI legal battle escalates
What just happened? Anthropic has suffered another setback in its escalating fight with the Pentagon. A federal appeals court in Washington, D.C., has refused to temporarily block the Defense Department from treating the AI firm as a supply chain risk while the case continues. The ruling keeps Anthropic locked out of DoD contracts for now, even though a separate federal court in California recently barred the Trump administration from enforcing a broader ban on the use of Claude. The split decisions leave Anthropic in an awkward position. It's still able to work with other government agencies, but effectively frozen out of Pentagon-related business. In its decision, the appeals court said the balance of harms favored the government. Judges described Anthropic's risk as primarily financial, while casting the Pentagon's position as part of wartime procurement during an active military conflict. The panel acknowledged the company would likely suffer some irreparable harm without relief, but said Anthropic had not shown that its free speech rights were being meaningfully "chilled" while the case plays out. The court did, however, say the matter should be expedited, suggesting the legal fight will move faster because of the damage Anthropic is likely to face in the meantime. The dispute stems from a breakdown in negotiations between Anthropic and the Pentagon over how Claude could be used in defense settings. Anthropic had pushed for assurances that its models would not be used for fully autonomous weapons or domestic mass surveillance, while the DoD wanted broader access across all lawful uses. 
When the two sides failed to reach an agreement, Defense Secretary Pete Hegseth publicly branded the company a supply chain risk before the department formalized the designation in early March. Defense contractors are now required to certify they are not using Anthropic's models in work tied to the military, though Claude remains available for non-DoD projects and other federal agencies. Anthropic has argued the move is retaliatory, unconstitutional, arbitrary, and procedurally unsound. Part of the confusion comes from the government relying on two separate legal authorities to justify its actions, forcing Anthropic to challenge them in different courts. That explains how the California court could block one aspect of the administration's restrictions while the Washington appeals court allowed the Pentagon blacklist to remain in place. Anthropic is reportedly the first American company to receive this kind of supply chain risk designation, a label more commonly associated with foreign adversaries. It's a dramatic turn of events for a company that signed a $200 million Pentagon contract last July and was once being positioned for deeper integration into the DoD's AI efforts.
[7]
Federal Court Denies Anthropic's Motion to Lift 'Supply Chain Risk' Label
The ruling was a setback for the artificial intelligence start-up in its battle with the Defense Department over the use of A.I. in warfare. A panel of federal judges on Wednesday denied a motion from Anthropic to stop the Department of Defense from labeling it a security risk, a setback for the artificial intelligence company in its battle with the Trump administration over how A.I. should be used in warfare. In a four-page ruling, the three-judge panel of the U.S. Court of Appeals for the District of Columbia Circuit said Anthropic had "not satisfied the stringent requirements" for a stay of the security risk label. "In our view, the equitable balance here cuts in favor of the government," the judges wrote, representing one of two federal courts where Anthropic sued the Pentagon. The other is in California. The ruling is not a final decision, and the case is set to continue. But it was an early win for the Trump administration, which has been fighting with Anthropic in the aftermath of a failed $200 million contract negotiation over the use of A.I. in classified systems. The Defense Department and Anthropic had disagreed over contract terms related to how the Pentagon could use the company's A.I., which led Defense Secretary Pete Hegseth to label the start-up a "supply chain risk." That designation is typically applied to foreign companies that pose national security concerns. Being labeled a supply chain risk effectively blacklists a company from doing business with U.S. government entities.
[8]
Judge blocks Trump crackdown on Anthropic, restoring Claude access
US court restores Claude access, but the battle is far from over
* Federal workers regain Claude access after court blocks controversial designation
* Judge calls government move unconstitutional retaliation against AI company
* Anthropic rejects military use, triggering federal access shutdown and backlash
Federal employees can now log back into Anthropic's Claude for Government service after a California federal judge blocked the Trump administration from designating the AI company as a supply chain risk. US District Judge Rita Lin issued a preliminary injunction, granting Anthropic's motion to prevent Defense Secretary Pete Hegseth and the administration from declaring the company a threat. Federal workers at agencies such as the Department of Health and Human Services received emails informing them that access to Claude has been restored, along with any previous conversation history and data.
Cause of the dispute
The conflict erupted in early 2026 after Anthropic refused to allow its Claude AI model to be used for developing lethal autonomous weapons or for mass surveillance of the US population, and stepped away from partnership discussions with the US military over those concerns. In response, the Trump administration designated Anthropic as a supply chain risk, a move that Anthropic CEO Dario Amodei described as "legally unsound." This decision by the Trump administration did not stop millions of users from signing up for Claude daily. The US government has never applied this designation to a domestic company, as it is typically directed at foreign intelligence agencies, terrorists, and other hostile actors. Judge Lin used striking language in her 43-page order granting the preliminary injunction. 
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," Lin wrote. She labeled the administration's actions as "classic First Amendment retaliation." Lin noted that the designation has never been applied to a domestic company and is directed principally at foreign intelligence agencies, terrorists, and other hostile actors. The Department of Defense, which the Trump administration has dubbed the Department of War, has appealed Lin's order to the US Court of Appeals for the Ninth Circuit. The administration did not ask the appellate court to stay the district court injunction, allowing it to go into effect. Anthropic is also asking the US Court of Appeals for the DC Circuit to issue an emergency stay of the Defense Department's supply chain designation. The company argues that the administration violated the First and Fifth Amendments of the Constitution. This preliminary injunction allows federal workers to access Claude again, but the legal fight is far from over. Via MLex
[9]
Anthropic suffers setback in Pentagon blacklisting fight
Driving the news: Anthropic sought the stay to fend off financial and reputational harm. The ruling comes after a federal judge in a San Francisco court last month granted the company a preliminary injunction keeping the administration from banning the use of Claude.
Why it matters: The split rulings from two different courts mean Anthropic still has battles left to fight over its supply-chain risk designation.
* The dispute centers on Pentagon restrictions on use of its Claude technology in classified settings.
What they're saying: "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson said.
* "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
* A source familiar with the case said the type of emergency relief Anthropic sought from the D.C. appeals court is rarely granted.
How it works: The San Francisco injunction means non-Pentagon agencies no longer have to terminate contracts with Anthropic.
[10]
Court rejects Anthropic appeal over US national security risk label
The debate started when Anthropic refused to give the US government unfettered access to its AI chatbot, Claude. A court in the United States has rejected American artificial intelligence (AI) company Anthropic's request to shield it from being labelled a supply chain risk by the country's government. The label has never before been applied to an American company. The Trump administration labelled the AI company a supply chain risk and ordered federal agencies to stop using Anthropic's AI assistant Claude in February, after the company refused to allow unrestricted military access to its model. This label blocks contractors who work with the Pentagon from using the company's AI models on Department of Defence contracts. The restrictions that are being disputed include the use of Claude for lethal autonomous weapons without human oversight and mass surveillance of Americans. In 2025, Anthropic signed a $200 million (€171.5 million) contract with the Pentagon to deploy its technology within the military's systems. Following that deal, the AI chatbot had been rolled out throughout the US government's classified information networks, deployed at national nuclear laboratories, and was doing intelligence analysis directly for the Department of Defence. This setback for Anthropic in Washington comes after the company won a preliminary ruling in a separate lawsuit focused on the same issues in a San Francisco court, which forced President Donald Trump's administration to remove the label. Anthropic filed the two lawsuits in San Francisco and Washington last month and accused the Trump administration of engaging in an "unlawful campaign of retaliation." In its March filing, the Department of Defence wrote that Anthropic might "attempt to disable its technology or preemptively alter the behaviour of its model" before or during "warfighting operation" if the company "feels that its corporate 'red lines' are being crossed." The panel at the D.C. Circuit Court of Appeals said it did not see any reason to revoke the Trump administration's actions because "the precise amount of Anthropic's financial harm is not clear." However, the appeals court will be hearing more evidence from this case in May. "We're grateful the court recognised these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," Anthropic said in a statement to the Associated Press.
[11]
US court expedites Anthropic's legal battle with Department of War
Washington (United States) (AFP) - A US appeals court on Wednesday denied Anthropic's request to put on hold a move by the Pentagon to label it a supply chain risk, but ordered the AI startup's legal battle with the Department of War to be put on a fast track. "On one side is a relatively contained risk of financial harm to a single private company," the three-member appellate panel here reasoned. "On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." The ruling stems from the Pentagon designating Anthropic, creator of the Claude AI model, as a national security supply chain risk -- a label typically reserved for organizations from unfriendly foreign countries. The AI startup sought a stay of the action in appellate court here and also sued the Department of War in federal court in Northern California. The appellate panel stated in its ruling that requiring the Department of War to prolong its use of Anthropic AI directly or through contractors "strikes us as a substantial judicial imposition on military operations." However, the appeals court agreed that Anthropic raised "substantial challenges" to the sanctions and ordered that proceedings in the underlying case be expedited. "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson told AFP. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI." In the suit filed in San Francisco, federal Judge Rita Lin temporarily froze the sanctions, reasoning that President Donald Trump's administration likely violated the law in blacklisting the AI powerhouse for expressing unease about the Pentagon's use of its technology. 
In her ruling, she said the government's designation of Anthropic as a supply chain risk was "likely both contrary to law and arbitrary and capricious." The dispute erupted in February after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems. The tech sector has largely supported Anthropic in the wake of the punitive measures.
[12]
US court declines to pause Anthropic ban, recommends case be expedited
A Washington DC appeals court has declined to pause the US administration's 'supply chain risk' designation of Anthropic, but recommended the case be expedited. Anthropic won its first round in court on March 26, when a district judge granted a temporary injunction against the US administration's decision to designate the Claude creator a "supply chain risk", something normally reserved for foreign actors. However, last night the Pentagon prevailed in the parallel Washington DC action, as the appeals court declined to pause the effective 'ban'. The court substantially sided with the US administration in its order, saying: "In our view, the equitable balance here cuts in favour of the government. On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict. For that reason, we deny Anthropic's motion for a stay pending review on the merits." However, the court also recognised the potential harms being done to Anthropic and recommended the case be expedited: "Nonetheless, because Anthropic raises substantial challenges to the determination and will likely suffer some irreparable harm during the pendency of this litigation, we agree with Anthropic that substantial expedition is warranted." That request to expedite had been made by Anthropic's legal team as an alternative should its bid for a stay fail, and the AI company welcomed that element of the order. "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson told siliconrepublic.com. 
"While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI." "Anthropic's petition raises novel and difficult questions, including what counts as a supply-chain risk under section 4713 and what qualifies as an urgent national-security interest justifying the use of truncated statutory procedures", the judgement also found, and that will be the fundamental question as the case proceeds. US district judge Rita F Lin, who granted the temporary injunction against the ban last month, had found that: "These broad measures do not appear to be directed at the government's stated national security interests. If the concern is the integrity of the operational chain of command, the Department of War [sic] could just stop using Claude. Instead, these measures appear designed to punish Anthropic." It's a view held by many. Anthropic drew the ire of the US administration after a standoff with the Pentagon, in which Anthropic refused to change its safeguards against using its AI for fully autonomous weapons or for mass surveillance of US citizens. This relatively ethical stance in the face of huge pressure from the US administration has earned the company many defenders and, indeed, a slew of new customers.

Project Glasswing

Anthropic again flexed its ethics and safety chops this week as it declined to release its powerful new Claude Mythos model to the public, as many fear the consequences of it falling into the hands of bad actors. Instead, its Project Glasswing will bring together leading businesses, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JP Morgan Chase, the Linux Foundation, Microsoft, Nvidia and Palo Alto Networks, allowing them to access the Mythos preview (released on April 7) to boost their cyber defences.
According to Anthropic, its unreleased Claude Mythos has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Anthropic's Mythos preview is significantly more capable at generating exploits: in its research, the company noted that Mythos developed working exploits in 181 of several hundred attempts, while Opus 4.6 had a near 0pc success rate. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," said Anthropic, which has promised to share learnings from Project Glasswing to benefit the wider industry.
[13]
Appeals court rejects Anthropic's bid to block Pentagon blacklisting - SiliconANGLE
A federal appeals court in Washington DC today rejected Anthropic PBC's request for a stay in its lawsuit against the Department of Defense. A panel of three judges said the artificial intelligence start-up had failed to meet the strict requirements for an emergency stay in the case. Anthropic has sued the DoD in two separate lawsuits after the Pentagon designated the company as a supply chain risk. In a separate but related case last month, a judge in San Francisco granted Anthropic a preliminary injunction temporarily blocking the supply chain risk designation. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," said Judge Rita F. Lin. Of the decision today, the panel wrote: "In our view, the equitable balance here cuts in favor of the government." It added, "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." The disagreement began earlier in the year during failed negotiations over a $200 million contract. The dispute centered on Anthropic's reluctance to allow the DOD to use its Claude system "for all lawful purposes." The company warned that such vague language could open the door to uses it opposes, including mass surveillance of U.S. citizens or autonomous weapons systems. With the courts divided, Anthropic is temporarily excluded from Pentagon deals but not from the wider government. Contractors are barred from using Claude in Defense Department work, though it remains permitted in non-DOD projects.
The panel acknowledged that Anthropic "will likely suffer some degree of irreparable harm absent a stay," but added that the company's interests seemed "primarily financial in nature." In relation to Anthropic's contention that the ban impinged on its right to free speech to criticize the government, the panel wrote, "Anthropic does not show that its speech has been chilled during the pendency of this litigation." While this is a setback for Anthropic, a spokesperson said the company is "confident the courts will ultimately agree that these supply chain designations were unlawful." Todd Blanche, the acting U.S. attorney general, writing on X, called today's decision a "resounding victory for military readiness." He added, "Our position has been clear from the start -- our military needs full access to Anthropic's models if its technology is integrated into our sensitive systems."
[14]
Anthropic Loses Bid To Block Pentagon AI Risk Label
"In our view, the equitable balance here cuts in favor of the government," said a panel of judges from the District of Columbia Court of Appeals. The US Court of Appeals for the District of Columbia Circuit has rejected AI company Anthropic's bid to pause the US Defense Department's designation of the firm as a national security supply chain risk. The three-judge panel denied the emergency motion for a stay on Wednesday, ruling that the government's interest in controlling how it secures AI technology during active military conflict outweighed any financial or reputational harm Anthropic may suffer from the label. The decision means that part of the Defense Department's official designation of Anthropic's products as a "supply-chain risk to national security" remains in place. This designation has never been applied to an American company before and also blocks contractors who work with the Pentagon from using Anthropic's AI models. It could set a chilling precedent for other tech companies that do not comply with government demands. "In our view, the equitable balance here cuts in favor of the government," wrote the three-judge panel. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." The dispute stems from a deal between the AI firm and the Pentagon in July 2025 on a contract to make Anthropic's AI model Claude the first large language model approved for use on classified networks. However, negotiations collapsed in February, with the government seeking to renegotiate and insisting that Anthropic allow military use of Claude without restrictions. Anthropic maintained that its technology should not be used for lethal autonomous weapons and mass domestic surveillance of Americans. 
US President Donald Trump ordered all federal agencies to stop using Anthropic products in late February, stating that the company had made a "disastrous mistake trying to strong-arm the Department of War." Anthropic sued the Trump administration in March in what it termed an "unlawful campaign of retaliation." In late March, the District Court for the Northern District of California ordered a preliminary injunction against the Pentagon over the designation and temporarily halted Trump's directive, branding it "Orwellian." However, because of the way federal procurement law is written, Anthropic had to challenge the designation on two separate legal tracks -- in a California district court on constitutional grounds and directly at the D.C. Circuit under the specific statute that authorized the designation. The ruling acknowledged that Anthropic will "likely suffer some degree of irreparable harm absent a stay," and stated that "substantial expedition is warranted." Acting US Attorney General Todd Blanche said on X that it was a "resounding victory for military readiness." "Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company."
[15]
Appeals Court Rebuffs Anthropic in Latest Round of Its AI Battle With the Trump Administration
WASHINGTON (AP) -- A federal appeals court on Wednesday refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic, in a decision that differed from the conclusions reached in another judge's ruling on the same issues. The U.S. Court of Appeals in Washington D.C. rejected Anthropic's request for an order that would shield the San Francisco company from the fallout stemming from a dispute over how the Pentagon could deploy its Claude chatbot in fully autonomous weapons and potential surveillance of Americans, while the panel is still collecting evidence about the case. But the setback in Washington came after Anthropic had already prevailed in a separate case focused on the same issues in San Francisco federal court. In that case, a judge forced President Donald Trump's administration to remove a label tainting the company as a national security risk. Anthropic filed the two separate lawsuits in San Francisco and the Washington appeals court last month, asserting the Trump administration was engaging in an "unlawful campaign of retaliation" because of the company's attempt to impose limits on how its AI technology can be deployed. The Trump administration blasted Anthropic as a liberal-leaning company trying to dictate U.S. military policy. In the San Francisco case, U.S. District Judge Rita Lin ruled that the Trump administration had overstepped its bounds by labeling Anthropic a supply chain risk unqualified to work with military contractors and issuing other directives that could cripple a company locked in a race for AI supremacy against rivals such as ChatGPT maker OpenAI and Google. That decision prompted the Trump administration to remove the stigmatizing labels from Anthropic and take other steps clearing the way for government employees and contractors to continue using Claude and other chatbots, according to a court filing made in San Francisco earlier this week.
The appeals court in Washington didn't see things the same way, even though it conceded the company would "likely suffer some degree of irreparable harm" if it's deemed a supply chain risk. But the appeals court didn't see sufficient reason to issue its own order revoking the Trump administration's actions, partly because "the precise amount of Anthropic's financial harm is not fully clear." Further evidence in the case is scheduled to be presented before the appeals court at a hearing on May 19. "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," Anthropic said in a statement. Matt Schruers, the CEO of the technology trade group Computer & Communications Industry Association, expressed worries that the conflicting court decisions issued so far in the standoff between Anthropic and the Trump administration will muddle the business landscape at a pivotal time. "The Pentagon's actions and the DC Circuit's ruling create substantial business uncertainty at a time when U.S. companies are competing with global counterparts to lead in AI," Schruers said.
[16]
Appeals court rejects Anthropic's bid to temporarily halt Pentagon designation
A federal appeals court has rejected Anthropic's bid to temporarily halt the Pentagon's labelling of the artificial intelligence company as a supply chain risk, finding the firm failed to meet the strict requirements for an emergency stay. The order, issued Wednesday evening by a three-judge panel of Washington, D.C.'s federal appeals court, blocked Anthropic's bid to pause the designation, but granted its request for expedition. Oral arguments are slated to begin May 19. The decision breaks from that of a federal judge in California, who temporarily blocked the supply chain risk designation in a ruling late last month. Anthropic filed suits in both California and D.C. contesting both the Pentagon's designation, which is typically reserved for foreign adversaries, and President Trump's directive for civilian agencies to stop using Anthropic's products. Anthropic demanded its technology not be used in fully autonomous lethal weapons or for the mass surveillance of Americans. The Pentagon has insisted it be allowed to use Claude for "all lawful uses." The appeals panel recognized Anthropic is likely to face "some degree" of irreparable harm without a stay, though the exact amount of Anthropic's financial harm is not "fully clear," the order stated. Anthropic's lawyers have argued the designation could cost the firm billions of dollars in revenue. Further, the judges suggested Anthropic actually benefitted financially given the surge in app store downloads amid its rift with the Pentagon. Meanwhile, the panel also recognized a stay would force the U.S. military to prolong relations with "an unwanted vendor of critical AI services" amid the ongoing military conflict with Iran. "In our view, the equitable balance here cuts in favor of the government," the panel said. "On one side is a relatively contained risk of financial harm to a single private company. 
On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." The panel consisted of two Trump appointees -- Judges Neomi Rao and Gregory Katsas -- along with Karen LeCraft Henderson, an appointee of former President George H.W. Bush. Acting Attorney General Todd Blanche lauded the D.C. Circuit order, calling it a "resounding victory for military readiness." "Our position has been clear from the start -- our military needs full access to Anthropic's models if its technology is integrated into our sensitive systems," Blanche wrote in a post on X. "Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company." A spokesperson for Anthropic said the company is "grateful the court recognized these issues need to be resolved quickly" and "confident the courts will ultimately agree that these supply chain designations were unlawful." "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," the spokesperson added.
[17]
US Court Declines to Block Pentagon's Anthropic Blacklisting for Now
(Corrects dateline to April 8. No changes to text) By Jack Queen NEW YORK, April 8 (Reuters) - A Washington, D.C., federal appeals court on Wednesday declined to block the Pentagon's national security blacklisting of Anthropic for now, a win for the Trump administration that comes after another appeals court came to the opposite conclusion in a separate legal challenge by Anthropic. Anthropic, developer of the popular Claude AI assistant, alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated the company a national security supply-chain risk, a label that blocks Anthropic from Pentagon contracts and could trigger a government-wide blacklisting. Anthropic executives have said the designation could cost the company billions of dollars in lost business and reputational harm. A panel of judges of the U.S. Court of Appeals for the District of Columbia Circuit denied Anthropic's bid to pause the designation while the case plays out. The decision is not a final ruling. The lawsuit is one of two Anthropic filed over Hegseth's unprecedented move, which came after Anthropic refused to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons due to safety and ethics concerns. Hegseth issued orders designating Anthropic under two different laws, and Anthropic is challenging each of them separately. A California federal judge blocked one of the orders on March 26, saying the Pentagon appeared to have unlawfully retaliated against Anthropic for its views on AI safety. Anthropic's designation was the first time a U.S. company has been publicly designated a supply-chain risk under obscure government-procurement statutes aimed at protecting military systems from enemy sabotage or infiltration. In its lawsuits, Anthropic says the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. 
The company said it was not given a chance to dispute its designation, in violation of its Fifth Amendment right to due process. The lawsuits say the designations were unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department says that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said its decision stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. The D.C. case concerns a law that could lead to the blacklist widening to the broader civilian government following an interagency review process. The California case deals with a narrower statute that excludes Anthropic from Pentagon contracts related to military information systems. (Reporting by Jack Queen in New York; Editing by Noeleen Walder and Cynthia Osterman)
[18]
Trump Administration Scores Key Win As Appeals Court Lets Pentagon Blacklist Anthropic -- Claude Parent Re
On Wednesday, a federal appeals court in Washington declined to temporarily block the Pentagon's decision to label Anthropic a national security risk.

Appeals Court Sides With Pentagon -- For Now

The development handed the Donald Trump administration a key interim victory in a closely watched legal fight over artificial intelligence and military use, Reuters reported. The ruling allows the designation to remain in effect while litigation continues, though it does not resolve the case on its merits.

Why The Pentagon Blacklisted Anthropic

Defense Secretary Pete Hegseth classified Anthropic as a supply-chain risk after the company refused to loosen safeguards on its Claude chatbot for uses such as surveillance or autonomous weapons. The government argues that those restrictions could create uncertainty in military operations and potentially hinder system reliability during critical missions.

Anthropic Pushes Back, Cites Constitutional Violations

Anthropic has challenged the move in two separate lawsuits, alleging the government retaliated against its stance on AI safety. The company claims the designation violates its First Amendment rights and was imposed without due process, breaching Fifth Amendment protections. It also argues that the decision lacks a factual basis and contradicts prior positive assessments of its technology by the military. In a statement to Benzinga following the Wednesday decision, the company said it remains "confident the courts will ultimately agree that these supply chain designations were unlawful." Anthropic also pointed to key portions of the court's order, noting that judges acknowledged the case raises "novel and difficult questions," including how supply-chain risk is defined and what constitutes an urgent national security interest. The company further highlighted that the court recognized its claims as substantial and said it is likely to face some irreparable harm while the case proceeds, warranting an expedited review.
Conflicting Court Signals Raise Stakes

The Washington ruling contrasts with a separate decision by a California federal judge, who blocked a related Pentagon order, citing concerns it may have been unlawful retaliation. In February 2026, Anthropic closed a $30 billion Series G funding round. The deal valued the company at $380 billion post-money.
[19]
Anthropic Back to Square One: Loses Appeals Court Bid to Block Pentagon Blacklisting
In an earlier instance, another court had actually granted an injunction to Anthropic in a similar case, but with this setback, the AI company finds itself where it started. Anthropic is back to square one in its legal battle with the Trump administration after a federal court in Washington yesterday denied a request to block its blacklisting by the Department of Defence while the original lawsuit challenging the sanction plays out. The latest ruling squares off against an earlier one by a judge at the San Francisco federal court, who late last month granted Anthropic a preliminary injunction barring the Trump administration from enforcing a ban on the use of its Claude AI models. This time, the appeals court felt that the equitable balance cut in favour of the government. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict. For that reason, we deny Anthropic's motion for a stay pending review on the merits," the order said. The split decision between the two courts means that Anthropic remains excluded from defence contracts, though it can continue working with other government agencies while its lawsuit plays out. Defence contractors too would be barred from using Claude in their work with the department, but can use it in other cases. Readers would recall that the Trump regime had declared Anthropic a supply chain risk early in March, meaning defence contractors must certify that they do not use Claude AI models in their work for the military. The move also resulted in Anthropic being booted out of the Pentagon, which replaced it with OpenAI in the $200 million contract.
The Washington judges agreed that Anthropic would suffer some harm, which they deemed primarily financial in nature, but felt the situation did not warrant an injunction. They also did not agree with the company's plea that the Department of Defence was blocking its right to free speech, saying Anthropic "does not show that its speech has been chilled during the pendency of this litigation." However, the court did seek substantial expedition of the original litigation. Responding to the ruling, an Anthropic spokesperson said in a statement that the company was "grateful the court recognized these issues need to be resolved quickly" and "confident the courts will ultimately agree that these supply chain designations were unlawful." "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Anthropic said in the statement.
[20]
Anthropic loses bid to temporarily block Pentagon blacklisting
A federal appeals court in Washington, DC, on Wednesday rejected Anthropic's request to temporarily block the Pentagon's blacklisting of its AI tech while its lawsuit against the Department of War plays out in court. The San Francisco-based AI firm is suing the department for designating it a supply-chain risk last month, alleging retaliation after Anthropic sought to prevent the government from using its tech for mass surveillance and weaponry. "In our view, the equitable balance here cuts in favor of the government," the appeals court said in its decision Wednesday. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." In a separate decision last month, a judge in San Francisco federal court granted Anthropic a preliminary injunction during its legal challenge. After the split court decisions, Anthropic is unable to work with the Department of War and defense contractors are barred from using its Claude chatbot in their work with the Pentagon. Anthropic can still work with other government agencies, and defense contractors are allowed to use Claude outside of their interactions with the Pentagon. In its ruling Wednesday, the appeals court said the company "will likely suffer some degree of irreparable harm absent a stay," but that its concerns "seem primarily financial in nature." Despite Anthropic's argument that the Pentagon's blacklisting restricts its free speech, the court wrote that "Anthropic does not show that its speech has been chilled during the pendency of this litigation." But the court also said "substantial expedition is warranted" in the case because of the financial harm Anthropic will face during ongoing litigation. 
"We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson told The Post in a statement. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI." Acting US Attorney General Todd Blanche, meanwhile, called the appeals court decision a "resounding victory" in a post on X late Wednesday. "Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company," he wrote. The Pentagon declined to comment. Anthropic signed a $200 million contract with the Department of War in July that made it the sole provider of AI models on the government's classified networks. But talks soured during contract negotiations as War Secretary Pete Hegseth blasted the firm for seeking exemptions on the use of its models for mass surveillance of citizens and weaponry, insisting that the Pentagon should have free rein over the tools "for all lawful purposes." Hegseth slapped Anthropic with a supply-chain risk label - a designation previously reserved for foreign businesses that present national security threats, like Chinese firm Huawei Technologies. If the designation remains in place, it will force all defense contractors to prove that they do not use Anthropic's AI models in their work with the military. Tensions heated up after a fiery 1,600-word missive from Anthropic CEO Dario Amodei bashing the Trump administration was leaked to the public last month. In it, the exec accused the Pentagon of targeting his company for not giving "dictator-style praise to Trump." 
Amodei apologized for the "tone" of his message, arguing his comments came hours after President Trump blasted Anthropic staff as "leftwing nut jobs" and Hegseth revealed his plans to label the company a supply-chain risk. After the Pentagon effectively blacklisted Anthropic, Sam Altman's OpenAI swooped in with a deal to provide AI services to the government. In his leaked memo, Amodei wrote that Anthropic was being punished because he didn't "donate to Trump," while "OpenAI/Greg have donated a lot," referring to OpenAI president Greg Brockman, the Information reported. Amodei - who donated to Democratic former Vice President Kamala Harris' failed presidential campaign - also mocked OpenAI's "straight up lies," adding that the platform's safeguards are "maybe 20% real and 80% safety theater." During a tech conference in March, Altman took a jab at Amodei, saying that it's "bad for society" if companies start abandoning their commitment to the democratic process because "some people don't like the person or people currently in charge."
[21]
US court declines to block Pentagon's Anthropic blacklisting for now
A federal appeals court in Washington, DC refused to block the Pentagon's designation of Anthropic as a supply-chain risk, creating a legal conflict with a San Francisco judge's opposite ruling. The AI company, caught in a dispute over autonomous weapons and surveillance restrictions, faces potential billions in lost revenue as two separate lawsuits unfold under different supply-chain laws.
A US Court of Appeals in Washington, DC on Wednesday denied Anthropic's request to pause the Pentagon's national security supply-chain risk designation, delivering a setback to the AI company in its escalating legal dispute with the Trump administration.[1][2] The three-judge appellate panel ruled that Anthropic "has not satisfied the stringent requirements" to temporarily lose the label, even while acknowledging the company would "likely suffer some degree of irreparable harm" from the ongoing designation.[1][5] The ruling creates immediate business uncertainty for the Claude AI developer, which has claimed it could face billions of dollars in lost revenue from the blacklisting.[2][3]
Source: AP
The Washington, DC appeals court ruling directly conflicts with a decision issued last month by US District Judge Rita Lin in San Francisco federal court, who found the Pentagon's decision appeared to constitute "classic illegal First Amendment retaliation".[4][5] The San Francisco judge ordered the supply-chain risk label removed, and the Trump administration initially complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the federal government.[1] The conflicting preliminary judgments stem from the government sanctioning Anthropic under two different supply-chain laws with similar effects, with each court ruling on only one of them.[1] Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security.[1]

The legal dispute erupted after Anthropic refused to allow the military to use Claude AI for US surveillance or autonomous weapons deployment due to AI safety and ethics concerns.[3] The company has argued it is being illegally punished for insisting that Claude lacks the accuracy needed for certain sensitive operations, such as carrying out deadly drone strikes without human supervision.[1] Defense Secretary Pete Hegseth issued orders designating Anthropic under 10 USC § 3252 and 41 USC § 4713, requiring defense contractors to certify they don't use Claude in their work with the military.[4] The Justice Department argues that Anthropic's refusal to lift the restrictions could cause uncertainty over how the Pentagon could use Claude and risk disabling military systems during operations.[3]
Source: NYT
In explaining the appeals court ruling, the panel emphasized military priorities during active conflict. "On one side is a relatively contained risk of financial harm to a single private company," the court wrote. "On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict".[2][4] The panel stated they did not want to risk "a substantial judicial imposition on military operations" or "lightly override" the military's judgments on national security.[1] The battle between Anthropic and the Trump administration is playing out as the Pentagon deploys AI in military contexts, including its war against Iran.[1]
In its lawsuits, Anthropic claims the government violated its right to free speech under the First Amendment by retaliating against its views on AI safety.[3] The company also alleges it was not given a chance to dispute its designation, in violation of its Fifth Amendment right to due process.[3] The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company's proposed limits on how its technology could be used and its public criticism of those restrictions.[1] Several experts in government contracting and corporate rights have indicated Anthropic has a strong case against the government, though courts sometimes refuse to overrule the White House on matters related to national security.[1]

The conflicting court decisions create substantial business uncertainty at a pivotal time when US companies compete globally to lead in AI development, according to Matt Schruers, CEO of the Computer & Communications Industry Association.[5] Some AI researchers have said the Pentagon's action against Anthropic "chills professional debate" about the performance of AI systems.[1] The court in Washington, DC is scheduled to hear oral arguments on May 19, with final decisions in the company's two lawsuits potentially months away.[1][2] Anthropic spokesperson Danielle Cohen says the company remains confident "the courts will ultimately agree that these supply chain designations were unlawful".[1] The parties have revealed minimal details about how the Department of Defense has used Claude or its progress transitioning staff to other AI tools from Google DeepMind, OpenAI, or others.[1]
Source: Reuters
Summarized by Navi
18 Mar 2026 • Policy and Regulation