85 Sources
[1]
Anthropic sues US over blacklisting; White House calls firm "radical left, woke"
Anthropic sued the Trump administration yesterday in an attempt to reverse the government's decision to blacklist its technology. Anthropic argues that it exercised its First Amendment rights by refusing to let its Claude AI models be used for autonomous warfare and mass surveillance of Americans and that the government blacklisted it in retaliation. "When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to 'IMMEDIATELY CEASE all use of Anthropic's technology' -- even though the Department of War had previously agreed to those same conditions," Anthropic said in a lawsuit in US District Court for the Northern District of California. "Hours later, the Secretary of War [Pete Hegseth] directed his Department to designate Anthropic a 'Supply-Chain Risk to National Security,' and further directed that 'effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.'" Anthropic said the First Amendment gives it "the right to express its views -- both publicly and to the government -- about the limitations of its own AI services and important issues of AI safety." Anthropic further argued that the process for designating it a supply chain risk did not comply with the procedures mandated by Congress. The supply chain risk designation is supposed to be used only to protect against risks that an adversary may sabotage systems used for national security, the lawsuit said. Trump's directive "requiring every federal agency to immediately cease all use of Anthropic's technology, and actions taken by other defendants in response to that directive, are outside any authority that Congress has granted the Executive," and violate the Fifth Amendment's due process clause, Anthropic said. 
Anthropic's lawsuit was filed against Hegseth, the Department of War (previously called the Department of Defense), and numerous other federal agencies. Anthropic also filed a motion for preliminary injunction and a second lawsuit asking for review in the US Court of Appeals for the District of Columbia Circuit.

White House: Anthropic is "radical left, woke company"

The Pentagon declined to comment. The White House responded by calling Anthropic a "radical left" and "woke" firm. "President Trump will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates," a White House spokesperson said in a statement provided to Ars. "The President and Secretary of War are ensuring America's courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders. Under the Trump Administration, our military will obey the United States Constitution -- not any woke AI company's terms of service." A brief supporting Anthropic was filed in the California federal court by the Foundation for Individual Rights and Expression, the Electronic Frontier Foundation, the Cato Institute, the Chamber of Progress, and the First Amendment Lawyers Association. The groups said that Pentagon retaliation against Anthropic will "silence future speech from those who fear the government attempting to harm their business or extinguish it entirely." Calling the government's actions "transparently retaliatory and coercive," the advocacy groups wrote that the court "need not guess at the government's retaliatory motives because the Pentagon has already announced them... Until recently, it was rare for government leaders to so openly and proudly boast about retaliating against someone for their protected speech. Now it is commonplace. 
Evidently only those who agree to be complicit in this administration's assertion of unfettered power are safe."

Google and OpenAI staff support lawsuit

Another brief supporting Anthropic was filed by various technical, engineering, and research employees of Google and OpenAI. Google is an investor in Anthropic. The Google and OpenAI employees wrote that "mass domestic surveillance powered by AI poses profound risks to democratic governance -- even in responsible hands." On the topic of autonomous weapon systems, they wrote that "current AI models are not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone, and the risks of their deployment for that purpose require some kind of response and guardrails." The Google and OpenAI employees said that in using the supply chain risk designation "in response to Anthropic's contract negotiations, [the Pentagon] introduces an unpredictability in our industry that undermines American innovation and competitiveness. It chills professional debate on the benefits and risks of frontier AI systems and various ways that risks can be addressed to optimize the technology's deployment." Anthropic CEO Dario Amodei explained the company's objections to certain AI uses in a February 26 post. "We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values," he wrote. Current law allows the government to "purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant," and "AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life -- automatically and at massive scale," Amodei wrote. 
CEO: Autonomous weapons too risky

Amodei expressed support for partially autonomous weapons like those used in Ukraine, but not for fully autonomous weapon systems "that take humans out of the loop entirely and automate selecting and engaging targets." He said that fully autonomous weapons "may prove critical for our national defense" eventually but that AI is not yet reliable enough to power them. "We will not knowingly provide a product that puts America's warfighters and civilians at risk," he wrote. "We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don't exist today." Trump responded with a Truth Social post on February 27. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump wrote. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY." Hegseth then wrote that "Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Hegseth said the military "must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic." Anthropic said later that day that it had engaged in months of negotiations with the government and would challenge any supply chain risk designation in court. "Designating Anthropic as a supply chain risk would be an unprecedented action -- one historically reserved for US adversaries, never before publicly applied to an American company... 
No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," Anthropic said.
[2]
OpenAI and Google employees rush to Anthropic's defense in DOD lawsuit | TechCrunch
More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic's lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply chain risk, according to court filings. "The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean. Late last week, the Pentagon labeled Anthropic a supply chain risk -- usually reserved for foreign adversaries -- after the AI firm refused to allow the Department of Defense to use its technology for mass surveillance of Americans or autonomously firing weapons. The DOD had argued that it should be able to use AI for any "lawful" purpose and not be constrained by a private contractor. The amicus brief in support of Anthropic showed up on the docket a few hours after the Claude maker filed two lawsuits against the DOD and other federal agencies. Wired was first to report the news. In the court filing, the Google and OpenAI employees make the point that if the Pentagon was "no longer satisfied with the agreed-upon terms of its contract with Anthropic," the agency could have "simply canceled the contract and purchased the services of another leading AI company." The DOD did, in fact, sign a deal with OpenAI within moments of designating Anthropic a supply chain risk -- a move many of the ChatGPT maker's employees protested. "If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the brief reads. "And it will chill open deliberation in our field about the risks and benefits of today's AI systems." The filing also affirms that Anthropic's stated red lines are legitimate concerns warranting strong guardrails. 
Without public law to govern AI use, it argues, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse. Many of the employees who signed the statement also signed open letters over the last couple of weeks urging the DOD to withdraw the label and calling on the leaders of their companies to support Anthropic and refuse unilateral use of their AI systems.
[3]
Trump Administration Won't Rule Out Further Action Against Anthropic
At Anthropic's first court hearing challenging sanctions imposed by the Trump administration, the AI tech startup asked the government to commit that it wouldn't levy additional penalties on the company. That didn't happen. "I am not prepared to offer any commitments on that issue," James Harlow, a Justice Department attorney, told US district judge Rita Lin over video conference on Tuesday. In fact, the government is gearing up to take another step designed to sideline the company from doing business with federal agencies. President Trump is currently finalizing an executive order that would formally ban usage of Anthropic tools across the government, according to a person at the White House familiar with the matter but not authorized to discuss it. Axios first reported on the plan. Tuesday's hearing stemmed from one of the two federal lawsuits Anthropic filed against the Trump administration on Monday, alleging that the government unconstitutionally designated it a supply-chain risk and turned it into a tech industry pariah. Billions of dollars in revenue for Anthropic is now at risk, with current customers and prospective ones dropping out of deals and demanding new terms, according to the company. Anthropic is seeking a preliminary court order suspending the risk designation and barring the administration from taking further punitive measures against the company. The court appearance on Tuesday was to decide on the schedule for a preliminary hearing, and Anthropic is eager for it to happen soon to prevent further harm to its business. Michael Mongan, an attorney for Anthropic at WilmerHale, told Lin he was less concerned about delaying it until April if the Trump administration could commit to not taking additional action. "The actions of defendants are causing irreparable injuries, and those injuries are mounting day by day," Mongan said. 
After Harlow declined, Lin moved up the date of the hearing to March 24 in San Francisco, though that timeline was still later than Anthropic wanted. "The case is quite consequential from both sides, and I want to make sure I'm deciding on an expedited record but also a full record," the judge said. Scheduling in the other case, which is in Washington, DC, is on hold while Anthropic pursues an administrative appeal to the Department of Defense, which is expected to fail on Wednesday. The months-long dispute between the Pentagon and Anthropic began when the AI startup refused to sign off on its current technologies being used by the military for any lawful purpose, which it fears could include broad surveillance of Americans and the launch of missiles without human supervision. The Defense Department contends usage decisions are its prerogative. Several attorneys with expertise in government contracts and the US Constitution believe the administration's action against Anthropic continues a pattern of abusing the law to punish perceived political enemies, including universities, media companies, and law firms (such as WilmerHale, the firm representing Anthropic). The experts believe Anthropic should prevail, but the challenge will be overcoming the deference that courts often give to national security arguments from the government, especially during times of war. "If this is a one-off, you might give the president some deference," says Harold Hongju Koh, a Yale Law School professor who worked in the Obama administration and has written about the Anthropic case. "But now, it's just unmistakable that this is just the latest in a chain of events related to a punitive presidency." David Super, a Georgetown University Law Center professor who studies the Constitution, says the provisions the Defense Department used to sanction Anthropic were designed to protect the country from potential sabotage by its enemies.
[4]
Anthropic sues Defense Department over supply chain risk designation | TechCrunch
Anthropic has made good on its promise to challenge the Department of Defense in court after the agency labeled it a supply chain risk late last week. The Claude-maker filed a complaint against the Department on Monday. The complaint comes after a weeks-long conflict between Anthropic and the DOD over whether the military should have unrestricted access to Anthropic's AI systems. Anthropic had two firm red lines: it didn't want its technology to be used for mass surveillance of Americans and didn't believe it was ready to power fully autonomous weapons with no humans making targeting and firing decisions. Defense Secretary Pete Hegseth argued that the Pentagon should have access to AI systems for "any lawful purpose." A supply chain risk label is usually reserved for foreign adversaries, and requires any company or agency that does work with the Pentagon to certify that it doesn't use Anthropic's models. Anthropic called the DOD's actions "unprecedented and unlawful" in a complaint filed in San Francisco federal court. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."
[5]
Anthropic Claims Pentagon Feud Could Cost It Billions
Anthropic executives allege that current customers and prospective ones have been demanding new terms and even backing out of negotiations since the US Department of Defense labeled the AI startup a supply-chain risk late last month, according to court papers that also revealed new financial details about the company. Hundreds of millions of dollars in expected revenue this year from work tied to the Pentagon is already at risk for Anthropic, the company's chief financial officer, Krishna Rao, wrote in a court filing on Monday. But if the government has its way and pressures a broad range of companies to stop doing business with the AI startup, regardless of any ties to the military, Anthropic could ultimately lose billions of dollars in sales, he stated. Its all-time sales, since commercializing its technology in 2023, exceed $5 billion, according to Rao. Anthropic's revenue exploded as its Claude models began outperforming rivals and showing advanced capabilities in areas such as generating software code. But the company spends heavily on computing infrastructure and remains deeply unprofitable. Rao specified that Anthropic has spent over $10 billion to train and deploy its models. Anthropic chief commercial officer Paul Smith provided several examples of partners who have privately raised concerns to the AI startup in recent days. He said a financial services customer paused negotiations over a $15 million deal because of the supply-chain label, and two leading financial services companies have refused to close deals valued together at $80 million unless they gain the right to unilaterally cancel their contracts for any reason. A grocery store chain canceled a sales meeting, citing the supply-chain risk designation, Smith added. "All have taken steps that reflect deep distrust and a growing fear of associating with Anthropic," Smith wrote. 
The executives' comments are part of statements from six Anthropic leaders in support of a preliminary order that would allow the San Francisco company to continue doing business with the Department of Defense until lawsuits about the supply-chain risk issue are resolved. Anthropic has sued the Trump administration in two courts. A lawsuit filed in San Francisco federal court on Monday alleges the government violated the company's free speech rights. A separate case filed Monday in the federal appeals court in Washington, DC, accuses the Defense Department of unfairly discriminating and retaliating against Anthropic. The company is seeking a hearing as soon as Friday in San Francisco for a temporary reprieve. The legal battle and sales fallout follow a weeks-long dispute between Anthropic and the Pentagon over the potential use of AI technologies for mass domestic surveillance and autonomous lethal weapons. Anthropic contends AI is not yet capable of safely undertaking the tasks, while the Pentagon wants the right to make that judgment on its own. By law, the supply-chain designation prevents a narrow set of companies that do business with the Pentagon from incorporating Anthropic into their systems. But Defense Secretary Pete Hegseth has cast a wider net. He posted on X late last month that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Rao wrote that the Pentagon reinforced the message by reaching out to several startups about their use of Claude, something he said he learned from an investor that Anthropic and the smaller companies all share. They "have grown worried and uncertain about their ability to use Claude," Rao wrote. The Pentagon declined to comment on the lawsuits and did not immediately respond to a request for comment about Rao's allegation about the outreach.
[6]
Employees across OpenAI and Google support Anthropic's lawsuit against the Pentagon
On Monday, Anthropic filed its lawsuit against the Department of Defense over being designated as a supply chain risk. Hours later, nearly 40 employees from OpenAI and Google -- including Jeff Dean, Google's chief scientist and Gemini lead -- filed an amicus brief in support of Anthropic's lawsuit, detailing their concerns over the Trump administration's decision and the technology's risks and implications. The news follows a dramatic few weeks for Anthropic, in which the Trump administration labeled the company a supply chain risk -- a designation typically reserved for foreign companies that the government deems a potential risk to national security in some way -- after Anthropic stood firm on two red lines regarding acceptable use cases for military use of its technology: domestic mass surveillance and fully autonomous weapons (or AI systems with the power to kill with no human involvement). Negotiations broke down, followed by public insults and other AI companies stepping in to sign contracts allowing "any lawful use" of their technology. The supply chain risk designation not only prevents Anthropic from working on military contracts, it also blacklists other companies if they use Anthropic products in their work for the Pentagon, forcing them to uproot Claude if they wish to maintain their lucrative contracts. Claude was the first model cleared for classified intelligence, however, and Anthropic's tools are already deeply integrated into the Pentagon's work -- so much so that just hours after Defense Secretary Pete Hegseth announced the designation, the U.S. military reportedly used Claude in the campaign that killed the leader of Iran, Ayatollah Ali Khamenei. The amicus brief seeks to make the points that Anthropic's supply chain risk designation "is improper retaliation that harms the public interest" and that the concerns behind Anthropic's red lines "are real and require a response." 
It also makes the point that Anthropic's two red lines are worth revisiting, stating that "mass domestic surveillance powered by AI poses profound risks to democratic governance -- even in responsible hands" and that "fully autonomous lethal weapons systems present risks that must also be addressed." The group behind the amicus brief described themselves as "engineers, researchers, scientists, and other professionals employed at U.S. frontier artificial intelligence laboratories." "We build, train, and study the large-scale AI systems that serve a wide range of users and deployments, including in the consequential domains of national security, law enforcement, and military operations," the group wrote. "We submit this brief not as spokespeople for any single company, but in our individual capacities as professionals with direct knowledge of what these systems can and cannot do, and what is at stake when their deployment outpaces the legal and ethical frameworks designed to govern them." On the domestic mass surveillance front, the group said that though data on American citizens exists everywhere in the form of surveillance cameras, geolocation data, social media posts, financial transactions, and more, "what does not yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus." Right now, they wrote, these data streams are siloed, but if AI were used to connect them, it could combine "face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously." 
When it comes to lethal autonomous weapons specifically, the group said that they can be unreliable in new or unclear conditions that don't align with the environment they were trained in -- meaning that they "cannot be trusted to identify targets with perfect accuracy, and they are incapable of making the subtle contextual tradeoffs between achieving an objective and accounting for collateral effects that a human can." Additionally, the group wrote, lethal autonomous weapons systems' potential for hallucination means that it's important for humans to be involved in the decision-making process "before a lethal munition is launched at a human target" -- especially since the system's chain of reasoning is often not available to operators and unclear even to the system's developers. The group behind the amicus brief wrote, "We are diverse in our politics and philosophies, but we are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions."
[7]
Microsoft: Anthropic Claude remains available to customers except the Defense Department | TechCrunch
Enterprises and startups that use Anthropic Claude through Microsoft's products need not fear that the model will be ripped from their reach, Microsoft has confirmed to TechCrunch and other publications. Microsoft is the first big tech company to offer assurance that Anthropic's models will remain available to its customers even though the Trump Administration's Department of War -- formerly known as the Department of Defense -- has escalated its feud with Anthropic. The Defense Department designated the American AI startup as a supply chain risk after the AI company refused to give it unrestricted access to its tech for applications the company said its AI could not safely support, such as mass surveillance and fully autonomous weapons. The supply-chain risk designation is typically reserved for foreign adversaries. For Anthropic, the designation means that the Pentagon can't use the company's products -- and also requires any company or agency that works with the Pentagon to certify that they don't use Anthropic's models, either. Anthropic has vowed to fight the designation in court. Microsoft sells an array of products, from Office to its cloud, to many federal agencies including the Defense Department. A Microsoft spokesperson said that the company will continue making Anthropic's models available within its own products and to Microsoft customers. "Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers -- other than the Department of War -- through platforms such as M365, GitHub, and Microsoft's AI Foundry, and that we can continue to work with Anthropic on non-defense related projects," the spokesperson said in an email. CNBC first reported on the comment. This echoes what Anthropic CEO Dario Amodei said in his statement vowing to fight the designation. 
"With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," Amodei said, adding, "Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts." In the meantime, Claude's consumer growth surge has continued after Anthropic refused to give in to the department's demands.
[8]
Anthropic Sues Pentagon, Argues Blacklisting Violates Free Speech
Anthropic is suing the Pentagon over the Trump administration's "supply chain risk" designation. On Monday, Anthropic urged the US District Court in Northern California to intervene, arguing that the White House resorted to "unlawful" retaliation after the San Francisco company refused to remove guardrails that prevent its AI technologies from being used for mass surveillance and autonomous weapons. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic wrote in one of the court documents. Anthropic is also calling on the court to "grant the emergency relief" and issue a temporary restraining order to block the supply chain risk designation as the case proceeds. The lawsuit argues that the Trump administration is essentially punishing the company for expressing its views on AI use, in violation of the First Amendment. "The government does not have to agree with those views. Nor does it have to use Anthropic's products. But the government may not employ 'the power of the State to punish or suppress [Anthropic's] disfavored expression,'" it says. The company also claims it was denied due process and that the White House sanctions have "irreparably" harmed Anthropic after President Trump ordered all federal agencies to ditch the company's AI products. "Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation," Anthropic adds. The Pentagon is expected to disagree. "WE will decide the fate of our Country -- NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about," President Trump said last month after Anthropic refused to loosen safeguards on its AI. 
Defense Secretary Pete Hegseth has said that the supply chain risk designation, imposed on Thursday, means that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." However, Anthropic's lawsuit notes the Pentagon's letter about the supply chain risk label "does not explain the scope of procurements covered by the Secretary's action." It's why Anthropic has downplayed the threat. CEO Dario Amodei said last week that "the vast majority of our customers are unaffected by a supply chain risk designation," arguing the blacklisting is narrow in scope and pertains only to defense contracts. Microsoft and Google, which also supply software to the US government, seem to agree, indicating they plan to continue offering Anthropic's Claude chatbot through their cloud services. Still, Anthropic's lawsuit says the supply chain designation has caused major reputational harm, along with millions in losses from military contracts. "The Challenged Actions also inflict immediate and unrecoverable revenue losses: Anthropic stands to lose the federal contracts it already has, as well as its prospects to pursue federal contracts in the future," the suit adds. Despite the hostilities, Anthropic notes that it's still in talks with the Pentagon. "I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible," Amodei wrote last week.
[9]
Anthropic sues Pentagon over 'supply chain risk' designation, citing free speech and due process violations -- company refused to allow its AI to be used for autonomous attacks, mass surveillance
The maker of Claude says the Dept. of War's supply chain risk designation was retaliation for its AI safety policies. Anthropic has filed two federal lawsuits against the Pentagon and other U.S. federal agencies, seeking to overturn the Department of War's decision to designate the AI company a "supply chain risk," a label that blocks Pentagon suppliers and contractors from using its Claude models, and that national security experts say has historically been reserved for foreign adversaries. The lawsuits, the first filed in the U.S. District Court for the Northern District of California and the second in the U.S. Court of Appeals for the D.C. Circuit, allege the Trump administration violated Anthropic's First Amendment and due process rights, according to Reuters. Anthropic is asking courts to vacate the designation, block its enforcement, and require federal agencies to withdraw directives to drop the company's tools. The company said the actions could jeopardize "hundreds of millions of dollars" in revenue in the near term. This dispute traces back to a contract renegotiation between Anthropic and the Department of War that collapsed in late February. The Pentagon wanted unrestricted access to Claude for "any lawful use," while Anthropic refused to remove two guardrails: a prohibition on using its models for fully autonomous weapons without human oversight, and a prohibition on mass domestic surveillance of U.S. citizens. Defense Secretary Pete Hegseth formally issued the supply chain risk designation on February 27; Anthropic was officially notified on March 3. President Trump separately directed all federal agencies to stop using Anthropic's technology via a Truth Social post that same day, with a six-month phase-out period. The Pentagon argued that private companies cannot dictate how the government uses technology in national security scenarios, and that Anthropic's restrictions could endanger American lives. 
Anthropic countered that current AI models are not reliable enough for fully autonomous weapons deployment, and that domestic surveillance at scale would violate fundamental rights. The fallout has had immediate competitive consequences, with OpenAI's Sam Altman controversially striking a new Pentagon deal within hours of Anthropic's new designation. Altman publicly stated that the Dept. of War shares OpenAI's principles on human oversight of weapons and opposition to mass surveillance. xAI, Elon Musk's AI company, is also understood to have since been cleared for use on classified Pentagon systems. Anthropic was previously the first AI lab permitted to operate on the Dept. of War's classified networks, and signed a contract worth up to $200 million with the department in July 2025. The Wall Street Journal has previously reported that Claude had been used in military operations, including intelligence assessments and target identification in the U.S.'s ongoing conflict with Iran, even after the Pentagon ousted the model. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in its filing with the U.S. District Court.
[10]
Anthropic Sues Defense Department Over Supply Chain Risk Label
Anthropic PBC sued the Defense Department for declaring that the artificial intelligence giant posed a risk to the US supply chain, after a dispute with the Pentagon over whether the technology would be used for mass surveillance and fully autonomous weapons. San Francisco-based Anthropic is challenging a decision by the department to shift its AI work to other providers, based on a risk designation typically reserved for companies from countries the US views as adversaries. "These actions are unprecedented and unlawful," the company said in a complaint filed in San Francisco federal court. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."
[11]
Anthropic Sues Department of Defense Over Supply-Chain Risk Designation
Anthropic filed a federal lawsuit against the US Department of Defense and other federal agencies on Monday, challenging its designation of the AI company as a "supply-chain risk." The Pentagon formally sanctioned Anthropic last week, capping a weeks-long, publicly aired disagreement over limits on use of its generative AI technology for military applications such as autonomous weapons. "We do not believe this action is legally sound, and we see no choice but to challenge it in court," Anthropic CEO Dario Amodei wrote in a blog post on Thursday. The lawsuit, which was filed in a federal court in California, requested that a judge reverse the designation and stop federal agencies from enforcing it. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in the filing. "Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation." The AI startup, which develops a suite of AI models called Claude, is facing the possibility of losing hundreds of millions of dollars in annual revenue from the Pentagon and the rest of the US government. It also may lose the business of software companies that incorporate Claude into services they sell to federal agencies. Several Anthropic customers have reportedly said they are pursuing alternatives due to the Defense Department's risk designation. Amodei wrote that the "vast majority" of Anthropic's customers will not have to make changes. The US government's designation "plainly applies only to the use of Claude by customers as a direct part of contracts with the" military, he said. General use of Anthropic technologies by military contractors should be unaffected. The Department of Defense, which also goes by the Department of War, and the White House did not immediately respond to requests for comment about Anthropic's lawsuit. 
Attorneys with expertise in government contracting say Anthropic faces a difficult battle in court. The rules that authorize the Department of Defense to label a tech company as a supply-chain risk don't allow for much in the way of an appeal. "It's 100 percent in the government's prerogative to set the parameters of a contract," says Brett Johnson, a partner at the law firm Snell & Wilmer. The Pentagon, he says, also has the right to express that a product of concern, if used by any of its suppliers, "hurts the government's ability to effectuate its mission." Anthropic's best chance of success in court could be proving it was singled out, Johnson says. Soon after Defense Secretary Pete Hegseth announced that he was designating Anthropic a supply-chain risk, rival OpenAI announced it had struck a new contract with the Pentagon. That could be instrumental to Anthropic's legal argument if the company can demonstrate it was seeking similar terms as the ChatGPT developer. OpenAI said its deal included contractual and technical means of assuring its technology would not be used for mass domestic surveillance or to direct autonomous weapons systems. It added that it opposed the action against Anthropic and did not know why its rival could not reach the same deal with the government. Hegseth has prioritized military adoption of AI technologies, with posters recently seen in the Pentagon showing him pointing, captioned, "I want you to use AI." The dispute with Anthropic kicked up in January after Hegseth ordered several AI suppliers to agree that the department was free to use their technologies for any lawful purpose. Anthropic, which is the only company currently providing AI chatbot and analysis tools for the military's most sensitive use cases, pushed back. It contends that its technologies are not yet capable enough to be used for mass domestic surveillance of Americans or fully autonomous weapons. 
Hegseth has said Anthropic wants veto power over judgments that should be left to the Defense Department.
[12]
Explainer: Anthropic's case against the government: what the AI company says happened
March 9 - Anthropic sued the U.S. government on Monday, escalating a dispute the AI company frames as retaliation for refusing to remove safety limits on its Claude model. The Amazon-backed company said it was willing to work with the military. Just not on any terms. It has also filed a related case in the U.S. Court of Appeals for the D.C. Circuit challenging a separate legal authority the government invoked. The following account is based on allegations made by Anthropic in its lawsuit. WHAT ANTHROPIC SAYS THE DISPUTE IS ABOUT Anthropic said it spent years building Claude into the government's most widely deployed frontier AI model, including on classified military networks, developing a specialized "Claude Gov" version and loosening many of its standard restrictions to accommodate national security work. The conflict began in the fall of 2025 during negotiations over the Pentagon's GenAI.mil platform, when the Department of Defense demanded Anthropic abandon its usage policy entirely and allow Claude to be used for, in the government's words, "all lawful uses". Anthropic said it largely agreed, except on two points it considered non-negotiable: it would not allow Claude to be used for lethal autonomous warfare without human oversight or for mass surveillance of Americans. The company says Claude has not been tested for those uses and cannot perform them safely. It said it also offered to help transition the work to another provider if no agreement could be reached. Pentagon officials have offered a different account of how the dispute began. The department's chief technology officer said publicly that tensions escalated after a U.S. raid in Venezuela, when an Anthropic executive called a counterpart at Palantir (PLTR.O) to ask whether Claude had been used in the operation. That account does not appear in Anthropic's complaint. 
FROM ULTIMATUM TO ALL-OUT BAN Secretary of Defense Pete Hegseth met with Anthropic CEO Dario Amodei on February 24, presenting an ultimatum: comply within four days or face one of two punishments - compulsion under the Defense Production Act, or expulsion from the defense supply chain as a 'national security risk'. Amodei rejected the demand publicly on February 26. The next day, before a 5:01 p.m. Eastern deadline had expired, President Donald Trump posted a directive on Truth Social ordering every federal agency to immediately cease all use of Anthropic's technology. In the social media post, the president characterized Anthropic as a "RADICAL LEFT, WOKE COMPANY". Hours later, Hegseth announced on X that Anthropic was a "Supply-Chain Risk to National Security" and that no military contractor or supplier could do commercial business with the company. Agencies fell in line quickly. The General Services Administration terminated Anthropic's government-wide contract. Treasury, State, and the Federal Housing Finance Agency publicly cut ties. The Anthropic complaint alleges the Pentagon launched a major air attack on Iran using Anthropic's tools hours after the ban. White House spokeswoman Liz Huston said the administration would not allow a company to "jeopardize our national security by dictating how the greatest and most powerful military in the world operates," adding that U.S. forces would "never be held hostage by the ideological whims of Big Tech leaders" and would follow the Constitution, "not any woke AI company's terms of service". WHY ANTHROPIC DECIDED TO SUE Anthropic argues the supply chain designation has no factual basis. The company points to its FedRAMP authorization, active security clearances, and years of government praise, including from Hegseth, who called Claude's capabilities "exquisite" at the February 24 meeting. 
Two senior Pentagon officials subsequently told reporters there was "no evidence of supply-chain risk" and that the designation was "ideologically driven". Anthropic raises five legal claims, arguing the actions violated the Administrative Procedure Act, the First Amendment, the Fifth Amendment, the president's statutory authority, and the APA's prohibition on unauthorized agency sanctions. Reporting by Akash Sriram in Bengaluru.
[13]
Anthropic is suing the Department of Defense
Anthropic has sued the US government over its designation as a supply-chain risk, the latest move in a weekslong battle between it and the Pentagon over the acceptable use cases for its military AI tech. The suit, filed in a California district court, accuses the Trump administration of illegally punishing the company for setting "red lines" on mass domestic surveillance and fully autonomous weapons. "The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance -- AI safety and the limitations of its own AI models -- in violation of the Constitution and laws of the United States," the suit reads. "Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation."
[14]
Microsoft, Google Won't Cut Ties With Anthropic Amid Pentagon Feud
Despite the Trump administration's recent clampdown on AI firm Anthropic, Microsoft and Google have confirmed that their customers will still be able to access its tools, like the chatbot Claude, via their services. The news follows the Pentagon dubbing Anthropic "a supply chain risk" earlier this week, a designation usually only applied to companies from foreign nations considered adversaries, like China's Huawei. The blacklist came after Anthropic CEO Dario Amodei publicly refused to give the US military "unrestricted access" to its AI systems. "Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic," said Defense Secretary Pete Hegseth in a post on X. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." In particular, the CEO objected to the use of Anthropic's AI systems for mass domestic surveillance -- for example, monitoring the internet history or personal movements of Americans -- or for fully autonomous weapons, such as a missile that aims and fires at its target without human involvement. The AI firm, which competes with OpenAI, has made it clear that it expects to fight the administration's decision in court, questioning whether it is lawful. A Microsoft spokesperson told CNBC that its lawyers had "studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers, other than the Department of War, through platforms such as M365, GitHub, and Microsoft's AI Foundry." In addition, the spokesperson confirmed that it can "continue to work with Anthropic on non-defense-related projects." 
Google told CNBC that customers would continue to be able to use Anthropic's products via its platforms like Google Cloud, and that the administration's stance does not preclude it from working with Anthropic on non-defense-related projects. The confirmations from Big Tech come after Anthropic's Amodei promised last week that the "vast majority" of its customers are "unaffected by a supply chain risk designation." Plenty of other tech firms have praised Anthropic's recent stance on how its technology is used. Almost 500 Google employees and another 80 OpenAI staffers have signed an open letter in support of the company, while the company's flagship app Claude has also had an impressive boost to downloads recently amid the controversy.
[15]
Anthropic was the Pentagon's choice for AI. Now it's banned and experts are worried
Last August, Pentagon technology chief Emil Michael, a former Uber executive and attorney, took on the added role of overseeing the Defense Department's artificial intelligence portfolio. A month earlier, Anthropic had been awarded a $200 million DOD contract that expanded its work with the agency. "I said, 'I just want to see the contracts,'" Michael told the All-In Podcast on Friday, reflecting on his early days managing the AI portfolio. "You know, the old lawyer in me." Michael's request kicked off a months-long review process that culminated in the Defense Department banning Anthropic's technology, leaving the military without its hand-picked AI models to operate in the most sensitive environments. In an extraordinary move, the DOD designated Anthropic a supply chain risk, a label that's historically only been applied to foreign adversaries. It will require defense vendors and contractors to certify that they don't use the company's models in their work with the Pentagon. Anthropic sued the Trump administration on Monday, calling the government's actions "unprecedented and unlawful," and claiming that they are "harming Anthropic irreparably," putting hundreds of millions of dollars worth of contracts in jeopardy. The DOD's sudden reversal came as a shock to many officials in Washington who viewed Anthropic's models as superior -- they were the first to be deployed in the agency's classified networks -- and championed the company's ability to integrate with existing defense contractors like Palantir. The decision was all the more puzzling since the Trump administration had threatened during negotiations to invoke the Defense Production Act, which could have forced Anthropic to grant the military access to its technology. "I don't know how those two things can both be true in reality," said Mark Dalton, a retired Navy rear admiral who now leads technology and cybersecurity policy at R Street, a think tank in Washington, D.C. 
"Something is so necessary that you need to invoke DPA and so harmful that you put a designation on it that's reserved for foreign adversaries." Defense experts like Dalton expressed concern about the government's decision. Not only does it set a troubling precedent, they argue, but it also means the administration is banishing a key technology vendor that's been lauded for its diligence with respect to AI safety, tough rhetoric against China and its entrepreneurial chops, becoming one of the fastest-growing tech startups in the U.S.
[16]
Anthropic sues US government over supply chain risk designation
According to Reuters, Anthropic has filed a lawsuit to prevent the Pentagon from adding the company to a national security blocklist. This comes days after the Department of Defense sent a letter to Anthropic confirming the company was labeled a supply chain risk; at the time CEO Dario Amodei had all but guaranteed Anthropic would fight back with legal action. The lawsuit claims the designation was unlawful and violated First Amendment free speech rights as well as due process. "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in a statement published by Reuters. Engadget received the following statement from an Anthropic spokesperson: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners. We will continue to pursue every path toward resolution, including dialogue with the government." The lawsuit comes after several weeks of back-and-forth between the AI company and the government. In late February, news broke that the Department of Defense and Defense Secretary Pete Hegseth were pressuring Anthropic to remove certain safeguards from its AI systems, but Amodei made it clear the company would refuse to allow its model to be used for mass surveillance or development of autonomous weapons. On the February 27 deadline, Amodei refused to budge, leading Hegseth to threaten the company with the supply chain risk designation; he also said the US government would cancel its $200 million contract with the company. The same day, President Trump ordered all federal agencies to cease using Anthropic as well.
[17]
Anthropic sues Trump administration seeking to undo "supply chain risk" designation
Anthropic is suing the Trump administration, asking federal courts to reverse the Pentagon's decision designating the artificial intelligence company a "supply chain risk" over its refusal to allow unrestricted military use of its technology. Anthropic filed two separate lawsuits Monday, one in California federal court and another in the federal appeals court in Washington, D.C., each challenging different aspects of the Pentagon's actions against the company. The Pentagon last week formally designated the San Francisco tech company a supply chain risk after an unusually public dispute over how its AI chatbot Claude could be used in warfare. The lawsuits aim to undo the designation and block its enforcement.
[18]
The US military is still using Claude -- but defense-tech clients are fleeing | TechCrunch
The aftermath of Anthropic's dispute with the Department of Defense has left the company in an awkward place -- both actively in use as part of the ongoing conflict between the U.S. and Iran, and decoupling from many of its clients in the defense industry. Part of the confusion is the overlapping and contradictory restrictions made by the U.S. government. President Trump has directed civilian agencies to discontinue use of Anthropic products, but the company was given six months to wind down its operations with the Department of Defense. The next day, the U.S. and Israel launched a surprise attack on Tehran, sparking a continuing conflict before Trump's directive could be fully executed. The result is that, as the U.S. continues its aerial attack on Iran, Anthropic models are being used for many targeting decisions. And while Secretary of Defense Pete Hegseth has pledged to designate the company as a supply-chain risk, no official steps have been taken to that end, so there are no legal barriers to the use of the system. An article in The Washington Post on Wednesday unearthed new details on how Anthropic's systems are being used in conjunction with Palantir's Maven system. As Pentagon officials planned the strikes, the systems "suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance," the Post reports. The article characterized the system's function as "real-time targeting and target prioritization." At the same time, many companies involved in the defense industry have already replaced Anthropic models with competitors. Lockheed Martin and other defense contractors began swapping out the company's models this week, according to a Reuters report. 
Many subcontractors are caught in a similar bind: a managing partner at J2 Ventures told CNBC that 10 of his portfolio companies "have backed off of their use of Claude for defense use cases and are in active processes to replace the service with another one." The biggest open question is whether Hegseth will make good on the supply-chain risk designation, which would likely result in a heated legal case. But in the meantime, one of the leading AI labs is quickly being partitioned out of military tech -- even as it's used in an active war zone.
[19]
Anthropic sues US government for calling it a risk
Artificial intelligence (AI) firm Anthropic has filed a first-of-its-kind lawsuit against the US government over claims that it is a "supply chain risk". The AI firm's chief executive Dario Amodei and Defence Secretary Pete Hegseth have been publicly rowing over the company's refusal to allow the military unfettered use of its AI tools. The Pentagon retaliated by making Anthropic the first US company to be labelled a "supply chain risk", but Anthropic said in its lawsuit on Monday against a list of US government agencies that this was unlawful and unprecedented. The BBC has contacted the US Department of Defense for comment.
[20]
Anthropic Unveils Amazon-Inspired Marketplace for AI Software
Anthropic faces uncertainty due to a standoff with the Pentagon, which has deemed the company a supply-chain risk, threatening its work with the Pentagon and partnerships with other companies on defense work. Anthropic PBC is launching a new platform for its corporate customers to purchase third-party software, broadening the AI developer's offerings at a time when its business faces new uncertainty from a standoff with the Pentagon. The company said Friday that Anthropic Marketplace will let its customers more easily purchase a mix of software applications that use Anthropic's models, with options including services from Snowflake Inc., Harvey and Replit Inc. The OpenAI rival will not take a cut of these purchases and will allow its customers to use some of their committed annual spending on Anthropic's own services toward third-party tools, a model it likens to software marketplaces from Amazon.com Inc. and Microsoft Corp. Anthropic has been working to sell AI software to a broader mix of industries and firmly establish itself as a key pillar for how companies get work done. It's now confronting a new set of risks after a feud over AI guardrails culminated in the Defense Department deeming Anthropic a supply-chain risk, a designation usually reserved for foreign adversaries. The move threatens Anthropic's work with the Pentagon as well as its partnerships with other companies on their defense work. Anthropic has said it intends to challenge the decision in court. "This is something we're talking a lot with our customers about," Kate Jensen, Anthropic's head of Americas, said on Wednesday about the Pentagon's designation. "The good news here is, for the most part, we expect that it will be business as usual for the vast majority of our customers." 
On Thursday, Anthropic Chief Executive Officer Dario Amodei said the government restrictions are narrowly tailored enough to keep it from affecting other Anthropic business that's unrelated to specific Pentagon contracts. A Microsoft spokesperson said the company had concluded that it can continue to work with Anthropic on non-defense projects. Still, significant uncertainty remains for businesses that work with the Pentagon and Anthropic. "I think we're all figuring out exactly what the outcome here will be," Jensen said.
[21]
Anthropic sues the US government over its Pentagon blacklist
The AI company filed two federal lawsuits on Monday, arguing the Trump administration's 'supply chain risk' designation is unconstitutional retaliation for protected speech. There is a phrase in Anthropic's court filing that sets the tone for everything that follows: "Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation." It is the language of a company that believes it is not simply fighting a contract dispute, but a constitutional one. On Monday, the San Francisco-based AI company filed two federal lawsuits against the Trump administration, targeting the Pentagon's decision last week to formally designate Anthropic a "supply chain risk to national security", a label that has historically been reserved for companies tied to foreign adversaries such as China and Russia. It is believed to be the first time the designation has been applied to an American company. The first lawsuit was filed in the US District Court for the Northern District of California. It asks a judge to vacate the designation and grant an immediate stay while the case proceeds. A second, shorter suit was filed in the US Court of Appeals for the District of Columbia Circuit, targeting a separate statute the government invoked that can only be challenged in that jurisdiction. Both cases make substantially the same argument: that the administration acted unlawfully, without proper statutory authority, and in violation of Anthropic's First Amendment rights. More than a dozen federal agencies are named as defendants, including the Department of Defence, the Treasury, the State Department, and the General Services Administration. The legal action is the culmination of a two-week standoff that escalated with unusual speed into one of the more remarkable confrontations between a technology company and the US government in recent memory. 
The dispute centres on two conditions Anthropic has insisted on in its contracts with the Pentagon: that its Claude AI system not be used for mass domestic surveillance of American citizens, and that it not be used to power fully autonomous weapons, systems capable of targeting and firing without human authorisation. The Pentagon, which has been using Claude on classified networks since the company became the first AI lab to achieve that clearance, demanded that any renewed contract drop these restrictions and grant the military use of Claude for "all lawful purposes." Anthropic refused. What followed was a sequence of events that proceeded with striking speed. On 27 February, President Trump posted on Truth Social calling Anthropic a "radical left, woke company" and directing every federal agency to "immediately cease" all use of its technology. Within hours, Defence Secretary Pete Hegseth announced on X that he was designating Anthropic a supply chain risk, meaning no contractor, supplier, or partner doing business with the US military could conduct any commercial activity with the company. The formal letter confirming the designation arrived on 3 March, five days after the deadline Anthropic had been given to agree to the Pentagon's terms. The practical scope of the designation turned out to be narrower than Hegseth's initial announcement implied. Anthropic CEO Dario Amodei said in a statement last Thursday that the relevant statute limits the designation's reach to the direct use of Claude in Pentagon contracts; it cannot, Amodei argued, be used to sever all commercial relationships between defence contractors and the company. Microsoft, Google, and Amazon all reviewed the designation and reached the same conclusion, issuing statements confirming that Claude would remain available to their customers for work unrelated to defence contracts. Hegseth had explicitly said the opposite in his original post. The economic stakes are nonetheless substantial. 
In declarations accompanying Monday's filings, Anthropic executives laid out the damage in granular terms. Chief Financial Officer Krishna Rao warned the court that if the designation were allowed to stand and customers took a broad reading of its scope, it could reduce Anthropic's 2026 revenue by "multiple billions of dollars", an impact he described as "almost impossible to reverse." Chief Commercial Officer Paul Smith cited a specific example: one partner with a multi-million-dollar annual contract had already switched to a rival AI model, eliminating an anticipated revenue pipeline of more than $100 million; negotiations with financial institutions worth roughly $180 million combined had also been disrupted. The complaint itself makes two distinct legal arguments. The first is a First Amendment claim: that the administration's actions punish Anthropic for its public advocacy around AI safety, its position on autonomous weapons and domestic surveillance, which constitutes protected speech. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the filing states. The second argument challenges the statutory basis of the designation, invoking 10 USC 3252, the procurement law the Pentagon relied upon. Anthropic argues the statute requires the government to use "the least restrictive means" to protect the supply chain, not deploy it as a punitive instrument against a domestic company over a policy disagreement. The Pentagon's position is that the dispute is fundamentally about operational control rather than speech. Pentagon officials have argued that a private contractor cannot insert itself into the chain of command by restricting the lawful use of a critical capability, and that the military must retain full discretion over how it deploys technology in national security scenarios. 
In an indication that the designation was not straightforwardly about security, a Pentagon official was quoted in Anthropic's court filing as saying the government intended to "make sure they pay a price" for the company's refusal, language Anthropic's lawyers have flagged as evidence of improper motivation. The case has drawn an unusual show of solidarity from Anthropic's direct competitors. A group of 37 researchers and engineers from OpenAI and Google DeepMind, including Google's chief scientist Jeff Dean, who signed in a personal capacity, filed an amicus brief on Monday supporting the lawsuit. The brief argues that the designation "chills professional debate" about AI risks and undermines American competitiveness. "By silencing one lab," the researchers wrote, "the government reduces the industry's potential to innovate solutions." The filing is notable given that OpenAI struck a new deal with the Pentagon within hours of the Trump administration's order, a move that drew sharp criticism from OpenAI employees and that OpenAI CEO Sam Altman later acknowledged looked "sloppy and opportunistic." Legal observers have been sceptical that the designation will survive judicial scrutiny. Paul Scharre, a former Army Ranger and now executive vice president of the Center for a New American Security, told Breaking Defense that Hegseth's initial characterisation of the ban simply exceeded what the supply chain risk statute permits, and that even the narrower formal designation would likely struggle in court, given the law's requirement for the least restrictive means. Procurement laws passed by Congress, Anthropic argues in its filings, do not give the Pentagon or the president authority to blacklist a company over a policy disagreement. A first hearing could take place in San Francisco as early as this Friday, according to reports. Anthropic has asked for a temporary order that would allow it to continue working with military contractors while the legal case unfolds. 
The DoD said it does not comment on litigation. Among the contradictions the complaint highlights: the military reportedly continued to use Claude during active combat operations in Iran, after the ban had been announced. A six-month phaseout was also ordered simultaneously with an immediate prohibition. And the company retains active FedRAMP authorisation and facility and personnel security clearances that would ordinarily be incompatible with a national security risk finding. None of these inconsistencies have been publicly addressed by the government. Whatever the court decides, the case has already set a precedent of a different kind: a major AI company, backed by researchers at its own rivals, publicly litigating the government's right to weaponise procurement law against a domestic company for taking a public stance on how its technology should and should not be used. The outcome could determine, as Anthropic's complaint puts it, whether any American company can "negotiate with the government" without risking its existence.
[22]
Employees of Google and OpenAI Just Filed a Legal Brief in Support of Anthropic
Sometimes enemies can be friends, especially when their interests align. Anthropic's legal fight against the federal government just received support from rival players in the AI game in the form of an amicus curiae brief. Earlier today, Anthropic filed a pair of lawsuits contesting the federal government's legal authority to brand the AI company a "supply chain risk to national security" and prevent major firms from working with it. The amicus brief filed afterward has 37 signatories -- technically called amici because it's an amicus brief, not a letter -- identified as "engineers, researchers, scientists, and other professionals" at Google and OpenAI. Perhaps most notable among those named by Wired is Jeff Dean, who is chief scientist across all of Google, and, in his spare time, a prolific funder of other AI companies. In high-profile cases, courts are often inundated with amicus curiae ("friend-of-the-court" in Latin) briefs, which can essentially be filed by anyone in an attempt to sway the outcome of a proceeding. Amicus briefs have a reputation for being dull to read, and an interesting brief -- say, one written in support of the plaintiff by the plaintiff's business rivals -- can have a major impact. The argument in the brief is divided into three main points, although the last two are closely related: Anthropic, the amici argue, was right to stick to its guns on its now-famous "red lines" -- concerns about mass surveillance and fully autonomous lethal weapons -- that made the federal government so incensed that it took the measure now at issue.
The first point is that what the government is doing constitutes an "improper and arbitrary use of power," in the views of the amici, and that it has "serious ramifications for our industry." Other amici include Grant Birkinbine, a security engineer at OpenAI; Sanjeev Dhanda, a software engineer at Google; Leo Gao, a member of the technical staff at OpenAI; Zach Parent, a forward deployed engineer at OpenAI; Kathy Korevec, director of product at Google Labs; and Ian McKenzie, a research engineer at Google. OpenAI CEO Sam Altman has spoken critically of the government's Anthropic decision since early on in these events, writing on X on February 28, "To say it very clearly: I think this is a very bad decision from the [Department of War] and I hope they reverse it. If we take heat for strongly criticizing it, so be it." Altman has acknowledged, however, that his company's deal with the Pentagon coinciding with the explosive rupture between Anthropic and the Pentagon "looked opportunistic and sloppy."
[23]
Anthropic's legal claims against Trump's blanket government ban on AI startup
March 9 (Reuters) - Anthropic filed a lawsuit on Monday to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence lab's high-stakes battle with the U.S. military over usage restrictions on its technology. The AI startup's lawsuit against the U.S. government, President Donald Trump and Defense Secretary Pete Hegseth, among others, makes the following claims: FIRST AMENDMENT VIOLATION The startup claimed that the Pentagon retaliated against Anthropic for protected activities in violation of the First Amendment, which ensures free speech rights. Anthropic said in its lawsuit that the Constitution confers upon it "the right to express its views -- both publicly and to the government -- about the limitations of its own AI services and important issues of AI safety." The lawsuit claims that the U.S. government's blacklisting of the company constitutes retaliation against Anthropic's expressive activities, including protected speech, viewpoints and petitioning of the government. PRESIDENTIAL ACTION BEYOND LEGAL AUTHORITY Anthropic claimed that President Donald Trump directing the U.S. government to stop work with Anthropic -- announced in a post on his social media platform Truth Social late last month -- was "ultra vires," or went beyond his legal power and authority. FIFTH AMENDMENT VIOLATIONS Anthropic alleges the U.S. government violated its Fifth Amendment rights to due process by effectively blacklisting the company without following required legal protocols. According to the lawsuit, the government bypassed mandatory legal procedures by terminating contracts and blocking future work without providing prior notice or a meaningful opportunity to respond.
ADMINISTRATIVE PROCEDURE ACT VIOLATION Anthropic alleged that the DOD designating the company as a supply-chain risk, and prohibiting the department's contractors, suppliers and partners from conducting any commercial activity with it, violates the Administrative Procedure Act. The APA sets out procedures agencies must follow when making decisions and permits courts to override agency actions that are arbitrary, an abuse of discretion or otherwise unlawful. Anthropic said Defense Secretary Pete Hegseth's decision to deem the company a supply-chain risk overstepped his authority, did not follow proper legal procedures and lacked supporting evidence. Reporting by Arsheeya Bajwa and Anhata Rooprai in Bengaluru; Editing by Alan Barona
[24]
OpenAI and Google Workers File Amicus Brief in Support of Anthropic Against the US Government
More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday in support of Anthropic in its legal fight against the US government. "If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the employees wrote. The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon's decision to designate the company a "supply-chain risk." The sanction, which severely limits Anthropic's ability to work with military contractors, went into effect after Anthropic's negotiations with the Pentagon fell apart. The AI startup is seeking a temporary restraining order to continue its work with military partners as the lawsuit progresses. The brief specifically supports that motion. Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but that have expertise relevant to it. The employees signed in a personal capacity and don't represent the views of their companies, according to the brief. OpenAI and Google did not immediately respond to WIRED's request for comment. The amicus brief says that the Pentagon's decision to blacklist Anthropic "introduces an unpredictability in our industry that undermines American innovation and competitiveness" and "chills professional debate on the benefits and risks of frontier AI systems." It notes that the Pentagon could have simply dropped Anthropic's contract if it no longer wished to be bound by its terms.
The brief also says that the red lines Anthropic claims it requested, including that its AI wouldn't be used for mass domestic surveillance and the development of autonomous lethal weapons, are legitimate concerns and require sufficient guardrails. "In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse," the brief says. Several other AI leaders have also publicly questioned the Pentagon's decision to label Anthropic a supply-chain risk. OpenAI CEO Sam Altman said in a post on social media that "enforcing the SCR [supply-chain risk] designation on Anthropic would be very bad for our industry and our country." He added that "this is a very bad decision from the DoW and I hope they reverse it." As Anthropic's relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a decision some people criticized as opportunistic.
[25]
OpenAI, Google AI researchers back Anthropic's Pentagon lawsuit
The restriction typically targets foreign companies that pose national security risks. U.S. officials previously used it against Chinese firms such as Huawei and ZTE. Applying it to a U.S.-based AI developer marks a major shift in federal policy. Anthropic reportedly refused to allow its technology to support mass surveillance of Americans or autonomous weapons systems. Shortly after the designation, the Defense Department finalized a deal with OpenAI. Researchers from both OpenAI and Google quickly responded. They filed a statement in federal court supporting Anthropic's challenge. "The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean. (Wired) The filing appeared on the court docket only hours after Anthropic launched lawsuits against the Defense Department and other federal agencies. The researchers argue that the government had simpler options if it disagreed with Anthropic's contract terms.
[26]
Anthropic sues Pentagon over being labeled a national security risk
Anthropic sued the Trump administration Monday, calling on a federal judge in San Francisco to strike down a government order forbidding military contractors from partnering with the artificial intelligence company on the grounds that it poses a risk to national security. "These actions are unprecedented and unlawful," the company's lawyers wrote. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." The Defense Department last week formally tagged the AI company as a supply-chain risk, the kind of label usually reserved for Chinese and Russian firms suspected of helping foreign spies. The move followed increasingly bitter negotiations over how the company's technology might be used in warfare, with Anthropic seeking guarantees that the Claude model would not be used for mass domestic surveillance or to power fully autonomous weapons. The unprecedented step by the Pentagon came even as Anthropic's tools were playing a central role in President Donald Trump's bombing campaign in Iran. The Pentagon declined to comment on the lawsuit. In legal filings, Anthropic said the administration had violated its First Amendment rights to speak about the limits of AI's military applications and had overstepped its legal authority. "Anthropic was founded based on the belief that AI technologies should be developed and used in a way that maximizes positive outcomes for humanity, and its primary animating principle is that the most capable artificial-intelligence systems should also be the safest and the most responsible," the company's lawyers wrote in a complaint filed in U.S. District Court for the Northern District of California. "Anthropic brings this suit because the federal government has retaliated against it for expressing that principle."
The battle has reverberated through Silicon Valley, raising questions about what limits AI developers should be able to impose on their technology when they do business with the government. Administration officials and the Defense Department have demanded the freedom to use AI systems for any lawful purpose, arguing that the government must have the final say. After Anthropic CEO Dario Amodei refused to agree, Trump said last month that he was ordering federal agencies to stop using Claude. Defense Secretary Pete Hegseth went further, saying he was imposing a far-reaching ban on the company doing any work with military contractors. But behind the scenes, the two sides continued to talk last week. Technology and defense figures lobbied the two sides to de-escalate, warning of the ripple effects that would come with branding a leading American company a security risk in an industry where AI labs, industry giants and hardware makers are intertwined with both one another and the Pentagon. The discussions finally came to an end Thursday, according to a defense official -- a day after tech news site the Information published a caustic internal staff memo in which Amodei said the administration was opposed to the company "because we haven't given dictator-style praise to Trump." The leak of the note contributed to the ultimate breakdown of the talks, according to the defense official and a second person familiar with the discussions. "It blew up negotiations," said the second person, speaking on the condition of anonymity to describe private talks. For now, though, the military is continuing to rely on Claude to help carry out the assault on Iran. The AI tool is embedded in the military's Maven Smart System, which helps commanders analyze intelligence and identify targets to strike. 
In the lead-up to the attack, the system suggested hundreds of targets, with precise coordinates, and ranked them in order of importance, people familiar with the system previously told The Washington Post. It also speeds up planning dramatically and helps evaluate the aftermath of strikes, one of the people said. Defense officials have said they are aware of their dependence on the system, and Trump said he was providing a six-month phaseout of Anthropic's tools. In the long term, Anthropic's competitors are positioned to supplant it, even if the company is victorious in court. As officials were labeling the company a pariah, its chief rival, OpenAI, was finalizing an agreement to work on the Pentagon's secret networks. OpenAI said it had been able to secure protections related to surveillance and autonomous weapons, while agreeing to the "all lawful uses" standard that officials wanted. Elizabeth Dwoskin and Aaron Schaffer contributed to this report.
[27]
Amazon says customers can keep using Anthropic's Claude on its cloud for non-defense workloads
Amazon said Friday it will continue offering Anthropic's artificial intelligence technology to its cloud customers, excluding work involving the Department of Defense. The announcement comes after the federal agency informed Anthropic on Thursday that it would label the company a "supply chain risk." Anthropic responded by saying it has "no choice" but to challenge the designation in court. "AWS customers and partners can continue to use Claude for all their workloads not associated with the Department of War (DoW)," an Amazon Web Services spokesperson said in a statement. "For all DoW workloads which use Anthropic technologies, we are supporting customers and partners as they transition to alternatives running on AWS."
[28]
Anthropic sues the Trump administration over 'supply chain risk' label
Anthropic filed two federal lawsuits on Monday against the Trump administration alleging that Pentagon officials illegally retaliated against the company for its position on artificial intelligence safety. Defense Department officials last week designated Anthropic a supply chain risk, citing national security concerns. It followed CEO Dario Amodei's announcement that he would not allow the company's Claude AI model to be used for autonomous weapons, or to surveil American citizens. The lawsuit says the administration's decision to place the firm on what is effectively a blacklist that blocks Pentagon suppliers from using Claude is an attempt to punish the company over its AI guardrails. "The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance -- AI safety and the limitations of its own AI model -- in violation of the Constitution and laws of the United States," the lawsuit states, adding that Trump officials "are seeking to destroy the economic value created by one of the world's fastest-growing private companies." A Pentagon spokesperson declined to comment. The lawsuits, filed in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C., allege the Trump administration violated the company's First Amendment rights and exceeded the scope of supply chain risk law by using the label against Anthropic. The suits ask a federal judge to block Pentagon officials from enforcing the blacklist designation.
It's the latest turn in what has been a contentious standoff pitting the Pentagon against Anthropic over the company's safety rules that govern its powerful services. Lawyers for Anthropic say in the suit that Claude was not developed to be used for lethal autonomous weapons without human oversight, nor to be deployed to spy on U.S. citizens, and that using the tools in these ways would represent an abuse of its technology. "Allowing Claude to be used to enable the Department to surveil U.S. persons at scale and to field weapons systems that may kill without human oversight would therefore be inconsistent with Anthropic's founding purpose and public commitments," according to the suit. Pentagon officials have disputed that the fight with Anthropic is over lethal weapons and mass surveillance, instead arguing that private companies cannot dictate how the government uses technology in scenarios like warfare and tactical operations, and claiming all of its uses would be "lawful." The supply-chain risk designation follows a meeting in February between Defense Secretary Pete Hegseth and Amodei. National security experts say such a label typically applies to foreign adversary contractors that could potentially sabotage U.S. interests. It is highly unusual, experts say, to use the blacklist against an American company. After Pentagon officials informed Anthropic of the designation, President Trump said on social media that all federal agencies would stop using Anthropic's tools. Anthropic was the first AI frontier lab permitted to be used by U.S. officials on classified networks. But since the feud began, Pentagon officials have said Elon Musk's xAI and OpenAI's ChatGPT have now been cleared for use in classified systems.
The Wall Street Journal has reported that Anthropic's Claude has been used in military operations, including the raid that led to the arrest of Venezuelan leader Nicolás Maduro and for intelligence assessments and identifying targets in the U.S.'s ongoing conflict with Iran. (NPR has not independently confirmed The Journal's reporting.) While Anthropic has strongly resisted the administration on lethal weaponry and mass surveillance, the company says in the suit that since 2024 it has partnered with national security contractors, like Palantir, to assist the government in operations including "rapid processing of complex data, identifying trends, streamlining document review, and helping government officials make more informed decisions in time sensitive situations."
[29]
Anthropic Officially Sues the Pentagon for Labeling the AI Company a 'Supply Chain Risk'
Anthropic says the Pentagon is "seeking to destroy the economic value created by one of the world's fastest-growing private companies." Anthropic filed two lawsuits against the U.S. Department of Defense on Monday over its designation as a "supply chain risk to national security" that would prohibit the AI company from obtaining U.S. government contracts and blacklist it among other defense contractors. The Pentagon labeled the AI company a supply chain risk after it refused to agree to new terms that would allow the U.S. government to use its AI model Claude for mass domestic surveillance and the development of fully autonomous weapons. "These actions are unprecedented and unlawful," the Anthropic lawsuits read. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here." The lawsuits, which were filed in the U.S. District Court of the Northern District of California and the D.C. Circuit Court of Appeals, according to the New York Times, list almost three dozen defendants, including entire government agencies that were using Claude as well as the heads of those agencies. "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners," an Anthropic spokesperson said in a statement. "We will continue to pursue every path toward resolution, including dialogue with the government." Anthropic has previously explained that its objections to domestic surveillance and prohibitions on the use of Claude for fully autonomous weapons are largely over concerns surrounding technical ability.
The company made a similar argument in its lawsuits Monday, explaining that Anthropic has never tested Claude for these uses and the guardrails are rooted in the company's understanding of its risks and limitations, along with a belief in the U.S. Constitution. "Anthropic has collaborated with the Department of War [sic] on modifications to its usage restrictions to facilitate the Department's work with Claude, in recognition of the Department's unique missions. But Anthropic has always maintained its commitment to those two specific restrictions, including in its work with the Department of War," the company wrote in its lawsuits. Anthropic's lawsuits have been expected since President Donald Trump and Defense Secretary Pete Hegseth first threatened to either invoke the Defense Production Act to force Anthropic to do its bidding or be labeled a supply chain risk, a designation never applied to a U.S. company. Anthropic CEO Dario Amodei met with Hegseth on Feb. 24, but the Pentagon didn't formally label Anthropic a supply chain risk until March 5. The lawsuits include screenshots of posts from Trump on Truth Social and Hegseth on X, as well as links to tweets from members of the cabinet, like Treasury Secretary Scott Bessent. In its lawsuits, Anthropic argues that the Department of Defense has every right to look out for supply chain risks, but it has a responsibility to do that in the least restrictive manner. Anthropic writes that DoD "had a straightforward and unrestrictive option that would have fully served that interest: terminate the contract and hire a different developer." Instead, the Pentagon went with a punitive response to make the company toxic. There are some thorny legal questions about what it means to be labeled a supply chain risk, including the question of whether it means that other private companies that do business with the federal government are prohibited from using Anthropic's software in any capacity.
Whatever the letter of the law, defense contractors like Lockheed Martin have been cutting ties with Anthropic anyway. The Pentagon has drawn up new AI guidelines that would require companies to allow the military to engage in "any lawful use" of their models, according to the Financial Times. The definition of "lawful" is obviously very flexible for the Trump regime, given the fact that there's effectively no one to hold them accountable when they break the law. "The consequences of this case are enormous," the lawsuits read. "The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance -- AI safety and the limitations of its own AI models -- in violation of the Constitution and laws of the United States." Several tech observers have argued that harming Anthropic harms U.S. competitiveness and gives China an edge in the race to build the most advanced AI systems. And Anthropic made a similar argument in its lawsuits on Monday. "Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation," the lawsuits read. "The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance." Anthropic didn't immediately respond to a request for comment Monday morning. Gizmodo will update this article when we hear back.
[30]
'These actions are unprecedented and unlawful': Anthropic sues Pentagon over "supply chain risk" designation -- claims free speech and due process violations
* Anthropic has filed two lawsuits against the Pentagon
* The lawsuits call upon the courts to remove the "supply chain risk" designation
* Anthropic feared its Claude model would be used for mass domestic surveillance and fully autonomous weapons systems

Anthropic has filed two federal lawsuits against the Pentagon for designating the AI company as a "supply chain risk" over its refusal to provide full access to its Claude model. Citing fears of mass domestic surveillance and fully autonomous weapons systems, Anthropic had drawn red lines in the Pentagon's usage of its model. Anthropic subsequently lost its $200 million contract with the Department of Defense, and its technology is now banned from use by government contractors working on behalf of the US military.

Far-reaching implications for AI

"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here," Anthropic wrote, with the lawsuits also alleging the company's due process rights have been violated, according to Reuters. The two lawsuits request that the courts remove the "supply chain risk" designation, block its enforcement, and make federal agencies withdraw directives to stop the use of Anthropic's tools. Following the battle between Anthropic and the Pentagon over the use of Claude, Anthropic CEO Dario Amodei made clear his stance on the issue, stating, "We cannot in good conscience accede to their request." While the Department of Defense has not issued a comment on the lawsuit, Liz Huston, a spokeswoman for the White House, said Anthropic was "a radical left, woke company," adding that, "Under the Trump Administration, our military will obey the United States Constitution - not any woke AI company's terms of service." Before Defense Secretary Pete Hegseth levelled the designation against Anthropic, President Trump stated that the company was being run by "left wing nut jobs."
President Trump also previously threatened Anthropic to "get their act together," or he would "use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow." OpenAI has since stepped up to fill the gap left by Anthropic by extending its contract with the Pentagon, but was forced to quickly backtrack on wording within its contract that would have allowed the Pentagon to use its AI models in exactly the same way Anthropic feared its Claude model would be used. OpenAI's head of robotics has resigned over the company's new deal with the Pentagon, stating in a social media post that, "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." The Trump administration has frequently cited its desire to reduce regulations on technology in order to allow the US to retain AI superiority - most recently in its newly unveiled cyber strategy. However, less regulation appears to translate into allowing the Trump administration free rein to use technology however it sees fit to achieve its own aims, regardless of the consequences.
[31]
Anthropic sues Pentagon as Claude app usage soars
Anthropic's AI app, Claude, is surging to the top of global download charts -- while the company wages a legal battle against the Pentagon for designating it a national security risk. In a complaint filed Monday in the U.S. District Court for the Northern District of California, Anthropic claims the federal government launched an unprecedented campaign against the company after it stood by its safety restrictions. Anthropic says it doesn't want its AI to be used for lethal autonomous warfare or mass surveillance of Americans. "Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the complaint states. "When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to 'IMMEDIATELY CEASE all use of Anthropic's technology.'" The fallout has been swift and wide-ranging. The General Services Administration terminated Anthropic's government-wide contract. The Treasury Department, the Federal Housing Finance Agency, the State Department, and other government agencies announced they were cutting ties with the company. Yet the controversy appears to have done little to dampen public enthusiasm for Anthropic's products. If anything, users are more enthusiastic now that Anthropic is going head-to-head with the Trump administration. The company says it is now adding more than one million new users every day globally -- breaking its own signup records each day since the dispute erupted. Claude currently holds the top spot on Apple's App Store in 16 countries, surpassing both OpenAI's ChatGPT and Google's Gemini in more than 20 markets, according to data from AppFigures. The lawsuit marks the culmination of mounting tensions between Anthropic and the Department of Defense, which the Trump administration calls the Department of War.
The company had a major contract that made its generative AI systems the most used across the Pentagon. That relationship unraveled when Defense Secretary Pete Hegseth pushed to dramatically expand AI's role throughout the military, and wanted unrestricted access to AI technologies. The effort required every AI company with Pentagon contracts to renegotiate its agreements. But because Anthropic had become the military's dominant AI provider -- with Claude reportedly the only advanced model allowed to operate on classified systems -- the company found itself at the center of a contentious standoff with Hegseth and Trump. The breakdown was as much about clashing personalities as competing principles, according to the New York Times. Pentagon Chief Technology Officer Emil Michael, a former Uber executive, grew increasingly frustrated with Anthropic CEO Dario Amodei throughout weeks of negotiations. As talks deteriorated, Michael began negotiating a fallback deal with OpenAI -- a company whose CEO, Sam Altman, had been actively courting the Trump administration. Hours after the Pentagon's deadline passed without a deal, Altman announced that OpenAI had reached an agreement with the Defense Department. The lawsuit argues the government's actions -- including Trump's directive ordering every federal agency to immediately stop using Anthropic's AI, and Secretary Hegseth's designation of the company as a supply chain risk -- violate the First Amendment, as well as the Fifth Amendment's due process protections, and the Administrative Procedure Act. Anthropic's filing notes that the supply chain risk label has historically been reserved for foreign companies believed to pose a threat to national security. It has never before been applied to an American firm. The company is asking the court to declare the government's actions unlawful, and to issue a permanent injunction blocking their enforcement.
[32]
Anthropic sues Pentagon over rare "supply chain risk" label
Why it matters: Supply chain risk designations are usually reserved for foreign adversaries that pose a national security risk -- a punishment that could be hard for the government to square with its continued reliance on Claude for operations in Iran.

State of play: The Pentagon last week designated Anthropic a supply chain risk, meaning companies must stop using Claude in cases directly tied to the department.
* President Trump also told the federal government in a Truth Social post to stop using Anthropic's technology, and some agencies have begun offboarding the tools.

Anthropic is asking courts to undo the supply chain risk designation, block its enforcement and require federal agencies to withdraw directives to drop the company.
* The company says its two lawsuits are not meant to force the government to work with Anthropic, but to prevent officials from blacklisting companies over policy disagreements.

What's inside: The first lawsuit -- filed in the U.S. District Court for the Northern District of California -- claims the designation punishes Anthropic for being outspoken about its views on AI policy, including its advocacy for safeguards against its technology being used for mass domestic surveillance or autonomous weapons.
* The Pentagon has a right to disagree and choose not to work with Anthropic, the company argues, but it can't stigmatize the company as a security risk over protected speech.
* The case challenges the statutory authority underpinning the Pentagon's designation, 10 U.S.C. 3252, arguing that Congress required the department to use the least restrictive means to protect the government and mitigate supply chain risk, not to punish a supplier. Procurement laws passed by Congress do not give the Pentagon or President Trump the power to blacklist a company, Anthropic says.
* Companies including Microsoft and Google have said they'll be able to continue non-defense related work with Anthropic.

A second, shorter lawsuit was filed in the D.C. Circuit Court of Appeals because another statute the government invoked can only be challenged there; similar arguments are being made in that case, Anthropic says.
* The company is seeking relief in both jurisdictions.

The other side: The Pentagon argues the dispute is about operational control, not speech.
* Department officials say this has always been about the military's ability to use technology legally, without a vendor inserting itself into the chain of command and putting warfighters at risk.

The big picture: This doesn't preclude the two sides from reaching an agreement.
* Defense undersecretary Emil Michael last week told Pirate Wires he would be open-minded: "I have a responsibility to the Department of War, and if there was a way to ensure that we had the best technology, I have no ego about it."

What's next: Anthropic says it's committed to continuing to serve the Pentagon amid major combat operations.
[33]
AI firm Anthropic sues US defense department over blacklisting
Anthropic filed two lawsuits against the Department of Defense on Monday, alleging that the government's decision to label the artificial intelligence firm a "supply chain risk" was unlawful and violated its first amendment rights. The two sides have been locked in a monthslong heated feud over the company's attempt to implement safeguards against the military's potential use of its AI models for mass domestic surveillance or fully autonomous lethal weapons. The lawsuits, which Anthropic filed in the northern district court of California and the US court of appeals for the Washington DC Circuit, come after the Pentagon formally issued the supply chain risk designation last Thursday, the first time the blacklisting tool has been used against a US company. The AI firm previously vowed to challenge the designation and its demand that any company that does business with the government cut all ties with Anthropic, a serious threat to its business model. Anthropic's lawsuit contends that the Trump administration is punishing the company for refusing to comply with the government's ideological demands, in violation of its protected speech. "These actions are unprecedented and unlawful. The constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic stated in its California lawsuit. Anthropic's AI model, called Claude, has been deeply integrated into the Department of Defense over the past year. Until recently, Claude was also the only AI model approved for use in classified systems. The DoD has reportedly used it extensively in its military operations, including deciding where to target missile strikes in its war against Iran. Anthropic emphasized in its lawsuit that it was still committed to providing AI for national security purposes.
The company also stated in its California lawsuit that it has previously collaborated with the Department of Defense to modify its systems for unique use cases. The company also wants to continue its negotiations with the government, according to a statement. "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners," a spokesperson for Anthropic said in a statement to The Guardian. "We will continue to pursue every path toward resolution, including dialogue with the government". The AI firm alleged in the suit that the Trump administration and Pentagon's punitive actions are "harming Anthropic irreparably," an accusation that somewhat contradicts CEO Dario Amodei telling CBS News last week that "the impact of this designation is fairly small" and the company was "gonna be fine". "Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation," Anthropic alleged in its suit. The Department of Defense did not immediately respond to a request for comment.
[34]
Anthropic sues to block Pentagon blacklisting over AI use restrictions
NEW YORK, March 9 (Reuters) - Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence lab's high-stakes battle with the U.S. military over usage restrictions on its technology. The Pentagon on Thursday slapped a formal supply-chain risk designation on Anthropic, limiting use of a technology that a source said was being used in U.S. military operations against Iran. Anthropic said in its lawsuit that the designation was unlawful and violated its free speech and due process rights. The filing in federal court in California asked a judge to undo the designation and block federal agencies from enforcing it. "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said. Defense Secretary Pete Hegseth designated Anthropic a national security supply-chain risk last week after the startup refused to remove guardrails against using its AI for autonomous weapons or domestic surveillance. The designation poses a big threat to Anthropic's business with the government, and the outcome could shape how other AI companies negotiate restrictions on military use of their technology, though the company's CEO Dario Amodei clarified on Thursday that the designation had "a narrow scope" and businesses could still use its tools in projects unrelated to the Pentagon. President Donald Trump has also directed the government to stop working with Anthropic, whose financial backers include Alphabet's Google (GOOGL.O) and Amazon.com (AMZN.O). Trump and Hegseth said there would be a six-month phase-out. Reuters has reported that Anthropic's business has been hurt by the fallout with the Pentagon.
Trump and Hegseth's actions on February 27 came after months of talks with Anthropic over whether the company's policies could constrain military action and shortly after Amodei met with Hegseth in hopes of reaching a deal. The Pentagon said U.S. law, not a private company, would determine how to defend the country and insisted on having full flexibility in using AI for "any lawful use," asserting that Anthropic's restrictions could endanger American lives. Anthropic said even the best AI models were not reliable enough for fully autonomous weapons and that using them for that purpose would be dangerous. The company also drew a red line on domestic surveillance of Americans, calling that a violation of fundamental rights. After Hegseth's announcement, Anthropic said in a statement that the designation would be legally unsound and set a dangerous precedent for companies that negotiate with the government. The company said it would not be swayed by "intimidation or punishment," and on Thursday Amodei reiterated that Anthropic would challenge the designation in court. He also apologized for an internal memo published on Wednesday by tech news site The Information. In the memo, which was written last Friday, Amodei said Pentagon officials did not like the company in part because "we haven't given dictator-style praise to Trump." The Defense Department signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. Microsoft-backed (MSFT.O) OpenAI announced a deal to use its technology in the Defense Department network shortly after Hegseth moved to blacklist Anthropic. CEO Sam Altman said the Pentagon shared OpenAI's principles of ensuring human oversight of weapon systems and opposing mass U.S. surveillance.
Reporting by Jack Queen in New York; Editing by Noeleen Walder, Lisa Shumaker, Daniel Wallis and Nick Zieminski
[35]
Google and OpenAI employees back Anthropic in a legal fight with the Pentagon | Fortune
Anthropic is getting support from some unusual allies in its legal fight with the Trump administration over the Pentagon's decision to label the company a "supply-chain risk": the employees of rival AI companies. More than 30 employees from OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, filed an amicus brief that warns a Pentagon blacklist of Anthropic threatens to damage the entire American AI industry. "This effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the employees said in the filing. As researchers at rival companies rally around Anthropic, a dispute that began over military contracts could tip into a broader reckoning over who controls AI. The brief came just hours after Anthropic launched two lawsuits contesting the government's decision to designate it a supply-chain risk -- a label that had previously only been applied to foreign companies and which was designed to prevent adversaries from sabotaging U.S. military systems. Relations between the Trump Administration and Anthropic collapsed spectacularly last week after the two failed to agree on a revised contract governing how the company's AI model Claude could be used. Anthropic had been trying to secure two "red lines" around the model's use for domestic mass surveillance and autonomous weapons. The Pentagon, instead, was insisting that Anthropic agree that the U.S. military could use the company's AI systems for "all lawful use." Anthropic refused to agree to this language. In response, the administration cancelled its government contracts and labeled the company a national security risk. Just hours after Anthropic's negotiations collapsed, OpenAI swooped in to secure its own deal with the Pentagon, seemingly agreeing to terms Anthropic had rejected.
The optics of the deals sparked a war of words between the two companies' CEOs, with Anthropic CEO Dario Amodei calling OpenAI's approach to the deal "safety theater" and describing OpenAI CEO Sam Altman's public statements as "straight up lies." Altman then took indirect aim at Anthropic, saying it's "bad for society" when companies abandon democratic norms because they dislike who's in power, in what appeared to be a veiled response to Amodei accusing Altman of offering "dictator-style praise to Trump." The mood among the companies' executives may be less conciliatory, but the amicus filing is an unusual show of solidarity among employees of rival companies. While the employees said they signed in a personal capacity, the brief also follows an open letter signed by nearly 900 employees at Google and OpenAI that urged their own leadership to refuse government requests to deploy AI for domestic mass surveillance or autonomous lethal targeting -- the same "red lines" that Anthropic drew in its negotiations with the Pentagon. OpenAI has lost at least one staffer over the controversy. Caitlin Kalinowski, who had led hardware and robotics at OpenAI since November 2024, resigned over the company's Pentagon deal, saying domestic surveillance without judicial oversight and lethal autonomy without human authorization "are lines that deserved more deliberation than they got." Anthropic's fight with the Pentagon is already set to have big implications for the control of AI overall and the relationship between business and government, but it could also spark a wider revolt of tech workers against their management teams. Google has been hit by this kind of employee dissent previously, in 2018, when it was considering working with the U.S. military on Project Maven, part of which involved using AI to analyze aerial surveillance images.
The employee objections contributed to Google declining to renew its work on analyzing drone surveillance, which was subsequently taken over by Amazon and Microsoft.
[36]
Anthropic sues Pentagon over 'supply chain risk' designation
Anthropic sued the Trump Administration on Monday as part of its ongoing feud with the Pentagon over the military's use of its artificial intelligence. The company behind Claude AI filed two lawsuits, one in a federal appeals court in Washington, D.C., and another in California federal court, to challenge the Pentagon's decision to classify it as a "supply chain risk," the Associated Press reported. The U.S. struck Iran while still using Anthropic's tools for targeting and intelligence systems in Central Command, the military's Middle East headquarters. But Anthropic said it refused to let its technology be used for autonomous weapons or mass surveillance and wouldn't budge when officials demanded blanket permission to use it in any lawful scenario. Trump responded by calling Anthropic a "radical-left, woke company" that would never dictate how the military fights. The administration then classified Anthropic as a supply-chain risk, a designation that in the past has generally been reserved for Chinese firms suspected of espionage. The decision means the government could force any company doing business with the Defense Department to prove it doesn't use Anthropic's tools. Anthropic rejected the classification. Its CEO, Dario Amodei, said in a letter last week he does "not believe this action is legally sound, and we see no choice but to challenge it in court." -- Jackie Snow contributed to this report.
[37]
Anthropic Sues Pentagon
Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company's AI models couldn't be used for mass surveillance of Americans or to direct autonomous weapons systems. Defense secretary Pete Hegseth and president Donald Trump lambasted Amodei for dictating what they could or couldn't do with the company's tech. They quickly announced that the company would be labeled a supply chain risk "effective immediately," a sanction conventionally reserved for companies from adversary countries. It was an unprecedented move that sent shockwaves across Silicon Valley, with a coalition of tech industry groups signing a public letter condemning the decision. Even Amodei's top rival, OpenAI CEO Sam Altman, argued that the Trump administration had overstepped by declaring Anthropic's tech persona non grata. Anthropic risks losing hundreds of millions of dollars' worth of US government contracts. While Amodei has since issued an apology for pushing back against Trump in a note to staffers that was leaked to The Information, Anthropic is now ready to challenge the White House's latest designation in court. As Wired reports, Anthropic filed a federal lawsuit against the Pentagon on Monday, challenging its move to label it a "supply chain risk." As Amodei noted in his blog post, "we do not believe this action is legally sound, and we see no choice but to challenge it in court." In the company's lawsuit, filed in a California court, it argues that White House officials acted unconstitutionally and out of retaliation. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the lawsuit reads. "Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation." But experts believe Anthropic may have a very difficult road ahead.
For one, "it's 100 percent in the government's prerogative to set the parameters of a contract," Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of a successful appeal. Johnson argued it's in Anthropic's best interest to argue in court that it was singled out among the US government's other AI contractors. While the Pentagon has officially ratified its decision to name Anthropic a supply chain risk, the company's Claude chatbot continues to be widely used in the US war on Iran, meaning the Department of Defense is now using compromised tech by its own admission. Government agencies outside of the military have said they would immediately follow the direction of the president and stop using Claude. A Microsoft spokesperson also told Wired that it will continue to offer the chatbot to all other agencies, except for the Defense Department. The lawsuit may greatly complicate efforts to bury the hatchet. "Anthropic has much more in common with the Department of War than we have differences," Amodei wrote in his apology. "We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government." But the lawsuit itself takes a dramatically different tone. The Trump administration's actions "are as unlawful as they are unprecedented," the document reads, calling out Hegseth specifically for sidestepping Congress. "The Challenged Actions inflict immediate and irreparable harm on Anthropic," the lawsuit reads. "On others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance."
[38]
Google joins Microsoft in telling users Anthropic is still available outside defense projects
Google said it will continue offering Anthropic's artificial intelligence technology to clients, except for defense work, a day after Microsoft issued a similar statement to customers. The announcements from two of the three leading cloud infrastructure vendors follow the Defense Department's official designation of Anthropic as a supply chain risk. "We understand that the Determination does not preclude us from working with Anthropic on non-defense related projects, and their products remain available through our platforms, like Google Cloud," a Google spokesperson said Friday. Amazon, the leader in public cloud, still hasn't issued a comment on the matter. Anthropic's Claude models are available through Google Cloud via the Vertex AI platform. The search giant is also a significant financial backer of Anthropic and, in January 2025, agreed to an additional $1 billion investment, adding to its previous $2 billion stake. Anthropic uses Google Cloud's AI infrastructure to train its models, and recently expanded the partnership by gaining access to up to 1 million of Google's custom tensor processing units (TPUs). After Anthropic refused to agree to the DOD's requested terms of use last week, President Donald Trump instructed federal agencies to stop using the company's technology, and Defense Secretary Pete Hegseth said its work with Anthropic would wind down over six months. CNBC has confirmed that Anthropic models were used by the U.S. in its latest attack on Iran. Some defense technology companies have told employees to stop using Anthropic's Claude models and to switch to alternatives, including from rival OpenAI. Microsoft was the first major partner to say it will keep working with Anthropic after the Pentagon's actions.
"Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers -- other than the Department of War," Microsoft said late Thursday. Anthropic CEO Dario Amodei said on Thursday that his company has "no choice" but to challenge the supply chain risk designation in court. -- CNBC's Jordan Novet contributed to this report.
[39]
Anthropic Sues Trump Admin Over 'Supply Chain Risk' Designation - Decrypt
The designation bars Pentagon contractors from doing business with the firm. Anthropic has turned to the federal courts to fight a sweeping blacklist by the Donald Trump administration, claiming the government branded the AI startup a national security threat in retaliation for its refusal to relax safety protocols. The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, challenges actions taken after President Trump directed federal agencies in February to stop using Anthropic's technology. This followed public comments from Anthropic CEO Dario Amodei, who said the company would not comply with the Pentagon's request for unrestricted access to Claude. The complaint names multiple federal agencies and senior officials as defendants, including Defense Secretary Pete Hegseth, Treasury Secretary Scott Bessent, and Secretary of State Marco Rubio. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," attorneys for Anthropic said in the lawsuit. "No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation." The dispute began in January when Pentagon officials demanded AI contractors allow their systems to be used for "any lawful use," including military applications. While Anthropic had by then already entered into a $200 million contract with the Department of Defense, it refused to remove two safeguards prohibiting the use of Claude for mass domestic surveillance of Americans or for fully autonomous lethal weapons systems.
"The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance," attorneys for Anthropic stated in the lawsuit. For AI developers, including SingularityNET CEO Ben Goertzel, the designation is an odd choice and doesn't fit with the typical meaning of a supply chain threat, something usually reserved for software from adversaries that could contain hidden malware, viruses, or spyware. "Anthropic not being willing to have their software used for autonomous killing or mass surveillance doesn't seem to pose a risk of that nature," Goertzel told Decrypt. "That just means if you want to use software for autonomous killing or mass surveillance, then buy somebody else's software. So the logic of making it a supply chain risk eludes me." Goertzel said differences among leading AI models may limit the practical impact of the decision. "In the end, Claude, ChatGPT, and Gemini are not that far off from each other," he said. "As long as one of these top systems is being used by the U.S. government, it's all about the same thing. And the intelligence agencies, under the cloak of top secret clearance, would use the software however they wanted." Anthropic is asking the court to declare the government's actions unlawful and block enforcement of the "supply chain risk" designation that prevents federal agencies and Pentagon contractors from doing business with the company. "There is no valid justification for the Challenged Actions," the lawsuit said. "The Court should declare them unlawful and enjoin Defendants from taking any steps to implement them." Anthropic did not immediately respond to requests for comment by Decrypt. 
Even after designating Anthropic a risk to national security, Claude has been used in ongoing military operations, including by U.S. Central Command to help analyze intelligence and identify targets during strikes on Iran. Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, said in a statement shared with Decrypt that the case raises concerns about constitutional protections when national security claims are used to justify government action. "While the courts have been hesitant in the past to question the government's claims of national security concerns, the circumstances of this case certainly highlight the real risk to the First Amendment rights of Americans if the underlying considerations of such claims are not thoroughly scrutinized," she said.
[40]
Internal Pentagon memo orders military commanders to remove Anthropic AI technology from key systems
Eleanor Watson is a CBS News multi-platform reporter and producer covering the Pentagon. The Defense Department has officially notified senior leadership figures throughout the U.S. military that they must remove Anthropic's artificial intelligence products from their systems within 180 days, according to an internal memorandum obtained by CBS News. The memo was dated March 6, a day after the Pentagon formally designated Anthropic a supply chain risk. It was distributed to senior leaders on Monday, alleging Anthropic's AI "presents an unacceptable supply chain risk for use in all [Department of War] systems and networks." The document, signed by Defense Department Chief Information Officer Kristen Davies, represents the latest salvo in an escalating feud between the Trump Administration and Anthropic. The notice sheds light on the wide-ranging steps military commanders will need to take to remove Anthropic AI from key national security systems, including those for nuclear weapons, ballistic missile defense and cyber warfare. It also demanded that any other company doing business with the Pentagon must stop using all Anthropic products on work related to Defense Department contracts within 180 days. In the memo, Davies warned that adversaries "can exploit vulnerabilities" of the daily operations of the Pentagon, and possible exploitation could pose "potential catastrophic risks to the warfighter." Davies said she is the only one who can grant an exception. "Exemptions will only be considered for mission-critical activities directly supporting national security operations where no viable alternative exists, and the requesting Component must submit a comprehensive risk mitigation plan for approval," she wrote. A senior Pentagon official confirmed the memo's authenticity. Anthropic did not immediately respond to a request for comment. 
The federal government's action is said to be unprecedented -- the first time an American company has been designated a supply chain risk. During President Trump's first term, the government took similar action to restrict foreign-based companies like Chinese telecommunications giant Huawei. It comes after an impasse over Anthropic's request for two "red lines" that would explicitly prevent the U.S. military from using its Claude model to conduct mass surveillance on Americans or power fully autonomous weapons. "We believe that crossing those lines is contrary to American values, and we wanted to stand up for American values," Anthropic CEO Dario Amodei told CBS News. The Pentagon previously said it wanted to be able to use Claude for "all lawful purposes," without restrictions, arguing that the uses of AI that Anthropic is concerned about are already prohibited. Claude is currently being used by the US military in the war on Iran, according to sources familiar with the military's use of AI. Anthropic is currently the only AI company whose models are deployed on the Pentagon's classified systems. After talks between the two sides broke down last month, one of Anthropic's largest rivals -- ChatGPT creator OpenAI -- said it had signed a deal with the Pentagon. On Monday, Anthropic filed two lawsuits against the federal government, alleging that Pentagon officials' decision to deem the company a supply chain risk amounted to illegal retaliation. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the company said in the lawsuit. "No federal statute authorizes the actions taken here." White House spokesperson Liz Huston responded to the lawsuit by saying President Trump "will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates." 
A source directly familiar with Claude's military capabilities told CBS News the main task Claude is undertaking for the military is sifting through large amounts of intelligence reports, like synthesizing patterns, summarizing findings, and surfacing relevant information faster than a human analyst could. "The military is now processing roughly a thousand potential targets a day and striking the majority of them, with turnaround time for the next strike potentially under four hours," said retired Navy Admiral Mark Montgomery, now a senior director at the Foundation for Defense of Democracies. "A human is still in the loop, but AI is doing the work that used to take days of analysis -- and doing it at a scale no previous campaign has matched."
[41]
Anthropic is suing the US government for blacklisting it and it's calling in support from Google and OpenAI
Late last month, Anthropic refused to remove its safeguards against AI-driven autonomous weapons and mass surveillance when the US government asked it to do so. The administration then cancelled its contract in response. OpenAI took up that contract, and the Department of War soon threatened to label Anthropic a 'supply chain risk'. Now, in yet another twist in the story, Anthropic is suing the US government. According to Anthropic CEO Dario Amodei, "They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk" -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." Anthropic filed its complaint (PDF warning) on March 09, claiming that Anthropic held to its principles of building AI that "maximises positive outcomes for humanity" and the US government, in turn, "retaliated against it for expressing that principle." In response to the initial claim from Anthropic, Under Secretary of War Emil Michael accused Anthropic CEO Amodei of being a 'liar' and having a 'god complex'. Michael argues, "He [Amodei] wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." Notably, the suit mentions that Anthropic agreed to much of the Department of War's request to discard usage restrictions, except for two, which cover the use of the technology in lethal autonomous warfare and mass surveillance of Americans. It goes on: "Throughout these discussions, Anthropic expressed its strongly held views about the limitations of its AI services.
It also made clear that, if an arrangement acceptable to the Department could not be reached, Anthropic would collaborate with the Department on an orderly transition to another AI provider willing to meet its demands." Supporting its case, OpenAI and Google have both signed (PDF warning) an amicus brief (where a party with an interest in the case, but not directly involved, offers testimony or argument) in support of Anthropic. It is likely in the best interest of OpenAI, which is currently working with the US Department of War, and Google, which has invested heavily in AI, to support any action to stop the US government from blacklisting AI companies. The brief declares, "Defendants recklessly invoked national security authorities intended to protect the procurement process from interference by foreign adversaries. If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond. And it will chill open deliberation in our field about the risks and benefits of today's AI systems." OpenAI tagging in here is interesting, given that CEO Sam Altman admitted to rushing out its deal with the Department of War and that a hardware leader resigned in response. OpenAI has taken criticism for its deal both externally and internally, and has been in a bit of a weird place ever since. Luiza Jarovsky, the cofounder of the AI, Tech and Privacy academy, who holds a PhD in Law, reckons "Anthropic is likely to win" and that it could set a precedent for AI governance in the US. Getting the support of both Google and OpenAI only bolsters its defence. Anthropic is seeking a hearing to declare the Department of War's action a violation of the First and Fifth Amendments of the United States Constitution, and is seeking damages plus payment of any attorney fees.
It also pleads that the judge grant "such further and other relief as this Court deems just and proper." A hearing could be held as soon as this Friday.
[42]
Anthropic takes Trump administration to court over Pentagon row
Washington (United States) (AFP) - Anthropic filed suit Monday against the Trump administration, alleging the US government retaliated against the AI company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans. In the 48-page complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked. In its lawsuit, Anthropic said it was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible." "Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the lawsuit says. Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei. The label not only blocks use of the company's technology by the Pentagon, but also requires all defense vendors and contractors to certify that they do not use Anthropic's models in their work with the department. "The consequences of this case are enormous," the lawsuit states, with the government "seeking to destroy the economic value created by one of the world's fastest-growing private companies." The suit names more than a dozen federal agencies and cabinet officials as defendants. The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems. President Donald Trump subsequently ordered every federal agency to cease all use of Anthropic's technology. 
Hours later, Hegseth designated Anthropic a "Supply-Chain Risk to National Security" and ordered that no military contractor, supplier or partner "may conduct any commercial activity with Anthropic," while allowing a six-month transition period for the Pentagon itself. The row erupted days before the US military strike on Iran. Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on the Defense Department's classified systems. In its lawsuit, Anthropic argues the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon's statutory authority, and deprive it of due process under the Fifth Amendment. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the complaint states. Founded in 2021 by siblings Dario and Daniela Amodei, both former staffers at ChatGPT-maker OpenAI, Anthropic has positioned itself as a safety-focused alternative in the AI race.
[43]
Anthropic sues US Defense Department over supply chain risk label
The Pentagon last week designated Anthropic a supply chain risk, meaning companies must stop using Claude in the defence department. Anthropic sued the US Department of Defense in court on Monday after the agency labelled it a supply chain risk last week. The legal complaint comes after a conflict between Anthropic and the DOD over whether the military should have unrestricted access to Anthropic's AI systems. The Pentagon sanctioned the Claude maker last week, deeming it a supply chain risk, which means companies must stop using Claude in cases directly tied to the department. President Donald Trump also said he would order federal agencies to stop using Claude, though he gave the Pentagon six months to phase out a product that's deeply embedded in classified military systems, including those used in the Iran war. "We do not believe this action is legally sound, and we see no choice but to challenge it in court," Anthropic CEO Dario Amodei wrote in a blog post on Thursday. The lawsuit has requested that a judge reverse the 'supply chain risk' label and stop federal agencies from enforcing it. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in the filing. "Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation." The Defense Department declined to comment Monday, citing a policy of not commenting on matters in litigation, AP reported. Anthropic said it sought to restrict its technology from being used for two high-level uses: mass surveillance of Americans and fully autonomous weapons. Defense Secretary Pete Hegseth and other officials publicly insisted the company must accept "all lawful uses" of Claude and threatened punishment if Anthropic did not comply. Anthropic makes the chatbot Claude and is the last of its peers not to supply its technology to a new US military internal network.
Anthropic won a $200 million (€167 million) contract from the US Department of Defence last July to "prototype frontier AI capabilities that advance US national security," Anthropic said. The company signed a partnership with Palantir Technologies in 2024 to integrate Claude into US intelligence software.
[44]
Anthropic sues Trump administration after AI dispute with Pentagon
Anthropic has sued the Defense Department and other federal agencies after the Pentagon announced last week it would label the leading AI company as a threat to national security and ban the use of its products for defense purposes. In late February, President Donald Trump said he would also ban the use of Anthropic's products across other federal agencies. In its filing, Anthropic alleged that the federal government's moves to ban Anthropic go beyond a normal contract dispute and instead represent an "unlawful campaign of retaliation." The company said its "reputation and core First Amendment freedoms are under attack" given the government's actions and sought to prevent the Trump administration from implementing the bans. Anthropic said the supply chain-risk designation and messaging from the White House were already "jeopardizing hundreds of millions of dollars," illegally ignored required procedures and overstepped presidential authority. "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security," an Anthropic spokesperson told NBC News, "but this is a necessary step to protect our business, our customers, and our partners. We will continue to pursue every path toward resolution, including dialogue with the government." The suit comes after months of increasingly tense negotiations between the company and the Pentagon over how the military should be able to use Anthropic's advanced AI systems. Anthropic and CEO Dario Amodei had sought stronger guarantees from the Pentagon that its systems would not be used for mass domestic surveillance or direct use in deadly autonomous weapons, while the Pentagon sought to use the systems for "all lawful use."
Anthropic's flagship AI system, Claude, has reportedly been used by the Pentagon on classified networks for support in intelligence assessments, targeting recommendations and battle simulations as part of its partnership with data analytics company Palantir. Claude has also been used across federal agencies to assist with data analysis and other administrative functions, much like consumer-facing chatbots. Amodei has said that the supply-chain risk label, historically reserved for foreign adversaries and associated companies that cannot be trusted in critical industries, has never before been publicly applied to an American company. When the two parties could not come to an agreement by a Pentagon-set deadline of February 27, President Donald Trump announced on Truth Social that all federal agencies must "IMMEDIATELY CEASE all use of Anthropic's technology." Shortly afterwards, Defense Secretary Pete Hegseth announced on X that he would direct the Pentagon to label Anthropic as a "Supply-Chain Risk to National Security." Hegseth made good on the threat on Wednesday, officially informing Anthropic that the company was banned from doing business with the Pentagon and its contractors for defense purposes.
[45]
Defense firms navigate Anthropic-Pentagon quarrel
The very public Pentagon-Anthropic feud also risks spooking the defense-tech market as it gains wider public acceptance. Driving the news: Axios asked 14 big-name defense contractors about their dealings with Anthropic as well as whether and how they use Claude. * Northrop Grumman said it has "very limited use of Claude" but would not continue the "pilot effort." The AI space, it said, is "rapidly changing." * Leidos said it employs a "range of commercial AI tools" and has licensed Claude for "limited use." The company is "prepared to adjust" its "technology stack as required" to match government policy. * HII said it does not have a relationship with Anthropic. * Both RTX and L3Harris Technologies declined to comment. * Others, including Anduril Industries, BAE Systems and GE Aerospace, did not respond by deadline. Flashback: The Defense Department last week probed Boeing and Lockheed Martin's exposure to Anthropic. At the time, a source familiar told Axios the department would reach out to "all the traditional primes." * Boeing is the seventh-largest defense contractor in the world ranked by defense revenue. Lockheed is first.
[46]
Anthropic sues to scrap Trump administration's Claude ban - SiliconANGLE
Anthropic PBC is challenging the Trump administration's recent decision to restrict the use of its software in the public sector. The artificial intelligence developer first announced its intent to litigate last month. Today, it filed two complaints with the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C. Last June, Anthropic won a contract to provide the Pentagon with access to its Claude series of large language models. Not long thereafter, defense officials asked the company to modify Claude's terms of service to permit "all lawful use." The company declined. In particular, Anthropic indicated that it wouldn't permit Claude to be used for domestic mass surveillance or the development of fully autonomous weapons. Last month, U.S. President Donald Trump ordered that federal agencies stop using Claude within 6 months. In a related move, U.S. Defense Secretary Pete Hegseth designated the company as a supply chain risk. The latter move barred U.S. defense contractors from using Claude to deliver products or services to the Pentagon. The first lawsuit that Anthropic filed today seeks to scrap the two directives. It brings more than a half dozen arguments in favor of voiding the orders. First, the complaint alleges that the orders are causing "immediate and irreparable" economic harm to Anthropic. It states that the U.S. General Services Administration, which manages federal procurement programs, has scrapped a contract that allowed multiple agencies to access Claude. The lawsuit goes on to argue that the federal ban's impact extends to Anthropic's private sector business. "Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term," the company wrote. A subsequent section of the complaint alleges that Trump's ban on Claude in federal networks is "outside any authority that Congress has granted the Executive." 
Anthropic is making a similar case against Hegseth's move to designate it as a supply chain risk. The lawsuit states that the latter directive is "contrary to Section 3252," a part of the U.S. legal code focused on such regulatory actions. Anthropic's second complaint, which it filed with a Washington, D.C. appeals court, takes aim at the process through which Hegseth issued the designation. The order was preceded by a "determination that Anthropic presents a supply chain risk to national security under the Federal Acquisition Supply Chain Security Act of 2018." The AI developer is asking the court to review the determination. According to the lawsuit, Anthropic is seeking the review because it believes that the process through which the designation was issued didn't follow the required procedures. Furthermore, the company alleges that the designation represents "a pretextual form of retaliation in violation of the First and Fifth Amendments to the U.S. Constitution; arbitrary, capricious, and an abuse of discretion." Anthropic is also asking that the Claude ban be suspended until the courts issue a ruling in the matter.
[47]
Anthropic sues the Pentagon after being labeled a national security risk
After running afoul of the U.S. defense apparatus, Anthropic is going on offense. On Monday, the AI developer filed a lawsuit against the Trump administration after it was labeled a "supply chain risk." The filing is the latest salvo in the back-and-forth between Anthropic and the federal government, which escalated after the Pentagon asked it to make changes to its safety guardrails. Anthropic refused, which led to the Trump administration retaliating. President Trump, in a social media post on February 27, called Anthropic a "radical left, woke company" run by "Leftwing nut jobs," and directed "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." That same day, Secretary of War Pete Hegseth announced that he was "directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Anthropic has responded by filing suit. "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the court filing reads. "No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."
[48]
Anthropic sues the Pentagon after being labeled a threat to national security | Fortune
Anthropic is suing the Department of Defense and other federal agencies after the Trump administration formally designated the company a "supply chain risk" late last week. It is the latest development in a disagreement between the Pentagon and Anthropic over the Trump Administration's use of the company's AI technology, one with big implications for the control of AI overall and for the relationship between business and government. Anthropic had been trying to ensure that the government does not use its AI model Claude for domestic mass surveillance and autonomous weapons. The Pentagon, which has been using Claude for a variety of purposes, including processing intelligence, wanted these restrictions removed from Anthropic's existing contract and for Anthropic to agree to a new contract in which it allowed the military to deploy Claude for "all lawful use." Anthropic refused to agree to these terms. In response, the Trump Administration cancelled the company's deals with the government and designated it a supply chain risk, a label historically reserved for companies tied to foreign adversaries. The designation means that no defense contractors can use Anthropic's technology in the completion of any work for the Department of War. Pete Hegseth, the secretary of war, had said that the military would stop using Claude "immediately," but also that there would be a six-month phase-out of the technology to prevent a disruption in critical operations. The military has reportedly been using Claude in its current war with Iran to help process intelligence and targeting data.
Hegseth had also said that the supply chain risk designation would mean defense contractors must sever all commercial ties to Anthropic. Most legal experts have said that reading is not supported by the statute on supply chain risks, and Anthropic has said that the formal designation it received from the Pentagon applies only to work on defense contracts, not to other commercial work unrelated to those contracts. The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, calls the administration's actions "unprecedented and unlawful" and claims they threaten to harm "Anthropic irreparably." The complaint claims that government contracts are already being canceled and that private contracts are also in doubt, putting "hundreds of millions of dollars" at near-term risk. An Anthropic spokesperson told Fortune: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners." "We will continue to pursue every path toward resolution, including dialogue with the government," they added. The Department of War told Fortune it does not comment on litigation as a matter of policy. The supply chain risk designation requires defense vendors and contractors to certify they are not using Anthropic's models in their Pentagon work. Trump also directed federal agencies in a social media post to "immediately cease" all use of Anthropic's technology, writing in a Truth Social post: "WE will decide the fate of our Country - NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about." Legal experts have questioned whether the supply chain risk designation is legally sound. In an article in the non-profit publication Lawfare, lawyers Michael Endrias and Alan Z.
Rozenshtein argued the designation "exceeds what the statute authorizes," that "the required findings don't hold up," and that Hegseth's own public statements "may have doomed the government's litigation posture before it even begins." "The government cannot simultaneously claim a vendor poses an acute supply chain threat requiring emergency exclusion and that it's perfectly safe to keep using the vendor for half a year," they wrote, calling the overall designation "political theater: a show of force that will not stick." Complicating matters, just hours after the Anthropic-Pentagon negotiations fell apart, OpenAI struck its own deal with the Department of War, seemingly agreeing to provide its models without the contractual limitations Anthropic had been insisting upon but with what OpenAI has said are additional contractual and technical safeguards that will, in practice, result in the same restrictions on the use of its technology that Anthropic had been trying to achieve. The deal quickly drew sharp criticism, with many questioning whether OpenAI's contract language offered meaningfully different protections than what Anthropic had rejected. OpenAI later acknowledged the announcement looked "sloppy and opportunistic" and said it was renegotiating some of its terms. Relations between the two rival companies have since deteriorated. In an internal memo reported by The Information, Amodei called OpenAI staff "gullible" and accused the company's leadership of spreading "straight up lies." Amodei later apologized for the message, saying the memo had been written just hours after negotiations had fallen apart and did not reflect his "careful or considered views."
[49]
AI tech firm Anthropic sues over blacklisting by Pentagon
Anthropic argues it is being punished by the Pentagon with an "unlawful campaign of retaliation" that violates free speech. Anthropic, which owns the AI assistant Claude, is suing the Trump administration after what it called an "unprecedented and unlawful" decision to blacklist the firm on national security grounds. The Pentagon designated the artificial intelligence company a "supply chain risk" on Thursday over its refusal to allow unrestricted military use of its technology. The two sides have been locked in an unusually public dispute over how Anthropic's AI chatbot Claude could be used in warfare. Anthropic responded on Monday by filing two separate lawsuits, one in California federal court and another in the federal appeals court in Washington DC, each challenging different aspects of the Pentagon's actions against the company. "These actions are unprecedented and unlawful," Anthropic's lawsuit says. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorises the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation." The defence department declined to respond, saying its policy is not to comment on ongoing litigation. Anthropic, whose financial backers include Alphabet's Google and Amazon, has insisted on restricting its technology from being used for mass surveillance of Americans and fully autonomous weapons. US defence secretary Pete Hegseth had threatened to punish Anthropic if it did not accept "all lawful uses" of Claude. Donald Trump also said he would order federal agencies to stop using Claude, though he gave the Pentagon six months to stop using the AI assistant, which is deeply embedded in classified military systems, including those used in the Iran war.
Designating Anthropic a supply chain risk would cut off its defence work by using powers designed to prevent foreign adversaries from harming national security systems. It is the first time the federal government is known to have used the designation against a US company. Anthropic, which has recently been valued at $380bn (£284bn), has attempted to convince businesses and other government agencies that the Trump administration's penalty is narrow, and only affects military contractors when they are using Claude for defence work. Most of its projected $14bn (£10.5bn) in revenue this year comes from businesses and government agencies, which are using Claude for computer coding and other tasks. The defence department signed agreements worth up to $200m each with major AI labs in the past year, including Anthropic, OpenAI and Google. Microsoft-backed OpenAI announced a deal with the US military to use its technology, shortly after Mr Hegseth moved to blacklist Anthropic.
[50]
Defense tech companies are dropping Claude after Pentagon's Anthropic blacklist
"Most of our companies are actively involved in large defense contracts and so are very strict in their interpretation of the requirements," said Alexander Harstrick, managing partner at J2 Ventures, which backs startups in the space. Harstrick told CNBC in an email that 10 of his firm's portfolio companies that work with the Department of Defense "have backed off of their use of Claude for defense use cases and are in active processes to replace the service with another one." Meanwhile, defense contractors like Lockheed Martin are expected to remove Anthropic's technology from their supply chains, Reuters reported late Tuesday. It's a sudden reversal for Anthropic, which gets about 80% of its revenue from enterprise customers, CEO Dario Amodei told CNBC in January. The company entered the DoD's ecosystem in late 2024 through a partnership with software and services provider Palantir. Months after that agreement, Claude became the first major model deployed in the government's classified networks through a $200 million contract with the DoD. The model's popularity continued to soar across the business world, particularly in the area of coding assistants. Defense Secretary Pete Hegseth declared on X that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic. The announcement came after Anthropic executives refused to comply with the government's demands over its model use. They wanted assurances that their AI would not be tapped for fully autonomous weapons or mass domestic surveillance of Americans. Anthropic's models are still being used to support U.S. military operations in Iran, even after the announcement from the Trump administration, as CNBC previously reported.
[51]
Anthropic files lawsuits against US government over supply chain risk
Anthropic filed a lawsuit against the U.S. government to prevent the Pentagon from adding the company to a national security blocklist. The Department of Defense sent a letter to Anthropic confirming the company was labeled a supply chain risk. The lawsuit claims the designation is unlawful and violates free speech and due process rights. The legal action follows a dispute between the AI company and the government regarding the use of its models for surveillance and autonomous weapons. The Department of Defense and Defense Secretary Pete Hegseth pressured Anthropic to remove safeguards from its AI systems. This development highlights the escalating conflict between AI safety principles and national security demands. On February 27, Anthropic CEO Dario Amodei refused to allow its model to be used for mass surveillance or autonomous weapons. Hegseth threatened a supply chain risk designation and the cancellation of a $200 million contract. President Trump ordered all federal agencies to cease using Anthropic on the same day. Anthropic agreed to collaborate with the Department on an orderly transition to another AI provider following the deadline. The lawsuit characterizes the government's actions as an "unprecedented and unlawful [...] campaign of retaliation." Anthropic stated, "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." Anthropic rival OpenAI quickly made a deal with the Department of Defense. OpenAI CEO Sam Altman said the deal includes prohibitions on domestic mass surveillance and requirements for human responsibility over the use of force. OpenAI wrote into its contract that its AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. OpenAI's head of robotics hardware, Caitlin Kalinowski, resigned this weekend in response to the Defense Department deal.
Kalinowski wrote on X that surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation. An Anthropic spokesperson stated the company will continue to pursue every path toward resolution, including dialogue with the government.
[52]
Anthropic Sues Trump Admin to Undo 'Supply Chain Risk' Label
Anthropic, the creator of the AI software Claude, has sued the Trump administration for what it says is an "unlawful campaign of retaliation" after the company refused to allow the military unrestricted use of its technology. Anthropic sued multiple government agencies and officials in a California federal court on Monday, asking the court to reverse the Department of Defense's decision to label the company a "supply chain risk." It also seeks to overturn US President Donald Trump's directive to federal employees to stop using Claude. Anthropic also filed suit in a Washington, D.C., appeals court to challenge the Defense Department's decision. "These actions are unprecedented and unlawful," Anthropic argued. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." Last month, Defense Secretary Pete Hegseth, who is named in the lawsuit, moved to label Anthropic as a supply chain risk, which was finalized on March 3, meaning any person or business doing business with the military can't also deal with Anthropic. It is the first time an American company has been designated a supply chain risk, a label usually reserved for companies tied to foreign adversaries. The US government and the Pentagon have used Anthropic since 2024, and the company's technology is the first AI to be deployed for use in classified work. Anthropic said that Hegseth's decision came after he demanded the company "discard its usage restrictions altogether," but Anthropic maintained its technology shouldn't be used for lethal autonomous warfare and mass surveillance of Americans, clauses that were always part of its government contracts. "Anthropic has never tested Claude for those uses," the company said in its lawsuit. "Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare." 
Anthropic's lawsuit also named the US Treasury and its secretary, Scott Bessent, the State Department, and Secretary of State Marco Rubio, along with 17 other government agencies and officials. A group of more than 30 AI engineers and scientists from OpenAI and Google, including the latter's chief scientist, Jeff Dean, also filed a legal brief in support of Anthropic on Monday. "If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the group wrote.
[53]
Anthropic sues Trump administration seeking to undo 'supply chain risk' designation
Anthropic is suing the Trump administration, asking federal courts to reverse the Pentagon's decision designating the artificial intelligence company a "supply chain risk" over its refusal to allow unrestricted military use of its technology. Anthropic filed two separate lawsuits Monday, one in California federal court and another in the federal appeals court in Washington, D.C., each challenging different aspects of the Pentagon's actions against the company. The Pentagon last week formally designated the San Francisco tech company a supply chain risk after an unusually public dispute over how its AI chatbot Claude could be used in warfare. The lawsuits aim to undo the designation and block its enforcement.
[54]
Anthropic sues Pentagon, Trump administration over "supply chain risk" designation
Washington -- Anthropic sued the Defense Department and other federal agencies on Monday over the Trump administration's move to designate it a supply chain risk and eliminate its use across the government, the latest chapter in a bitter dispute over the firm's powerful artificial intelligence model. In a 48-page lawsuit filed in the U.S. District Court for the Northern District of California, Anthropic said efforts by the Pentagon and President Trump to punish the company were "unprecedented and unlawful." "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation," the filing said. The company filed a separate, narrower suit asking the U.S. Court of Appeals for the District of Columbia to review the Pentagon's determination that it poses a risk to the supply chain. Federal law gives that court jurisdiction to review the finding. The dispute stems from guardrails that Anthropic sought to impose on the military's use of Claude, the only AI model authorized for use on classified networks. The company sought assurances from the Pentagon that Claude would not be used for mass surveillance of U.S. citizens or to power lethal autonomous weapons. The Pentagon insisted that Claude be available for "all lawful use." The two sides failed to resolve the conflict before a deadline of Feb. 27. Mr. Trump announced that he was ordering all federal agencies "to IMMEDIATELY CEASE all use of Anthropic's technology." Defense Secretary Pete Hegseth said Anthropic would be designated a supply chain risk and cut off from defense contracts, phasing out the technology over the course of six months. The Pentagon has continued to use Claude during the U.S. and Israeli war with Iran, CBS News reported last week. 
Hegseth formally issued the supply chain risk designation last week. Anthropic's lawsuit asks the court to block Hegseth's order and declare it as "arbitrary, capricious, an abuse of discretion, and contrary to law." The company also asked the court to find that the president did not have the authority to order the rest of the government to cut ties with Anthropic. "Anthropic's contracts with the federal government are already being canceled. Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term," the company's lawsuit said. "On top of those immediate economic harms, Anthropic's reputation and core First Amendment freedoms are under attack. Absent judicial relief, those harms will only compound in the weeks and months ahead." The lawsuit accused the administration of "seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation." "The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance," the suit continued. "There is no valid justification for the Challenged Actions. The Court should declare them unlawful and enjoin Defendants from taking any steps to implement them." A Pentagon spokesperson declined to comment on pending litigation. In a statement, White House spokeswoman Liz Huston said the president "will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates." 
"Under the Trump Administration, our military will obey the United States Constitution -- not any woke AI company's terms of service," she said.
[55]
Anthropic clash with Pentagon fuels government surveillance fears
Anthropic's clash with the Pentagon is reigniting fears of government surveillance, as experts warn the capabilities of artificial intelligence, paired with the Trump administration's sweeping data collections, pose new threats to individual privacy. Just over a year after President Trump welcomed AI firms into government, the White House's unprecedented reach for personal data has left some technology leaders at odds with the administration. Anthropic and the Department of Defense (DOD) butted heads over the extent to which the company's AI tools could be used to conduct surveillance and compile information about U.S. citizens and residents -- a red line for the company's CEO, Dario Amodei. The dispute cost Anthropic its government contract and spurred a legal battle over the company's designation as a national security threat. "Frontier AI fundamentally changes the surveillance calculus," David Bader, a professor at the New Jersey Institute of Technology, told The Hill. "Analyzing billions of data points to build profiles on millions of Americans used to be computationally impractical, but now it's trivial with AI, and the law hasn't caught up to that reality." From the start of negotiations, Amodei said AI-driven mass surveillance is "incompatible" with democratic values, warning it presents "serious, novel risks to our fundamental liberties." Anthropic, which has worked with the Pentagon as a subcontractor of the data analytics firm Palantir since 2024, pressed for specific restrictions on mass domestic surveillance, suggesting some uses are "outside the bounds" of what current technology can do "safely and reliably." The DOD insisted on using an "all lawful purposes" standard, and its leaders alleged Anthropic sought to "personally control" the U.S. military and jeopardize national security.
After the two sides failed to come to an agreement, President Trump ordered federal agencies to stop using Anthropic products and Defense Secretary Pete Hegseth issued a rare supply chain risk designation for the company. Oliver Stephenson, the associate director for AI and Emerging Technology Policy at the think tank Federation of American Scientists, explained that the data collected by the government can be fed into AI tools to produce "incredibly detailed inferences about people." He pointed to recent research showing how large language models can be used to identify the authors of purportedly anonymous online posts, "matching what would take hours for a dedicated human investigator." "It's not just data that's showing anonymous patterns of life," Stephenson added. "We have transitioned from a world in which the limitation used to be on collection, and is now on analysis capabilities." Today, the government can buy bulk publicly available data on individuals from commercial data brokers. The information available spans location histories, purchase history, demographic details and more. Civil liberties groups have long argued the practice violates the Fourth Amendment, while the government has argued in the past that it does not break any laws. Much of the publicly reported surveillance has often involved intelligence agencies like the National Security Agency and the Department of Homeland Security, not necessarily the broader Pentagon. Emil Michael, undersecretary of Defense for research and engineering, pushed back on these concerns regarding domestic surveillance, telling the tech publication Pirate Wires that the Pentagon is "not the FBI" or "DHS." "The notion that we would get painted as wanting to do that is crazy," Michael reportedly said. "We don't want censorship, we don't want people's privacy intruded." Michael's comments allude to the worst-case scenario fears the public often holds of so-called government "spying."
OpenAI, the maker of the popular ChatGPT chatbot, reached a deal with the Pentagon just hours after the agency's talks with Anthropic fell apart earlier this month. OpenAI faced immediate backlash, and days later CEO Sam Altman announced an amended version of the deal with new assurances, including a declaration that "AI systems shall not be intentionally used for domestic surveillance of U.S. persons and nationals." This included a stipulation that OpenAI's services will not be used by the department's intelligence agencies, including the NSA. Altman was flooded with requests on social media to share the exact terms of the contract as a way to "regain" the trust users felt was broken by OpenAI's fast deal. "What we're seeing right now is the culmination of decades of cultivated mistrust on the part of both Big Tech and the government," Matthew Guariglia, a senior policy analyst at the Electronic Frontier Foundation, told The Hill, adding, "In a more perfect world, you have entities that you trust with your data." The wavering trust in the government comes as the White House faces separate scrutiny over how agencies share personal data about U.S. residents and deploy new technologies to carry out its immigration enforcement efforts. Much of this data swapping has occurred with Immigration and Customs Enforcement, which is receiving taxpayer data from the Internal Revenue Service, Medicaid data from the Department of Health and Human Services and passenger information from the Transportation Security Administration. At Customs and Border Protection, the agency purchased data from the online advertising industry, 404 Media reported earlier this month. Guariglia, whose research focuses on surveillance policing, noted the full breadth of how technology could be used for surveillance is not yet known.
"People need to look at the laying of infrastructure...use cases tend to come later, but what we need to see is what the government is actually physically capable of right now," he said. "And they are more than capable of scraping the entire internet, siphoning data from many different branches of government [and] buying commercially available data from data brokers." Owen Daniels, associate director of analysis and Andrew W. Marshall fellow at Georgetown University's Center for Security and Emerging Technology, argued it "seems unlikely" the Pentagon is looking to use Anthropic for mass surveillance given restrictions in U.S. law. Still, Daniels said the situation speaks to a larger concern over the ability of AI tools to aggregate data, "whether for intelligence collection abroad or to build profiles of consumers at home." The saga between Anthropic and the Pentagon is renewing calls for Congress to do more about government surveillance and AI regulation as a whole. "This isn't just about whether you trust today's military leadership, it's about building durable technical and legal frameworks for AI," Bader said. "Capabilities will only grow more powerful." While the Trump administration has cast the issue as political, Bader said there is a legitimate policy debate to be had over how AI should be deployed and where responsibility sits for ensuring legal and responsible use. Amodei similarly said he believes it is "Congress's job" to address how AI is changing the possibilities of domestic mass surveillance. "The judicial interpretation of the Fourth Amendment has not caught up. Or the laws passed by Congress have not caught up," Amodei told CBS News earlier this month. "So in the long run, we think Congress should catch up with where the technology is going." Most major attempts to regulate or add guardrails to AI have made little progress in Congress, let alone on surveillance-specific bills.
Guariglia pointed out the House overwhelmingly passed the Fourth Amendment Is Not For Sale Act in 2024, but it did not move in the Senate. The bipartisan bill would limit how the government can purchase data from third parties and require law enforcement and other government entities to get a warrant before buying information from third-party data brokers. "The Senate could take that up tomorrow and close one of the biggest loopholes that's allowing warrantless surveillance in the United States," Guariglia said. Anthropic filed two lawsuits in federal courts Monday over the supply chain risk designation, warning of the "enormous" consequences of the case. The AI firm alleged the federal government "retaliated" against it for its protected viewpoint.
[56]
Factbox-Anthropic's Legal Claims Against Trump's Blanket Government Ban on AI Startup
March 9 (Reuters) - Anthropic filed a lawsuit on Monday to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence lab's high-stakes battle with the U.S. military over usage restrictions on its technology. The AI startup's lawsuit against the U.S. government, President Donald Trump and Defense Secretary Pete Hegseth, among others, makes the following claims:
FIRST AMENDMENT VIOLATION
The startup claimed that the Pentagon retaliated against Anthropic for protected activities in violation of the First Amendment, which ensures free speech rights. Anthropic said in its lawsuit that the Constitution confers upon it "the right to express its views -- both publicly and to the government -- about the limitations of its own AI services and important issues of AI safety." The lawsuit claims that the U.S. government's blacklisting of the company constitutes retaliation against Anthropic's expressive activities, including protected speech, viewpoints and petitioning of the government.
PRESIDENTIAL ACTION BEYOND LEGAL AUTHORITY
Anthropic claimed that President Donald Trump directing the U.S. government to stop work with Anthropic -- announced in a post on his social media platform Truth Social late last month -- was "ultra vires," or went beyond his legal power and authority.
FIFTH AMENDMENT VIOLATIONS
Anthropic alleged the U.S. government violated its Fifth Amendment rights to due process by effectively blacklisting the company without following required legal protocols. According to the lawsuit, the government bypassed mandatory legal procedures by terminating contracts and blocking future work without providing prior notice or a meaningful opportunity to respond.
ADMINISTRATIVE PROCEDURE ACT VIOLATION
Anthropic alleged that the DOD designating the company as a supply-chain risk, and prohibiting the department's contractors, suppliers and partners from conducting any commercial activity with it, violates the Administrative Procedure Act. The APA sets out procedures agencies must follow when making decisions and permits courts to override agency actions that are arbitrary, an abuse of discretion or otherwise unlawful. Anthropic said Defense Secretary Pete Hegseth's decision to deem the company a supply-chain risk overstepped his authority, did not follow proper legal procedures and lacked supporting evidence. (Reporting by Arsheeya Bajwa and Anhata Rooprai in Bengaluru; Editing by Alan Barona)
[57]
Anthropic Sues Hegseth's Pentagon, Alleging Retaliation Over National Security Label
The dispute began after Anthropic refused to give the Department of Defense unrestricted access to its AI systems. CEO Dario Amodei has drawn firm red lines around how the company's technology can be used, saying Anthropic will not allow its models to power mass surveillance of Americans or lethal autonomous weapons.
The 'Red Lines' Behind the Dispute
Defense Secretary Pete Hegseth pushed back forcefully. In negotiations with the company, he demanded the Pentagon receive access to Anthropic's AI models for "all lawful purposes," arguing the military cannot allow private companies to dictate how critical technology is deployed during a national security emergency. When Anthropic refused to remove its restrictions, the Trump administration moved quickly. On February 27, federal agencies and defense contractors were ordered to halt business with the company. The same day, Hegseth warned Anthropic could be designated a supply chain risk, a classification typically reserved for companies tied to foreign adversaries.
[58]
What the Anthropic Lawsuit Means for the Future of AI in Warfare
Tension has been building between the Trump administration and the tech firm Anthropic amid a heated debate over how and when the government should use its AI in war. The company and the government are at an impasse: the Department of Defense wants carte blanche on how it can use Claude, Anthropic's large language model, at home and on the battlefield; Anthropic says it doesn't think Claude is ready to be used for mass surveillance of Americans or to develop fully autonomous weapons. When Anthropic refused to give in on these usages, President Donald Trump and Secretary of Defense Pete Hegseth responded by restricting its use within federal agencies, ultimately labeling Anthropic a supply chain risk to national security. The move was extreme, to say the least: The designation bans Anthropic from working on government contracts, marking the first time an American-owned company has been given the label, a designation intended to address concerns that adversaries of the United States may be maliciously introducing vulnerabilities into U.S. military systems. In the past, it has been reserved for Chinese and Russian companies suspected of espionage or sabotage. On Monday, Anthropic filed two lawsuits against several federal agencies, including the Department of Defense and the Executive Office of the President, claiming that the government's actions are "unprecedented and unlawful," and arguing that the designation was retaliation following the breakdown of negotiations. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic's attorneys argue in the lawsuit. Experts were similarly surprised by the government's actions.
"The way that the statute defines supply chain risk is that it must be an adversary that can end up compromising or introducing a malicious function in defense and national security systems," says Amos Toh, a senior counsel at the Brennan Center's Liberty and National Security program who is not connected to the Anthropic case. "In this case, there is no evidence that by including Claude in your defense and military systems, adversaries will seek to compromise it. If anything, the opposite argument could be made: The fact Anthropic is drawing these red lines and not letting DoD use this technology when it's not ready for prime time, increases the safety of the system." And the problem goes far beyond one company's relationship with the government. "There's an extraordinarily deep and important issue lurking somewhere here, which is, what should the relationship be between the government and the AI industry?" says University of Minnesota Law School professor Alan Rozenshtein, who is also not involved with the lawsuit. "Should the military ever enter into contracts where the private company purports to limit what the military can do beyond just obeying the laws?" In July 2025, the Trump administration released its AI Action Plan, announcing the government's intent to expand its use of AI technology. That same month, Anthropic signed a deal with the Pentagon to incorporate Claude into military operations. The government has used Claude in the raid and capture of Venezuelan dictator Nicolas Maduro, and in its current attacks on Iran. At the end of February, after months of negotiations, conversations to update the contract between Anthropic and the Dept. of Defense stalled when Anthropic reiterated that Claude was not yet ready to be used for lethal autonomous warfare or the mass surveillance of Americans. The Pentagon argued it wanted full control over how Claude was used, so long as the uses were "lawful." Toh explains that the word lawful is important.
"What the Defense Department is essentially doing is exploiting legal gray areas around its ability to conduct surveillance on Americans and to develop and field fully autonomous weapons," says Toh. "To push for uses of AI models in ways that are either constitutionally suspect or kind of dubious under its obligations under the laws of war." Meaning, the Supreme Court has not directly grappled with what constitutional protections Americans have when the government uses AI. Because AI regulations and laws have not been able to keep up with advances in technology, something that is technically lawful today could still exist in a moral and ethical gray area. Experts argue these are complicated challenges that require deeper consideration, and would benefit from an informed Congress weighing in. For mass surveillance, Toh explains, we have seen scenarios where law enforcement, intelligence agencies, and the military are used for joint missions, like in immigration enforcement. One of the red lines Anthropic drew was not allowing Claude to be used to analyze bulk commercial data sets that contain information about Americans. Toh says that while this is technically legal, an AI model like Claude can put together seemingly unrelated data points that would provide extremely sensitive insights into people's lives, habits, and movements that could be used to surveil or target them. "You can see scenarios where the military may be tasked to perform certain types of surveillance in immigration enforcement activities and that's a situation in which the use of models like Claude supercharges the potential for abuse." When it comes to weapons, Toh adds, Anthropic is not saying they are opposed to autonomous weapons, they are saying they are opposed to the use of their current model in developing fully autonomous weapons because the model is not there yet in terms of the technology. 
"We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making -- that is the role of the military," Anthropic CEO Dario Amodei said in a statement on March 5. "Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making." In response to Anthropic's refusal to back down on these two specific red lines, Secretary of Defense Pete Hegseth threatened two seemingly contradictory actions against Anthropic: Hegseth first said he may invoke a law called the Defense Production Act, which would require Anthropic to sell Claude to the Dept. of Defense without the restrictions that Anthropic wanted. Then, he said he'd deem Anthropic a "supply chain risk," which was followed by a Truth Social post by President Donald Trump expanding the threat, saying he was directing every federal agency to "immediately cease all use of Anthropic's technology." The president added that there would be a six month phase out period for agencies like the Department of Defense who are using Anthropic's products. For example, the government is currently using Anthropic's Claude in the military operation against Iran. On March 4, Anthropic was officially labeled as a supply chain risk, in a more narrow reading than Trump's Truth Social post. Rozenshtein explains how Hegseth's initial threat could have decimated Anthropic. "The DOD supply chain designation only applies to DOD contracts, but Hegseth was also saying no one who does business with the DOD can do any business with Anthropic at all, which would have ended Anthropic as a company," he says. The latter is not currently on the table, and if Trump follows through on his threat to stop Anthropic from selling to anyone in the government, Rozenshtein says that this is still only a fraction of Anthropic's overall business. 
"But the vibes are bad," says Rozenshtein. "It's bad to walk around being designated a supply chain risk by the U.S. government." Anthropic filed lawsuits in California and Washington D.C. making multiple arguments against both the Department of Defense's actions and Trump's threats. "The first argument they're making is that this supply chain issue makes no sense, this is not what the law was for," says Rozenshtein. "The law was for foreign companies who are trying to smuggle in threats, not a U.S. company that has a contract dispute." "You can't simultaneously say, 'We're going to force you to sell to us and we're going to use you to bomb Iran and you're really scary,'" he adds. "This seems very obvious to me and I think they will just win on these grounds." Anthropic's suit is also alleging that the government is retaliating against them and infringing on their First Amendment rights, and that they're not getting due process. The last part of their lawsuit specifically refers to Trump. "They're saying Trump can't just do this, you can't ban an entire company for no reason from the entire federal government," adds Rozenshtein. "Congress wrote a whole, very elaborate procurement system and there are rules for this." He sums the lawsuit up: "If the government can't come to some agreement with a company, do they shake hands and walk away like gentlemen? Or do they burn the company to the ground?" As Anthropic and the Pentagon continue their back and forth, the reality is that the government is still actively using Claude, and Anthropic is still in dialogue with them about the terms of their contract. "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners," Anthropic said in a statement to Rolling Stone. "We will continue to pursue every path toward resolution, including dialogue with the government."
This discourse has reinvigorated calls for stronger AI regulation and for Congress to weigh in, rather than leaving massive ethical decisions and red lines to be drawn by private companies over which the public has little influence. "These are consequential policymaking decisions that have life and death implications," says Toh. "You can't just leave it up to the military to decide which kinds of weapons satisfy the U.S.'s obligations under the law of war, that is something Congress should investigate and set restrictions on." And, Toh adds, the First Amendment implications in this entire debate are extremely serious, whether we are talking about weapons, mass surveillance, or what text is output by an AI model. "If even the threat of canceling government contracts could be used to force companies to align their technology in ways that prioritize certain facts over others, or characterize things in a certain way, or suppress content, that has a serious impact on our access to information," says Toh. "What we are seeing with the DOD and Anthropic is that kind of pressure tactic." Rozenshtein says that while the feud between Anthropic and the government might be interesting to look at, he hopes people don't lose sight of the bigger, underlying issues surrounding our relationship with technology, who is making these decisions, and how it impacts all of our lives. "These are going to be the main issues that we're gonna be debating for the next several years," he says. "They are really complicated."
[59]
Anthropic sues to block Pentagon blacklisting over AI use restrictions - The Economic Times
Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence lab's high-stakes battle with the U.S. military over usage restrictions on its technology. The Pentagon on Thursday slapped a formal supply-chain risk designation on Anthropic, limiting use of a technology that a source said was being used for military operations in Iran. Anthropic said in its lawsuit that the designation was unlawful and violated its free speech and due process rights. The filing in federal court in California asked a judge to undo the designation and block federal agencies from enforcing it. "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said. Defense Secretary Pete Hegseth designated Anthropic a national security supply-chain risk last week after the startup refused to remove guardrails against using its AI for autonomous weapons or domestic surveillance. The designation poses a big threat to Anthropic's business with the government, and the outcome could shape how other AI companies negotiate restrictions on military use of their technology, though the company's CEO Dario Amodei clarified on Thursday that the designation had "a narrow scope" and businesses could still use its tools in projects unrelated to the Pentagon. President Donald Trump has also directed the government to stop working with Anthropic, whose financial backers include Alphabet's Google and Amazon.com. Trump and Hegseth said there would be a six-month phase-out. Reuters has reported that Anthropic's investors were racing to contain the damage caused by the fallout with the Pentagon.
Trump and Hegseth's actions on February 27 came after months of talks with Anthropic over whether the company's policies could constrain military action and shortly after Amodei met with Hegseth in hopes of reaching a deal. The Pentagon said U.S. law, not a private company, would determine how to defend the country and insisted on having full flexibility in using AI for "any lawful use," asserting that Anthropic's restrictions could endanger American lives. Anthropic said even the best AI models were not reliable enough for fully autonomous weapons and that using them for that purpose would be dangerous. The company also drew a red line on domestic surveillance of Americans, calling that a violation of fundamental rights. After Hegseth's announcement, Anthropic said in a statement that the designation would be legally unsound and set a dangerous precedent for companies that negotiate with the government. The company said it would not be swayed by "intimidation or punishment," and on Thursday Amodei reiterated that Anthropic would challenge the designation in court. He also apologized for an internal memo published on Wednesday by tech news site The Information. In the memo, which was written last Friday, Amodei said Pentagon officials did not like the company in part because "we haven't given dictator-style praise to Trump." The Defense Department signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. Microsoft-backed OpenAI announced a deal to use its technology in the Defense Department network shortly after Hegseth moved to blacklist Anthropic. CEO Sam Altman said the Pentagon shared OpenAI's principles of ensuring human oversight of weapon systems and opposing mass U.S. surveillance.
[60]
Anthropic Sues To Block Pentagon Blacklisting Over AI Use Restrictions
Anthropic said in its lawsuit that the designation was unlawful and violated its free speech and due process rights. The filing in federal court in California asked a judge to undo the designation and block federal agencies from enforcing it. "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said. Trump and Hegseth's actions on February 27 came after months of talks with Anthropic over whether the company's policies could constrain military action and shortly after Amodei met with Hegseth in hopes of reaching a deal. The Pentagon said U.S. law, not a private company, would determine how to defend the country and insisted on having full flexibility in using AI for "any lawful use," asserting that Anthropic's restrictions could endanger American lives. Anthropic said even the best AI models were not reliable enough for fully autonomous weapons and that using them for that purpose would be dangerous. The company also drew a red line on domestic surveillance of Americans, calling that a violation of fundamental rights. After Hegseth's announcement, Anthropic said in a statement that the designation would be legally unsound and set a dangerous precedent for companies that negotiate with the government. The company said it would not be swayed by "intimidation or punishment," and on Thursday Amodei reiterated that Anthropic would challenge the designation in court. He also apologized for an internal memo published on Wednesday by tech news site The Information. In the memo, which was written last Friday, Amodei said Pentagon officials did not like the company in part because "we haven't given dictator-style praise to Trump." (Reporting by Jack Queen in New York; Editing by Noeleen Walder, Lisa Shumaker, Daniel Wallis and Nick Zieminski)
[61]
Anthropic sues Trump administration over supply chain designation
Artificial intelligence company Anthropic filed a lawsuit against the Trump administration Monday challenging the Pentagon's decision to label the company and its products as a "supply chain risk" after negotiations over safety guardrails fell apart earlier this month. The suit, filed in federal court in California Monday, argues the designation as well as President Trump's order for all federal agencies to cease the use of Anthropic are "unprecedented and unlawful." The AI firm asked the court to reverse the Pentagon's decision, warning the "consequences of this case are enormous." The supply chain risk designation has typically been reserved for foreign adversaries and restricts defense contractors from using the company's products. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the suit states. "No federal statute authorizes the actions taken here." Anthropic has turned to the court as a "last resort," the company's lawyers said, alleging the federal government "retaliated" against the firm for "adhering to its protected viewpoint on a subject of great public significance -- AI safety and the limitations of its own AI models." The lawyers also accused the Trump administration of trying to "destroy" the economic value of Anthropic. Anthropic, which was founded with a particular focus on safety, has provided the Pentagon and intelligence agencies with its technology since late 2024 through a partnership with Palantir. The company has sought to set itself apart from AI competitors, calling for transparency and basic guardrails on the technology's development. During negotiations with the Department of Defense (DOD), Anthropic set two red lines on domestic mass surveillance and autonomous weapons, arguing AI is not reliable enough to make life-or-death decisions while changing what is possible with government surveillance. 
The Pentagon rejected Anthropic's argument and insisted it should be able to use AI for whatever it deems to fall under "all lawful purposes." Negotiations fell apart earlier this month and Anthropic CEO Dario Amodei said the company could not "in good conscience" agree to the Pentagon's terms. Pentagon Secretary Pete Hegseth notified the company last week of the designation while Trump ordered all federal agencies to immediately stop using Anthropic products, including its flagship Claude models. The DOD is named in the suit, along with numerous federal agencies and the executive office of the president. The Hill reached out to the White House, Pentagon and the General Services Administration for comment. The DOD said it does not comment on litigation as a matter of policy.
[62]
Anthropic Sues Trump Administration Seeking to Undo "Supply Chain Risk" Designation
Anthropic is suing the Trump administration, asking federal courts to reverse the Pentagon's decision designating the artificial intelligence company a "supply chain risk" over its refusal to allow unrestricted military use of its technology. Anthropic filed two separate lawsuits Monday, one in California federal court and another in the federal appeals court in Washington, D.C., each challenging different aspects of the Pentagon's actions against the company. The Pentagon last week formally designated the San Francisco tech company a supply chain risk after an unusually public dispute over how its AI chatbot Claude could be used in warfare. The lawsuits aim to undo the designation and block its enforcement.
[63]
The Pentagon Designated Anthropic as a 'Supply Chain Risk.' Here's What the Label Actually Means
The move represented a major escalation in the negotiations between Anthropic and the Pentagon regarding the military's use of Claude, the company's family of AI models (and Inc.'s Co-Founder of The Year). But Anthropic now says this designation actually has a very narrow scope, and will only affect companies that use Claude in their specific dealings with the Department of War. Anthropic is now the first American company to ever be designated a supply chain risk. The designation is typically reserved for foreign companies, usually ones that have close relationships with their home country's government. For example, in 2020, the Federal Communications Commission designated Huawei as a supply-chain risk due to its ties to the Chinese government. That designation was made in part to prevent Huawei's equipment from being used in the building of the United States' 5G cellular infrastructure. According to federal law, being designated a supply chain risk by the U.S. military means that military leadership has determined that an "adversary" could potentially "sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance" of a national security system in order to "surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system."
[64]
Anthropic sues to block Pentagon blacklisting over AI use restrictions
Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence developer's high-stakes battle with the U.S. military over usage restrictions on its technology. Anthropic said in its lawsuit that the designation was unlawful and violated its free speech and due process rights. The filing in federal court in California asked a judge to undo the designation and block federal agencies from enforcing it. "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said.
[65]
Anthropic Sues US Government: 'Claude AI Is Not For Autonomous Warfare'
Artificial intelligence company Anthropic filed a lawsuit against the U.S. Department of Defense, escalating its ongoing clash with the Trump administration. The Pentagon formally notified Anthropic last week that its AI products pose a risk to the U.S. supply chain. Anthropic CEO Dario Amodei clapped back on Feb. 27, saying the company would "challenge any supply chain risk designation in court." Anthropic argues that Trump's directive is unconstitutional, and the company is now asking the court to block federal agencies from taking punitive action against it. Anthropic noted that its usage policy states that Claude should not be used for "lethal autonomous warfare" or for the "surveillance of Americans en masse," as it does not believe Claude is capable of functioning reliably or safely if it were to be used for those purposes. Despite this, the Defense Department under Secretary Pete Hegseth began "demanding" that Anthropic discard its usage restrictions altogether and replace them with a general policy, which would cover "all lawful use," the lawsuit states. Anthropic noted that the most capable AI systems should also be the safest and most responsible. The company claims it is bringing this lawsuit because "the federal government has retaliated against it for expressing that principle." "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation," the lawsuit states. Anthropic Claims US Government Violated Its Rights The lawsuit also outlined several distinct grounds on which Anthropic alleges that the U.S. government's actions unlawfully infringe upon the company's rights. 
According to the complaint: Anthropic's Claude Gov platform was the only AI system capable of running in the Pentagon's classified cloud. The platform has been gaining popularity in the defense department, given its user-friendly design. Wedbush analyst Dan Ives noted in a white paper entitled "Disruptive Technology" that this lawsuit will have "a ripple impact for technology partners and customers" depending on how this "soap opera" proceeds. "In a nutshell, Claude is a major player in the AI Revolution and US Tech sector, and this 'supply chain designation risk' needs to be resolved quickly for the tech sector and ongoing enterprise deployments/ pilots," Ives wrote. Ives cited technology disruption, economic weakness, and competition as risks to its price targets and ratings.
[66]
Anthropic sues the Pentagon after blacklisting
Anthropic, the US-based artificial intelligence company, has filed a federal lawsuit in California to block the Pentagon from placing it on a national security supply-chain blacklist. The company claims the designation is unlawful and violates its free speech and due process rights. As they state (via Reuters): "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." The dispute began after Anthropic refused to remove guardrails that prevent its AI models from being used for autonomous weapons or domestic surveillance. Defense Secretary Pete Hegseth designated Anthropic a national security risk. President Donald Trump also directed a six-month phase-out of Anthropic from government contracts. CEO Dario Amodei emphasized that Anthropic's AI is not reliable enough for autonomous weapons and warned that using it for domestic surveillance would violate fundamental rights. The outcome of the lawsuit could set a precedent for how AI companies negotiate restrictions on military applications of their technology. We'll have to wait and see how the situation unfolds...
[67]
Why the US government labelled Anthropic a 'supply chain risk': A timeline
The Department of War has demanded that Anthropic's AI models be made available to it for "any lawful use." Anthropic has, however, refused to cross its strict ethical red lines -- no fully autonomous weapons and no mass domestic surveillance. Anthropic has drawn headlines over its recent standoff with the US Department of Defense, pushing the artificial intelligence (AI) company into an increasingly difficult position. The US government has designated Anthropic a "supply chain risk," a move that could restrict federal contractors from working with the firm. The designation appears to have halted recent discussions between Anthropic and the Pentagon on how its AI models could be deployed in defence settings. Anthropic's flagship model, Claude, is used across US national security agencies for applications such as intelligence analysis, modelling and simulation, operational planning, and cyber operations. The Department of War is demanding that AI models be available for "any lawful use," while Anthropic refuses to cross its strict ethical red lines, specifically, no fully autonomous weapons and no mass domestic surveillance. March 6, 2026: The US government has formally designated AI company Anthropic a "supply chain risk," a move that could prevent federal contractors from working with the firm. Microsoft, however, said it would continue deploying Anthropic's AI models in its products despite the company's dispute with the Pentagon, as would Anthropic's investors, such as Amazon and Nvidia. Anthropic in response said it would challenge the "supply chain risk" designation, calling it a legally unsound action never before applied to an American company. The Pentagon, meanwhile, has begun preparing to transition its AI services to other providers within six months. Amodei later told The Economist that the episode had been one of the most "disorienting" periods in the company's history while apologising for "the way he handled a recent crisis." 
[68]
Anthropic Sues US Government Over Supply Chain Risk Designation | PYMNTS.com
The company filed the lawsuit in federal court in California, arguing that the designation violates its rights to free speech and due process and asking the court to reverse the designation and block the federal government from enforcing it, according to the report. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in its lawsuit, per the report. The Wall Street Journal reported Monday that Anthropic's lawsuit names the Department of War -- the Trump administration's preferred name for the department -- Defense Secretary Pete Hegseth, several federal agencies and other administration officials as defendants. An Anthropic spokeswoman told the WSJ: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers and our partners. We will continue to pursue every path toward resolution, including dialogue with the government." Reached by PYMNTS, the Department of Defense declined to comment on the report, saying its policy is to not comment on litigation. The White House told federal agencies Feb. 27 to stop using Anthropic's AI products after the company refused a Pentagon demand that Anthropic agree that the military can use its models in "all lawful use cases." Anthropic wanted contract language that would prohibit use of its models for autonomous weapons and mass domestic surveillance, while Pentagon officials took the position that the military should retain final authority on use cases, as long as they are lawful. President Donald Trump said federal agencies using Anthropic's models will get a six-month phaseout period. 
Hegseth said of Anthropic in a Feb. 27 post on X: "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable." Anthropic said Feb. 27 that it was planning legal action after the Pentagon designated the company a security risk. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," Anthropic wrote in a blog post. "We will challenge any supply chain risk designation in court."
[69]
Tech giants say Anthropic tools will remain available for non-defense work
Three major tech companies -- Microsoft, Google and Amazon -- have said Anthropic's AI tools will remain available on their platforms for work that does not involve the Pentagon after the company was labeled a supply chain risk. A Google spokesperson said in a statement Friday that they "understand that the Determination does not preclude us from working with Anthropic on non-defense related projects, and their products remain available through our platforms, like Google Cloud." Customers and partners with Amazon's cloud computing arm, Amazon Web Services (AWS), can also continue to use Anthropic's Claude for work not associated with the Defense Department, a company spokesperson said Friday. "For all DoW workloads which use Anthropic technologies, we are supporting customers and partners as they transition to alternatives running on AWS," they added. Their assessments line up with Microsoft, which said Thursday that its lawyers had "concluded that Anthropic products, including Claude, can remain available to our customers -- other than the Department of War" and that it can continue working with the company on non-defense projects. Defense Secretary Pete Hegseth announced last Friday that he was designating Anthropic a supply chain risk after negotiations broke down over the terms of service for the company's AI models. The Pentagon officially notified Anthropic of the designation Wednesday. The AI firm's CEO, Dario Amodei, said Thursday that it would challenge the label in court, arguing the move is not "legally sound." While Hegseth initially suggested that this would bar any company that does business with the military from doing business with Anthropic, Amodei noted that the language the Pentagon used in its formal notification matches the firm's view that "the vast majority of our customers are unaffected by a supply chain risk designation." 
"With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," the Anthropic CEO said. Amodei also said the company has "been having productive conversations" with the Pentagon in recent days and offered an apology for an internal memo critical of the Trump administration that was leaked to the media. "Anthropic did not leak this post nor direct anyone else to do so -- it is not in our interest to escalate this situation," he said. "It was a difficult day for the company, and I apologize for the tone of the post," he added. "It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation." Despite Amodei's reference to conversations, Emil Michael, under secretary of Defense for research and engineering, said late Thursday that there was no "active" negotiation between the Pentagon and the company.
[70]
Anthropic Sues to Block Pentagon Blacklisting Over AI Use Restrictions
NEW YORK, March 9 (Reuters) - Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence lab's high-stakes battle with the U.S. military over usage restrictions on its technology. The Pentagon on Thursday slapped a formal supply-chain risk designation on Anthropic, limiting use of a technology that a source said was being used for military operations in Iran. Anthropic said in its lawsuit that the designation was unlawful and violated its free speech and due process rights. The filing in federal court in California asked a judge to undo the designation and block federal agencies from enforcing it. "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said. Defense Secretary Pete Hegseth designated Anthropic a national security supply-chain risk last week after the startup refused to remove guardrails against using its AI for autonomous weapons or domestic surveillance. The designation poses a big threat to Anthropic's business with the government, and the outcome could shape how other AI companies negotiate restrictions on military use of their technology, though the company's CEO Dario Amodei clarified on Thursday that the designation had "a narrow scope" and businesses could still use its tools in projects unrelated to the Pentagon. President Donald Trump has also directed the government to stop working with Anthropic, whose financial backers include Alphabet's Google and Amazon.com. Trump and Hegseth said there would be a six-month phase-out. Reuters has reported that Anthropic's investors were racing to contain the damage caused by the fallout with the Pentagon. 
Trump and Hegseth's actions on February 27 came after months of talks with Anthropic over whether the company's policies could constrain military action and shortly after Amodei met with Hegseth in hopes of reaching a deal. The Pentagon said U.S. law, not a private company, would determine how to defend the country and insisted on having full flexibility in using AI for "any lawful use," asserting that Anthropic's restrictions could endanger American lives. Anthropic said even the best AI models were not reliable enough for fully autonomous weapons and that using them for that purpose would be dangerous. The company also drew a red line on domestic surveillance of Americans, calling that a violation of fundamental rights. After Hegseth's announcement, Anthropic said in a statement that the designation would be legally unsound and set a dangerous precedent for companies that negotiate with the government. The company said it would not be swayed by "intimidation or punishment," and on Thursday Amodei reiterated that Anthropic would challenge the designation in court. He also apologized for an internal memo published on Wednesday by tech news site The Information. In the memo, which was written last Friday, Amodei said Pentagon officials did not like the company in part because "we haven't given dictator-style praise to Trump." The Defense Department signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. Microsoft-backed OpenAI announced a deal to use its technology in the Defense Department network shortly after Hegseth moved to blacklist Anthropic. CEO Sam Altman said the Pentagon shared OpenAI's principles of ensuring human oversight of weapon systems and opposing mass U.S. surveillance. (Reporting by Jack Queen in New York; Editing by Noeleen Walder, Lisa Shumaker, Daniel Wallis and Nick Zieminski)
[71]
Amazon, Google And Microsoft Keep Anthropic AI For Clients Despite Pentagon Risk Label - Amazon.com (NASDAQ:AMZN), Alphabet (NASDAQ:GOOG)
Cloud Giants Shield Commercial Revenue Amid Pentagon Fallout CNBC reported that Amazon made its decision on Friday; Google and Microsoft (NASDAQ:MSFT) confirmed theirs to TechCrunch. It follows the Pentagon's formal designation, which requires defense vendors to certify that they are not using Anthropic's chatbot Claude in Department of Defense work. The three companies are the leading providers of cloud infrastructure. Since 2023, Amazon has invested $8 billion in Anthropic, whose Claude AI runs on AWS Bedrock. Amazon, Google, and Microsoft did not immediately respond to Benzinga's requests for comment. Alphabet, the parent company of the search giant, holds a $3 billion stake, hosts Claude on Vertex AI, and has expanded its partnership with Anthropic by providing access to up to 1 million of Google's custom tensor processing units (TPUs). Federal Ban Contradicted by Continued Military Use After Anthropic refused to agree to the DOD's requested terms of use, President Donald Trump instructed federal agencies to cease using the company's technology. Despite this, Anthropic's models were used by the U.S. in its latest attack on Iran. Earlier, Anthropic CEO Dario Amodei stated that the company has "no choice" but to challenge the supply chain risk designation in court.
[72]
Defense contractors move to drop Anthropic AI after Trump ban
The administration last week ordered a six-month phase-out of Anthropic's technology across federal agencies. Defense Secretary Pete Hegseth went further, stating that contractors, suppliers or partners doing business with the US military could not engage in commercial activity with Anthropic, citing national security concerns. Legal experts have questioned whether the Pentagon has the authority to impose such a sweeping restriction on private contractors, suggesting the move could face court challenges. Anthropic has said it will contest the ban, arguing that the Defense Department lacks statutory authority to bar contractors from using its tools outside government work. Despite the uncertain legal footing, government contracting attorneys say firms dependent on Pentagon contracts are likely to comply swiftly to avoid jeopardizing access to the federal government's trillion-dollar annual budget. Lockheed Martin said it would follow presidential and Defense Department directives and expects minimal operational impact, noting it does not rely on any single AI vendor...
[73]
Trump Administration Orders Defense Contractors to Remove Anthropic's AI
US defense contractors, like Lockheed Martin, are expected to follow the Pentagon's order to purge Anthropic's prized AI tools from their supply chains, government contracting and technology attorneys said, even though the Trump administration's ban on their use may fail in court. The expected exodus from Anthropic was a sign of how quickly firms adjust to the Trump administration's preferences, as they seek to win pieces of its trillion-dollar annual budget, government attorneys said. Last Friday, capping off a heated weeks-long dispute with Anthropic over technology guardrails on Claude tools used by the military, President Donald Trump announced a federal agency-wide ban on the company with a six-month phase out period. Defense Secretary Pete Hegseth went further, promising to designate Anthropic as a supply chain risk to national security and posting: "Effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity" with the company. Anthropic said it would challenge the ban in court. The move raised immediate legal questions, since none of the authorities that the Trump administration could use to ban Anthropic allow it to also bar its general use by defense contractors, according to lawyers who specialize in technology and contracting laws. But the shaky legal basis for the prohibition won't stop companies that depend on the Pentagon from complying with it, the attorneys said, as Lockheed Martin has pledged to do. 
"We will follow the president's and the Department of War's direction," Lockheed Martin said in a statement, referring to the Department of Defense when asked about its Anthropic use following the moves by the Trump administration. "We expect minimal impacts," the company said, adding that it doesn't depend on any single AI vendor "for any portion of our work." With huge government contracts at stake, defense contractors would be quick to comply with the Pentagon's ban, lawyers said. "Most companies that do significant business with the government are hyper-aware of what the U.S. government wants and they're likely already taking steps to cleanse their supply chains of Anthropic," said Franklin Turner, an attorney who specializes in government contracts. "Regardless of the legal justification, I think the threat is the point ... it has already done harm, significant harm to the company," he added, referring to Anthropic. When asked whether they would comply with Trump's order on Anthropic, General Dynamics, Raytheon parent RTX, and L3Harris declined to comment. The Defense Department did not immediately respond to a request for comment. Anthropic declined to comment but referred Reuters to its Friday statement, in which it asserted that the Pentagon does not have the statutory authority to bar its contractors from using Claude. Quick to follow administration bans Defense contractors have complied in the past year with Trump's other directives regarding their agreements with the government, according to the news outlet Breaking Defense. According to the site, they speedily removed references to Diversity, Equity and Inclusion initiatives last year after President Trump signed an executive order mandating all agencies include language in contracts and grant awards requiring any winner to "certify that it does not operate any programs promoting DEI that violate any applicable federal anti-discrimination laws." 
Under the authority that the Defense Department is most likely to use, known as the DOD Supply Chain Risk Authority, the agency could bar would-be contractors from using Anthropic in their work for the government, government contracting attorneys said. However, it would not have the power to ban them from using it in their business entirely. Jason Workmaster, a contract lawyer at Miller Chevalier, described the decision to bar Pentagon contractors from using Anthropic as a "highly aggressive position." "If and when challenged, there would be a high likelihood that DOD would be found not to have the authority to do this, unless there are facts that we do not know about," he said. It is not even clear if the US military has the authority to designate Anthropic as a supply chain risk to bar its own use of the technology. The Supply Chain Risk Authority has specific requirements for what constitutes a supply chain risk, such as the threat that an adversary may sabotage, introduce unwanted capabilities, or otherwise "subvert" the technology in order to "surveil, deny or disrupt" its use. Meanwhile the Federal Acquisition Supply Chain Security Act (FASCSA), which creates a similar authority, requires the agency to follow several steps prior to a ban, such as giving the business the opportunity to respond and notifying Congress, among others. The US government so far hasn't shown publicly that it satisfied the requirements, said Alan Rozenshtein, a University of Minnesota law professor who specializes in technology regulation. "Capitalism and free markets rely on the rule of law," he said. "This is the opposite of that." The Trump administration used FASCSA last year to bar intelligence agencies from buying products from Acronis AG, a Swiss cybersecurity and data protection company.
[74]
AI Firm Anthropic Claims Pentagon Ban Was 'Ideological'
You can access the official complaint made by Anthropic here: Complaint PDF US Pentagon officials admitted on record that the supply chain designation against AI company Anthropic is "ideological" with "no evidence of supply-chain risk," according to quotes cited in Anthropic's March 9 lawsuit against the US Department of War and more than 15 federal agencies. One official told Defense One the goal was to "make sure they pay a price" for refusing the Pentagon's demands to drop AI safety restrictions. The Pentagon has not publicly detailed the specific security concerns underlying the designation. Why this matters: The lawsuit documents how a government can announce punitive action via social media before completing required legal procedures. The approach could be replicated by other governments against companies refusing their demands, raising questions about procedural safeguards in regulatory enforcement. What security approvals does Anthropic have? Anthropic maintains active US government security approvals that contradict the supply chain risk designation: The complaint argues that these clearances involve security vetting that should identify major supply chain risks. The government has not explained how both the active clearances and the supply chain designation can coexist. What procedural violations does the complaint allege? US Defense Secretary Pete Hegseth tweeted the designation February 27 but sent the formal letter on March 4. The complaint alleges he violated 10 U.S.C. § 3252 (the law governing supply chain designations) by announcing publicly before completing: Does the designation cover future AI models? Yes. The formal letter Anthropic received states the designation applies to all Anthropic products, including any that "become available for procurement" in the future. This means the designation covers AI models that Anthropic hasn't even built yet. How did the Pentagon contradict itself? 
The complaint points to contradictions in the government's actions. What does the vague language mean for contractors? The government's order bans defense contractors from conducting "any commercial activity" with Anthropic, but the order doesn't define several critical terms. What's at stake legally? The complaint argues President Trump's order punishes one named company without trial, which the US Constitution forbids. However, courts often defer to the executive branch on national security determinations. The case will test whether supply chain designations can be used for policy disagreements rather than security concerns, and whether executive orders can function as corporate punishment tools. Why are competitors backing Anthropic? More than 30 employees from OpenAI and Google DeepMind, including chief scientist Jeff Dean, filed an amicus brief (a legal filing from non-parties with expertise) arguing the designation "chills professional debate" about AI risks and "undermines American innovation." The employees signed in a personal capacity and do not represent their companies' views. OpenAI signed a Pentagon deal hours after Trump's order, though some of its employees still backed Anthropic's lawsuit. What did other agencies do? Multiple federal agencies acted on Trump's directive.
[75]
Lockheed Pledges To Ax Anthropic's Claude AI After Trump Ban: Company Says Will Follow President's 'Direction' - Lockheed Martin (NYSE:LMT)
Lockheed Martin Corp. (NYSE:LMT) has pledged to remove Anthropic's Claude AI tools from its operations after President Donald Trump imposed a federal agency-wide ban on the company, with a six-month phase-out period. According to a Reuters report, Lockheed said, "We will follow the president's and the Department of Defense's direction," adding it expects "minimal impacts" and does not depend on any single AI vendor "for any portion of our work." Legal Authority Questioned Attorney Franklin Turner, who specializes in government contracts, told Reuters that firms are "already taking steps to cleanse their supply chains," adding the threat has "already done significant harm to the company." According to the report, Attorney Jason Workmaster, whose practice focuses on government contracts-related litigation, called the Pentagon's move "highly aggressive," saying the Department of Defense would likely be found lacking authority if challenged. The attorney said the Pentagon's main tool, the DOD Supply Chain Risk Authority, limits use to government contracts and does not completely restrict commercial activity. Anthropic Vows Legal Fight as Claude Ban Triggers App Store Surge Anthropic earlier said it would challenge the ban in court. Dario Amodei, CEO of the California-based AI safety and research company, earlier said the company refused unrestricted Pentagon use over concerns about domestic surveillance and autonomous weapons. University of Minnesota law professor Alan Rozenshtein told Reuters that capitalism and free markets depend on the rule of law, and that what's being proposed is the opposite of that. Lockheed Market Standing Lockheed Martin has a market capitalization of $153.65 billion, with a 52-week high of $692.00 and a 52-week low of $410.11. The Relative Strength Index (RSI) of LMT stands at 64.24.
With a strong Momentum in the 85th percentile, Benzinga's Edge Stock Rankings indicate that LMT has a positive price trend across all time frames.
[76]
Anthropic Sues US Defence Department Over 'Supply-Chain Risk' Label
Anthropic Takes Pentagon to Court After Being Labelled AI Supply-Chain Risk Anthropic has filed lawsuits against the US Department of Defence (DoD) after the agency labelled the firm a national security 'supply-chain risk.' The company claims this allegation followed its refusal to grant the Pentagon unrestricted access to its AI systems. Anthropic, which develops the Claude AI chatbot, filed two complaints on Monday, one in a federal court in California and another in Washington. The lawsuits challenge the designation and seek to overturn it.
[77]
Anthropic sues Trump administration seeking to undo 'supply chain risk' designation
Anthropic is suing the Trump administration, asking federal courts to reverse the Pentagon's decision designating the artificial intelligence company a "supply chain risk" over its refusal to allow unrestricted military use of its technology. Anthropic filed two separate lawsuits Monday, one in California federal court and another in the federal appeals court in Washington, D.C., each challenging different aspects of the Pentagon's actions against the company. The Pentagon last week formally designated the San Francisco tech company a supply chain risk after an unusually public dispute over how its AI chatbot Claude could be used in warfare. "These actions are unprecedented and unlawful," Anthropic's lawsuit says. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation." The Defense Department declined to comment Monday, citing a policy of not commenting on matters in litigation. Anthropic said it sought to restrict its technology from two uses: mass surveillance of Americans and fully autonomous weapons. Defense Secretary Pete Hegseth and other officials publicly insisted the company must accept "all lawful uses" of Claude and threatened punishment if Anthropic did not comply. Designating the company a supply chain risk cuts off Anthropic's defense work using an authority that was designed to prevent foreign adversaries from harming national security systems. It was the first time the federal government is known to have used the designation against a U.S. company.
President Donald Trump also said he would order federal agencies to stop using Claude, though he gave the Pentagon six months to phase out a product that's deeply embedded in classified military systems, including those used in the Iran war. Anthropic's lawsuit also names other federal agencies, including the departments of Treasury and State, after officials ordered employees to stop using Anthropic's services. Even as it fights the Pentagon's actions, Anthropic has sought to convince businesses and other government agencies that the Trump administration's penalty is a narrow one that only affects military contractors when they are using Claude in work for the Department of Defense. Making that distinction clear is crucial for the privately held Anthropic because most of its projected US$14 billion in revenue this year comes from businesses and government agencies that are using Claude for computer coding and other tasks. More than 500 customers are paying Anthropic at least $1 million annually for Claude, according to a recent investment announcement that valued the company at $380 billion. Anthropic said in a statement Monday that "seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners."
[78]
Anthropic sues Trump admin for blacklisting after clash on using AI...
Anthropic on Monday sued the Trump administration for effectively blacklisting the AI firm after it sought to block the Pentagon from using its chatbot for mass surveillance and weaponry. The San Francisco-based tech firm accused War Secretary Pete Hegseth of designating Anthropic a supply-chain risk - making it the first US company to bear that label - as retaliation for trying to limit the Pentagon's use of its Claude chatbot. "The actions are unprecedented and unlawful," the company said in a complaint filed Monday in San Francisco federal court. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." The Pentagon declined to comment, saying it does not address ongoing litigation. Anthropic's lawsuit came days after its CEO Dario Amodei apologized for a leaked 1,600-word missive bashing the Trump administration - though he added that the company had "no choice" but to challenge the supply-chain risk label in court. The exec apologized for "the tone" of his fiery letter to staffers, which accused the Department of War of targeting Anthropic for not giving "dictator-style praise to Trump." "I also want to apologize directly for a post internal to the company that was leaked to the press yesterday," Amodei wrote in a note last Thursday. "Anthropic did not leak this post nor direct anyone else to do so -- it is not in our interest to escalate this situation." Amodei said his inflammatory comments came hours after Trump blasted Anthropic staff as "Leftwing nut jobs" and Hegseth announced his plans to label the company a supply-chain risk. "It was a difficult day for the company, and I apologize for the tone of the post," Amodei wrote. "It does not reflect my careful or considered views."
The Pentagon's supply-chain risk label -- previously used only for foreign firms that present national security threats, like Chinese tech firm Huawei Technologies -- is a "scarlet letter designation for Anthropic," Wedbush Securities analyst Dan Ives wrote in a Monday note. It will force defense contractors to certify that they do not use Anthropic's AI models in their work with the government. It's unclear if the business will face broader restrictions after Hegseth previously said Anthropic would be barred from "any commercial activity" with any company that works with the feds - including customers like Lockheed Martin, Amazon and Google. Anthropic, however, claimed the "vast majority" of its customers will not be impacted by the designation, according to Amodei's note last week. The company signed a $200 million contract with the Pentagon in July that made it the sole provider of AI models on the government's classified networks. But Hegseth blasted the firm for seeking exemptions during contract negotiations on the use of its models for mass surveillance of citizens and weaponry, insisting that the Pentagon should be able to use AI tools for "all lawful purposes." OpenAI then swooped in with a deal to provide AI services to the Pentagon. In his memo to staffers later that day, Amodei said Anthropic was being punished because he didn't "donate to Trump" - while "OpenAI/Greg have donated a lot," referring to OpenAI president Greg Brockman, the Information reported. Amodei - who donated to Democratic former Vice President Kamala Harris' failed presidential campaign - blasted OpenAI and the Pentagon for allegedly smearing his company's name. He said that "a lot of OpenAI and [Department of War] messaging just straight up lies about these issues or tries to confuse them," insisting that OpenAI's contract terms, for example, were never offered to Anthropic. 
Altman was "presenting himself as someone who wants to 'set the same contract for everyone in the industry,'" while "behind the scenes" working with the Department of War to replace Anthropic "the instant we are designated a supply chain risk," Amodei wrote. OpenAI's deal includes safeguards that are "maybe 20% real and 80% safety theater," he added. During a Morgan Stanley technology conference on Thursday, Altman pushed back on the criticism - and took a few jabs at Anthropic. "The government is supposed to be more powerful than private companies," he said, adding that it's "bad for society" if companies start abandoning their commitment to the democratic process because "some people don't like the person or people currently in charge." Altman acknowledged, however, that the timing of OpenAI's deal - which came just hours after talks with Anthropic fell apart - "looked opportunistic and sloppy."
[79]
Anthropic's case against the government: what the AI company says happened
March 9 (Reuters) - Anthropic sued the U.S. government on Monday, escalating a dispute the AI company frames as retaliation for refusing to remove safety limits on its Claude model. The Amazon-backed company said it was willing to work with the military. Just not on any terms. It has also filed a related case in the U.S. Court of Appeals for the D.C. Circuit challenging a separate legal authority the government invoked. The following account is based on allegations made by Anthropic in its lawsuit. WHAT ANTHROPIC SAYS THE DISPUTE IS ABOUT Anthropic said it spent years building Claude into the government's most widely deployed frontier AI model, including on classified military networks, developing a specialized "Claude Gov" version and loosening many of its standard restrictions to accommodate national security work. The conflict began in the fall of 2025 during negotiations over the Pentagon's GenAI.mil platform, when the Department of Defense demanded Anthropic abandon its usage policy entirely and allow Claude to be used for, in the government's words, "all lawful uses". Anthropic said it largely agreed, except on two points it considered non-negotiable: it would not allow Claude to be used for lethal autonomous warfare without human oversight or for mass surveillance of Americans. The company says Claude has not been tested for those uses and cannot perform them safely. It said it also offered to help transition the work to another provider if no agreement could be reached. Pentagon officials have offered a different account of how the dispute began. The department's chief technology officer said publicly that tensions escalated after a U.S. raid in Venezuela, when an Anthropic executive called a counterpart at Palantir to ask whether Claude had been used in the operation. That account does not appear in Anthropic's complaint. 
FROM ULTIMATUM TO ALL-OUT BAN Secretary of Defense Pete Hegseth met with Anthropic CEO Dario Amodei on February 24, presenting an ultimatum: comply within four days or face one of two punishments - compulsion under the Defense Production Act, or expulsion from the defense supply chain as a 'national security risk'. Amodei rejected the demand publicly on February 26. The next day, before a 5:01 p.m. Eastern deadline had expired, President Donald Trump posted a directive on Truth Social ordering every federal agency to immediately cease all use of Anthropic's technology. In the social media post, the president characterized Anthropic as a "RADICAL LEFT, WOKE COMPANY". Hours later, Hegseth announced on X that Anthropic was a "Supply-Chain Risk to National Security" and that no military contractor or supplier could do commercial business with the company. Agencies fell in line quickly. The General Services Administration terminated Anthropic's government-wide contract. Treasury, State, and the Federal Housing Finance Agency publicly cut ties. The Anthropic complaint alleges the Pentagon launched a major air attack on Iran using Anthropic's tools hours after the ban. White House spokeswoman Liz Huston said the administration would not allow a company to "jeopardize our national security by dictating how the greatest and most powerful military in the world operates," adding that U.S. forces would "never be held hostage by the ideological whims of Big Tech leaders" and would follow the Constitution, "not any woke AI company's terms of service". WHY ANTHROPIC DECIDED TO SUE Anthropic argues the supply chain designation has no factual basis. The company points to its FedRAMP authorization, active security clearances, and years of government praise, including from Hegseth, who called Claude's capabilities "exquisite" at the February 24 meeting. 
Two senior Pentagon officials subsequently told reporters there was "no evidence of supply-chain risk" and that the designation was "ideologically driven". Anthropic raises five legal claims, arguing the actions violated the Administrative Procedure Act, the First Amendment, the Fifth Amendment, the president's statutory authority, and the APA's prohibition on unauthorized agency sanctions.
[80]
Anthropic's legal claims against Trump's blanket government ban on AI startup
March 9 (Reuters) - Anthropic filed a lawsuit on Monday to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence lab's high-stakes battle with the U.S. military over usage restrictions on its technology. The AI startup's lawsuit against the U.S. government, President Donald Trump and Defense Secretary Pete Hegseth, among others, makes the following claims: FIRST AMENDMENT VIOLATION The startup claimed that the Pentagon retaliated against Anthropic for protected activities in violation of the First Amendment, which ensures free speech rights. Anthropic said in its lawsuit that the Constitution confers upon it "the right to express its views -- both publicly and to the government -- about the limitations of its own AI services and important issues of AI safety." The lawsuit claims that the U.S. government's blacklisting of the company constitutes retaliation against Anthropic's expressive activities, including protected speech, viewpoints and petitioning of the government. PRESIDENTIAL ACTION BEYOND LEGAL AUTHORITY Anthropic claimed that President Donald Trump directing the U.S. government to stop work with Anthropic -- announced in a post on his social media platform Truth Social late last month -- was "ultra vires," or went beyond his legal power and authority. FIFTH AMENDMENT VIOLATIONS Anthropic alleges the U.S. government violated its Fifth Amendment rights to due process by effectively blacklisting the company without following required legal protocols. According to the lawsuit, the government bypassed mandatory legal procedures by terminating contracts and blocking future work without providing prior notice or a meaningful opportunity to respond. 
ADMINISTRATIVE PROCEDURE ACT VIOLATION Anthropic alleged that the DOD designating the company as a supply-chain risk, and prohibiting the department's contractors, suppliers and partners from conducting any commercial activity with it, violates the Administrative Procedure Act. The APA sets out procedures agencies must follow when making decisions and permits courts to override agency actions that are arbitrary, an abuse of discretion or otherwise unlawful. Anthropic said Defense Secretary Pete Hegseth's decision to deem the company a supply-chain risk overstepped his authority, did not follow proper legal procedures and lacked supporting evidence. (Reporting by Arsheeya Bajwa and Anhata Rooprai in Bengaluru; Editing by Alan Barona)
[81]
Anthropic sues to block Pentagon blacklisting over AI use restrictions
NEW YORK, March 9 (Reuters) - Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist, escalating the artificial intelligence lab's high-stakes battle with the U.S. military over usage restrictions on its technology. The Pentagon on Thursday slapped a formal supply-chain risk designation on Anthropic, limiting use of a technology that a source said was being used for military operations in Iran. Anthropic said in its lawsuit that the designation was unlawful and violated its free speech and due process rights. The filing in federal court in California asked a judge to undo the designation and block federal agencies from enforcing it. "These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said. Defense Secretary Pete Hegseth designated Anthropic a national security supply-chain risk last week after the startup refused to remove guardrails against using its AI for autonomous weapons or domestic surveillance. The designation poses a big threat to Anthropic's business with the government, and the outcome could shape how other AI companies negotiate restrictions on military use of their technology, though the company's CEO Dario Amodei clarified on Thursday that the designation had "a narrow scope" and businesses could still use its tools in projects unrelated to the Pentagon. President Donald Trump has also directed the government to stop working with Anthropic, whose financial backers include Alphabet's Google and Amazon.com. Trump and Hegseth said there would be a six-month phase-out. Reuters has reported that Anthropic's investors were racing to contain the damage caused by the fallout with the Pentagon. 
Trump and Hegseth's actions on February 27 came after months of talks with Anthropic over whether the company's policies could constrain military action and shortly after Amodei met with Hegseth in hopes of reaching a deal. The Pentagon said U.S. law, not a private company, would determine how to defend the country and insisted on having full flexibility in using AI for "any lawful use," asserting that Anthropic's restrictions could endanger American lives. Anthropic said even the best AI models were not reliable enough for fully autonomous weapons and that using them for that purpose would be dangerous. The company also drew a red line on domestic surveillance of Americans, calling that a violation of fundamental rights. After Hegseth's announcement, Anthropic said in a statement that the designation would be legally unsound and set a dangerous precedent for companies that negotiate with the government. The company said it would not be swayed by "intimidation or punishment," and on Thursday Amodei reiterated that Anthropic would challenge the designation in court. He also apologized for an internal memo published on Wednesday by tech news site The Information. In the memo, which was written last Friday, Amodei said Pentagon officials did not like the company in part because "we haven't given dictator-style praise to Trump." The Defense Department signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. Microsoft-backed OpenAI announced a deal to use its technology in the Defense Department network shortly after Hegseth moved to blacklist Anthropic. CEO Sam Altman said the Pentagon shared OpenAI's principles of ensuring human oversight of weapon systems and opposing mass U.S. surveillance. (Reporting by Jack Queen in New York; Editing by Noeleen Walder, Lisa Shumaker, Daniel Wallis and Nick Zieminski)
[82]
Amazon keeps access to Anthropic's Claude models on AWS, excluding military projects
Amazon said the Claude artificial intelligence models developed by Anthropic will remain available on its AWS platform for civilian use, except for projects tied to the US Department of Defense. The move follows the Pentagon's designation of Anthropic as a "supply chain risk". AWS said customers can continue using Claude for any workloads not associated with the Department of Defense, while defense-related projects will have to be moved to other solutions available on the platform. Amazon thus joins Microsoft and Google, which have also confirmed continued access to Anthropic technologies for non-military uses. Amazon is one of the start-up's leading investors, with $8bn committed since 2023. Anthropic also relies on AWS infrastructure to train its models and plans to use 500,000 Trainium 2 chips as part of its "Project Rainier" data-center initiative, valued at $11bn. Despite the Pentagon's decision, the Claude models remain available on the Bedrock platform for businesses and civilian organizations.
[83]
Google to keep Anthropic technology access available outside military projects
Google said it would continue to offer Anthropic's artificial intelligence technologies to its customers for uses not linked to defence. The decision follows the US Department of Defense classifying the start-up as a supply-chain risk and after Microsoft took a similar stance. Anthropic's Claude models will remain available on Google Cloud via the Vertex AI platform. Google is one of the start-up's main investors, with total commitments of $3bn after an additional $1bn investment announced in January 2025. Anthropic also uses Google Cloud infrastructure to train its models and has access to up to 1 million tensor processing units (TPUs). The controversy erupted after Anthropic refused to accept certain terms of use imposed by the Pentagon. The Trump administration then ordered federal agencies to stop using its technologies and began a phased withdrawal over six months. Chief executive Dario Amodei said the company will challenge the decision in court, while several defence-sector companies are now turning to alternatives such as OpenAI's.
[84]
Defense contractors, like Lockheed, seen removing Anthropic's AI after Trump ban
WASHINGTON, March 3 (Reuters) - U.S. defense contractors, like Lockheed Martin, are expected to follow the Pentagon's order to purge Anthropic's prized AI tools from their supply chains, government contracting and technology attorneys said, even though the Trump administration's ban on their use may fail in court. The expected exodus from Anthropic was a sign of how quickly firms adjust to the Trump administration's preferences, as they seek to win pieces of its trillion-dollar annual budget, government attorneys said. Last Friday, capping off a heated weeks-long dispute with Anthropic over technology guardrails on Claude tools used by the military, President Donald Trump announced a federal agency-wide ban on the company with a six-month phase out period. Defense Secretary Pete Hegseth went further, promising to designate Anthropic as a supply chain risk to national security and posting: "Effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity" with the company. Anthropic said it would challenge the ban in court. The move raised immediate legal questions, since none of the authorities that the Trump administration could use to ban Anthropic allow it to also bar its general use by defense contractors, according to lawyers who specialize in technology and contracting laws. But the shaky legal basis for the prohibition won't stop companies that depend on the Pentagon from complying with it, the attorneys said, as Lockheed Martin has pledged to do. "We will follow the president's and the Department of War's direction," Lockheed Martin said in a statement, referring to the Department of Defense when asked about its Anthropic use following the moves by the Trump administration. "We expect minimal impacts," the company said, adding that it doesn't depend on any single AI vendor "for any portion of our work." 
With huge government contracts at stake, defense contractors would be quick to comply with the Pentagon's ban, lawyers said. "Most companies that do significant business with the government are hyper-aware of what the U.S. government wants and they're likely already taking steps to cleanse their supply chains of Anthropic," said Franklin Turner, an attorney who specializes in government contracts. "Regardless of the legal justification, I think the threat is the point ... it has already done harm, significant harm to the company," he added, referring to Anthropic. When asked whether they would comply with Trump's order on Anthropic, General Dynamics, Raytheon parent RTX, and L3Harris declined to comment. The Defense Department did not immediately respond to a request for comment. Anthropic declined to comment but referred Reuters to its Friday statement, in which it asserted that the Pentagon does not have the statutory authority to bar its contractors from using Claude. QUICK TO FOLLOW ADMINISTRATION BANS Defense contractors have complied in the past year with Trump's other directives regarding their agreements with the government, according to the news outlet Breaking Defense. According to the site, they speedily removed references to Diversity, Equity and Inclusion initiatives last year after President Trump signed an executive order mandating all agencies include language in contracts and grant awards requiring any winner to "certify that it does not operate any programs promoting DEI that violate any applicable federal anti-discrimination laws." Under the authority that the Defense Department is most likely to use, known as the DOD Supply Chain Risk Authority, the agency could bar would-be contractors from using Anthropic in their work for the government, government contracting attorneys said. However, it would not have the power to ban them from using it in their business entirely. 
Jason Workmaster, a contract lawyer at Miller Chevalier, described the decision to bar Pentagon contractors from using Anthropic as a "highly aggressive position." "If and when challenged, there would be a high likelihood that DOD would be found not to have the authority to do this, unless there are facts that we do not know about," he said. It is not even clear if the U.S. military has the authority to designate Anthropic as a supply chain risk to bar its own use of the technology. The Supply Chain Risk Authority has specific requirements for what constitutes a supply chain risk, such as the threat that an adversary may sabotage, introduce unwanted capabilities, or otherwise "subvert" the technology in order to "surveil, deny or disrupt" its use. Meanwhile the Federal Acquisition Supply Chain Security Act (FASCSA), which creates a similar authority, requires the agency to follow several steps prior to a ban, such as giving the business the opportunity to respond and notifying Congress, among others. The U.S. government so far hasn't shown publicly that it satisfied the requirements, said Alan Rozenshtein, a University of Minnesota law professor who specializes in technology regulation. "Capitalism and free markets rely on the rule of law," he said. "This is the opposite of that." The Trump administration used FASCSA last year to bar intelligence agencies from buying products from Acronis AG, a Swiss cybersecurity and data protection company. (Reporting by Mike Stone, Alexandra Alper and Courtney Rozen in Washington; Editing by Chris Sanders and Sonali Paul)
[85]
Anthropic sues US defence department, gets support from OpenAI and Google employees
Anthropic's lawsuit has received support from employees at other AI companies. Claude-maker Anthropic has filed lawsuits against the US Department of Defence (DOD) after the agency recently labelled the company a supply-chain risk. The AI company filed two complaints on Monday, one in California and another in Washington. This step comes after weeks of disagreement between Anthropic and the DOD over the military's request for full access to Anthropic's AI systems. Keep reading for all the details. The conflict began when the Pentagon asked for broader access to Anthropic's AI systems. Defence Secretary Pete Hegseth argued that the military should be able to use AI for 'any lawful purpose' and that private companies should not limit how the government uses such technology. However, Anthropic had two clear red lines. The company said it did not want its technology used for mass surveillance of Americans and believes current AI systems are not ready to power fully autonomous weapons without humans making targeting and firing decisions. Soon after the disagreement, the Pentagon signed an agreement with OpenAI and labelled Anthropic a supply-chain risk. The label requires companies working with the Defense Department to confirm that they do not use Anthropic's AI models. In its lawsuit filed in San Francisco federal court, Anthropic described the government's action as 'unprecedented and unlawful.' 'The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,' the lawsuit added. The AI company also claims the designation was issued without following the required legal procedures. Anthropic has also filed a separate complaint in the D.C. Circuit Court of Appeals. Under federal procurement law, companies are allowed to challenge supply-chain risk labels in court. 
Through this petition, Anthropic is asking the court to review the Defence Department's decision and cancel the designation that labelled the company a national security supply-chain risk. Anthropic's lawsuit has received support from employees at other AI companies. More than 30 employees from OpenAI and Google DeepMind filed a statement backing the company in court, reports TechCrunch. 'The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,' the filing states. The employees also argued that if the Pentagon was unhappy with the terms of its contract with Anthropic, it could have simply canceled the agreement and worked with another AI provider. 'If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond,' the brief reads. 'And it will chill open deliberation in our field about the risks and benefits of today's AI systems.'
Anthropic filed two federal lawsuits against the Trump administration after the Pentagon designated it a supply chain risk, a label typically reserved for foreign adversaries. The AI company refused to allow its Claude AI models to be used for autonomous lethal warfare and mass surveillance of Americans, prompting the government to blacklist it and order all federal agencies to cease using its technology.
Anthropic filed two federal lawsuits on Monday challenging the Trump administration's decision to designate it a supply chain risk after the AI company refused to allow its Claude AI models to be used for autonomous weapons and mass surveillance of Americans [1]. The lawsuit, filed in US District Court for the Northern District of California, argues that the government violated Anthropic's First Amendment rights by retaliating against the company for expressing its views on AI safety [1]. A second lawsuit was filed in the US Court of Appeals for the District of Columbia Circuit [1].

Source: Market Screener
The dispute began when Anthropic held firm to two red lines during contract negotiations with the Department of Defense: it did not want its technology used for mass surveillance of Americans, and it did not believe its AI models were ready to power fully autonomous lethal weapons with no humans making targeting and firing decisions [4]. Defense Secretary Pete Hegseth argued that the Pentagon should have unrestricted access to AI systems for "any lawful purpose" [4]. When Anthropic refused to budge, President Trump directed every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology," and hours later, Hegseth designated the company a supply chain risk to national security [1].

The White House responded to the lawsuit with sharp rhetoric, calling Anthropic a "radical left, woke company." A White House spokesperson stated: "President Trump will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates" [1]. The statement added that "our military will obey the United States Constitution -- not any woke AI company's terms of service" [1].

Hegseth's directive went beyond the legal requirements of the supply chain risk designation, which by law prevents only a narrow set of companies doing business with the Pentagon from incorporating Anthropic into their systems. Instead, he posted on X that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" [5]. At Tuesday's court hearing, the Trump administration refused to commit to not taking further action against the company, with a Justice Department attorney stating he was "not prepared to offer any commitments on that issue" [3]. President Trump is currently finalizing an executive order that would formally ban usage of Anthropic tools across the government [3].

Source: New York Post
More than 30 OpenAI and Google DeepMind employees filed an amicus brief supporting Anthropic's lawsuit, with signatories including Google DeepMind chief scientist Jeff Dean [2]. The brief stated that "the government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry" [2]. The filing also affirmed that Anthropic's stated concerns about autonomous weapons and mass surveillance are legitimate issues warranting strong guardrails [2].

The Google and OpenAI employees argued that without public law to govern the ethical use of AI, the contractual and technical restrictions developers impose on their systems serve as critical safeguards against catastrophic misuse [2]. They wrote that "current AI models are not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone, and the risks of their deployment for that purpose require some kind of response and guardrails" [1].
Anthropic's chief financial officer Krishna Rao revealed that hundreds of millions of dollars in expected revenue this year from work tied to the Pentagon is already at risk, and that the company could ultimately lose billions of dollars in sales if the government pressures a broad range of companies to stop doing business with the AI startup [5]. Rao disclosed that Anthropic's all-time sales since commercializing its technology in 2023 exceed $5 billion, though the company has spent over $10 billion to train and deploy its models and remains deeply unprofitable [5].

Anthropic chief commercial officer Paul Smith provided concrete examples of the blacklisting's impact: a financial services customer paused negotiations over a $15 million deal; two leading financial services companies refused to close deals valued together at $80 million unless they gained the right to unilaterally cancel their contracts for any reason; and a grocery store chain cancelled a sales meeting -- all citing the supply chain risk designation [5]. Smith wrote that "all have taken steps that reflect deep distrust and a growing fear of associating with Anthropic" [5].

Several advocacy groups filed a brief supporting Anthropic, including the Foundation for Individual Rights and Expression, the Electronic Frontier Foundation, the Cato Institute, the Chamber of Progress, and the First Amendment Lawyers Association [1]. They argued that Pentagon retaliation against Anthropic will "silence future speech from those who fear the government attempting to harm their business or extinguish it entirely" [1].

Source: Analytics Insight
Legal experts believe the administration's actions against Anthropic continue a pattern of abusing the law to punish perceived political enemies. Yale Law School professor Harold Hongju Koh, who worked in the Barack Obama administration, stated: "If this is a one-off, you might give the president some deference. But now, it's just unmistakable that this is just the latest in a chain of events related to a punitive presidency" [3]. Georgetown University Law Center professor David Super noted that the provisions the Department of Defense used to sanction Anthropic were designed to protect the country from potential sabotage by its enemies [3].

Anthropic is seeking a preliminary injunction to suspend the sanctions and prevent further irreparable harm to its business. A hearing is scheduled for March 24 in San Francisco, though the company had requested an even earlier date [3]. The case will test whether the government can weaponize national security designations against domestic companies that refuse government contracts based on free speech concerns about AI safety.

Summarized by Navi