10 Sources
[1]
Anthropic Supply-Chain Risk Label Should Stay In Place, Appeals Court Says
Anthropic "has not satisfied the stringent requirements" to temporarily lose the supply-chain risk designation imposed by the Pentagon, a US appeals court in Washington, DC ruled on Wednesday. The decision is at odds with one issued last month by a lower court judge in San Francisco, and it wasn't immediately clear how the conflicting preliminary judgements would be resolved. The government sanctioned Anthropic under two different supply-chain laws with similar effects, and the San Francisco and Washington, DC courts are each ruling on only one of them. Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security. "Granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict," the three-judge appellate panel wrote on Wednesday in what they described as an unprecedented case. The panel said that while Anthropic may suffer financial harm from the ongoing designation, they did not want to risk "a substantial judicial imposition on military operations" or "lightly override" the military's judgements on national security. The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company's proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply-chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government. Anthropic spokesperson Danielle Cohen says the company is grateful the Washington, DC court "recognized these issues need to be resolved quickly" and remains confident "the courts will ultimately agree that these supply chain designations were unlawful." 
The Department of Defense did not immediately respond to a request for comment. The cases are testing how much power the executive branch has over the conduct of tech companies. The battle between Anthropic and the Trump administration is also playing out as the Pentagon deploys AI in its war against Iran. The company has argued it is being illegally punished for insisting that its AI tool Claude lacks the accuracy needed for certain sensitive operations such as carrying out deadly drone strikes without human supervision. Several experts in government contracting and corporate rights have told WIRED that Anthropic has a strong case against the government, but the courts sometimes refuse to overrule the White House on matters related to national security. Some AI researchers have said the Pentagon's actions against Anthropic "chills professional debate" about the performance of AI systems. Anthropic has claimed in court that it lost business because of the designation, which government lawyers contend bars the Pentagon and its contractors from using the company's Claude AI as part of military projects. And as long as Trump remains in power, Anthropic may not be able to regain the significant foothold it held in the federal government. Final decisions in the company's two lawsuits could be months away. The court in Washington, DC is scheduled to hear oral arguments on May 19. The parties have revealed minimal details so far about how exactly the Department of Defense has used Claude or how much progress it has made in transitioning staff to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Donald Trump calls itself the Department of War, has said it has taken steps to ensure Anthropic can't purposely try to sabotage its AI tools during the transition.
[2]
Anthropic Fails for Now to Halt US Label as a Supply-Chain Risk
A federal appeals court declined, for now, Anthropic PBC's request to pause a declaration by the Pentagon that the artificial intelligence company poses a risk to the US supply chain, even as plans for a broader government ban on its technology remain blocked by a California judge. In its decision Wednesday, the three-judge panel in Washington, DC, said it would work quickly to review the case because Anthropic is likely to "suffer some irreparable harm" as the legal action plays out. The court set oral arguments in the case for May 19. The ruling leaves in place a US Defense Department declaration that was the legal basis for the Trump administration's plan to sever all ties to the company, which claims it could face billions of dollars in lost revenue. The government announced the ban after a dispute with Anthropic over how its AI technology would be used by the military. "On one side is a relatively contained risk of financial harm to a single private company," the court wrote in explaining why it rejected the request for an immediate pause. "On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." Anthropic welcomed an expedited schedule for its lawsuit. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," the company said in a statement. The Claude chatbot maker has been fighting back on two fronts. It asked the appeals court on March 9 to review the Pentagon's declaration, calling it "arbitrary, capricious and an abuse of discretion." The same day, Anthropic sued in San Francisco federal court to challenge the ban, which could mean billions in lost revenue. On March 27, a judge in that case blocked the government from following through while the lawsuit proceeds. The Trump administration is appealing. 
The Defense Department's declaration that Anthropic posed a supply-chain risk escalated a high-stakes dispute. The company demanded assurances that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment. The government rejected any restrictions, citing national security. In the San Francisco lawsuit, Anthropic claims it is being shut out for disagreeing with the administration. The company said the legal principles at stake affect every federal contractor whose views the government dislikes. The case is Anthropic v. US Department of War, 26-01049, US Court of Appeals, District of Columbia Circuit (Washington).
[3]
US court declines to block Pentagon's Anthropic blacklisting for now
NEW YORK, April 8 (Reuters) - A Washington, D.C., federal appeals court on Wednesday declined to block the Pentagon's national security blacklisting of Anthropic for now, a win for the Trump administration that comes after another appeals court came to the opposite conclusion in a separate legal challenge by Anthropic. Anthropic, developer of the popular Claude AI assistant, alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated the company a national security supply-chain risk, a label that blocks Anthropic from Pentagon contracts and could trigger a government-wide blacklisting. Anthropic executives have said the designation could cost the company billions of dollars in lost business and reputational harm. A panel of judges of the U.S. Court of Appeals for the District of Columbia Circuit denied Anthropic's bid to pause the designation while the case plays out. The decision is not a final ruling. The lawsuit is one of two Anthropic filed over Hegseth's unprecedented move, which came after Anthropic refused to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons due to safety and ethics concerns. Hegseth issued orders designating Anthropic under two different laws, and Anthropic is challenging each of them separately. A California federal judge blocked one of the orders on March 26, saying the Pentagon appeared to have unlawfully retaliated against Anthropic for its views on AI safety. Anthropic's designation was the first time a U.S. company has been publicly designated a supply-chain risk under obscure government-procurement statutes aimed at protecting military systems from enemy sabotage or infiltration. In its lawsuits, Anthropic says the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. 
The company said it was not given a chance to dispute its designation, in violation of its Fifth Amendment right to due process. The lawsuits say the designations were unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department says that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said its decision stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. The D.C. case concerns a law that could lead to the blacklist widening to the broader civilian government following an interagency review process. The California case deals with a narrower statute that excludes Anthropic from Pentagon contracts related to military information systems. (This story has been refiled to correct the dateline to April 8) Reporting by Jack Queen in New York; Editing by Noeleen Walder and Cynthia Osterman
[4]
Anthropic loses appeals court bid to block Pentagon blacklisting temporarily
A federal appeals court in Washington, D.C., on Wednesday denied Anthropic's request for a stay in its lawsuit against the Department of Defense. The artificial intelligence startup sought the action to pause its blacklisting by the Pentagon and prevent further monetary and reputational harm as the case unfolds. The ruling comes after a judge in San Francisco federal court late last month, in a separate case, granted Anthropic a preliminary injunction that bars the Trump administration from enforcing a ban on the use of Claude. The DOD declared Anthropic a supply chain risk in early March, meaning that use of the company's technology purportedly threatens U.S. national security. The label requires defense contractors to certify that they don't use Anthropic's Claude artificial intelligence models in their work with the military. "In our view, the equitable balance here cuts in favor of the government," the appeals court said in its decision. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." Anthropic had asked the appeals court to review the Pentagon's determination and argued that it's a form of retaliation that is unconstitutional, arbitrary, capricious and not in accord with procedures required by law, according to a filing. The DOD relied on two distinct designations - 10 U.S.C. § 3252 and 41 U.S.C. § 4713 - to justify the supply chain risk action, and they have to be challenged in two separate courts. The 41 U.S.C. § 4713 designation falls under the purview of the appeals court in Washington, D.C. Anthropic has filed a separate, more wide-ranging lawsuit in San Francisco federal court, which oversees the 10 U.S.C. § 3252 designation. 
A judge on Thursday granted Anthropic a preliminary injunction in that case, writing that the government's position is "classic illegal First Amendment retaliation." -- CNBC's Dan Mangan contributed to this report.
[5]
Appeals court rebuffs Anthropic in latest round of its AI battle with the Trump administration
WASHINGTON (AP) -- A federal appeals court on Wednesday refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic in a decision that differed from the conclusions reached in another judge's ruling on the same issues. The U.S. Court of Appeals in Washington, D.C., rejected Anthropic's request for an order that would shield the San Francisco company from the fallout stemming from a dispute over how the Pentagon could deploy its Claude chatbot in fully autonomous weapons and potential surveillance of Americans while the panel is still collecting evidence about the case. But the setback in Washington came after Anthropic already had prevailed in a separate case focused on the same issues in San Francisco federal court. In that case, a judge forced President Donald Trump's administration to remove a label tainting the company as a national security risk. Anthropic filed the two separate lawsuits in San Francisco and the Washington appeals court last month, asserting the Trump administration was engaging in an "unlawful campaign of retaliation" because of its attempt to impose limits on how its AI technology can be deployed. The Trump administration blasted Anthropic as a liberal-leaning company trying to dictate U.S. military policy. In the San Francisco case, U.S. District Judge Rita Lin ruled that the Trump administration had overstepped its bounds by labeling Anthropic a supply chain risk unqualified to work with military contractors and issuing other directives that could cripple a company locked in a race for AI supremacy against rivals such as ChatGPT maker OpenAI and Google. That decision prompted the Trump administration to remove the stigmatizing labels from Anthropic and take other steps clearing the way for government employees and contractors to continue using Claude and other chatbots, according to a court filing made in San Francisco earlier this week. 
The appeals court in Washington didn't see things the same way, even though it conceded the company would "likely suffer some degree of irreparable harm" if it's deemed a supply chain risk. But the appeals court didn't see sufficient reason to issue its own order revoking the Trump administration's actions, partly because "the precise amount of Anthropic's financial harm is not fully clear." Further evidence in the case is scheduled to be presented before the appeals court in a hearing scheduled for May 19. "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," Anthropic said in a statement. Matt Schruers, the CEO of the technology trade group Computer & Communications Industry Association, expressed worries that the conflicting court decisions issued so far in the standoff between Anthropic and the Trump administration will muddle the business landscape at a pivotal time. "The Pentagon's actions and the DC Circuit's ruling create substantial business uncertainty at a time when U.S. companies are competing with global counterparts to lead in AI," Schruers said.
[6]
Federal Court Denies Anthropic's Motion to Lift 'Supply Chain Risk' Label
The ruling was a setback for the artificial intelligence start-up in its battle with the Defense Department over the use of A.I. in warfare. A panel of federal judges on Wednesday denied a motion from Anthropic to stop the Department of Defense from labeling it a security risk, a setback for the artificial intelligence company in its battle with the Trump administration over how A.I. should be used in warfare. In a four-page ruling, the three-judge panel of the U.S. Court of Appeals for the District of Columbia Circuit said Anthropic had "not satisfied the stringent requirements" for a stay of the security risk label. "In our view, the equitable balance here cuts in favor of the government," the judges wrote, representing one of two federal courts where Anthropic sued the Pentagon. The other is in California. The ruling is not a final decision, and the case is set to continue. But it was an early win for the Trump administration, which has been fighting with Anthropic in the aftermath of a failed $200 million contract negotiation over the use of A.I. in classified systems. The Defense Department and Anthropic had disagreed over contract terms related to how the Pentagon could use the company's A.I., which led Defense Secretary Pete Hegseth to label the start-up a "supply chain risk." That designation is typically applied to foreign companies that pose national security concerns. Being labeled a supply chain risk effectively blacklists a company from doing business with U.S. government entities.
[7]
Anthropic suffers setback in Pentagon blacklisting fight
Driving the news: Anthropic sought the stay to fend off financial and reputational harm. The ruling comes after a federal judge in a San Francisco court last month granted the company a preliminary injunction keeping the administration from banning the use of Claude.

Why it matters: The split rulings from two different courts mean Anthropic still has battles left to fight over its supply-chain risk designation.

* The dispute centers on Pentagon restrictions on use of its Claude technology in classified settings.

What they're saying: "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson said.

* "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

* A source familiar with the case said the type of emergency relief Anthropic sought from the D.C. appeals court is rarely granted.

How it works: The San Francisco injunction means non-Pentagon agencies no longer have to terminate contracts with Anthropic.
[9]
Appeals court rejects Anthropic's bid to temporarily halt Pentagon designation
A federal appeals court has rejected Anthropic's bid to temporarily halt the Pentagon's labelling of the artificial intelligence company as a supply chain risk, finding the firm failed to meet the strict requirements for an emergency stay. The order, issued Wednesday evening by a three-judge panel of Washington, D.C.'s federal appeals court, blocked Anthropic's bid to pause the designation, but granted its request for expedition. Oral arguments are slated to begin May 19. The decision breaks from that of a federal judge in California, who temporarily blocked the supply chain risk designation in a ruling late last month. Anthropic filed suits in both California and D.C. contesting both the Pentagon's designation, which is typically reserved for foreign adversaries, and President Trump's directive for civilian agencies to stop using Anthropic's products. Anthropic demanded its technology not be used in fully autonomous lethal weapons or for the mass surveillance of Americans. The Pentagon has insisted it be allowed to use Claude for "all lawful uses." The appeals panel recognized Anthropic is likely to face "some degree" of irreparable harm without a stay, though the exact amount of Anthropic's financial harm is not "fully clear," the order stated. Anthropic's lawyers have argued the designation could cost the firm billions of dollars in revenue. Further, the judges suggested Anthropic actually benefitted financially given the surge in app store downloads amid its rift with the Pentagon. Meanwhile, the panel also recognized a stay would force the U.S. military to prolong relations with "an unwanted vendor of critical AI services" amid the ongoing military conflict with Iran. "In our view, the equitable balance here cuts in favor of the government," the panel said. "On one side is a relatively contained risk of financial harm to a single private company. 
On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." The panel consisted of two Trump appointees -- Judges Neomi Rao and Gregory Katsas -- along with Karen LeCraft Henderson, an appointee of former President George H.W. Bush. Acting Attorney General Todd Blanche lauded the D.C. Circuit order, calling it a "resounding victory for military readiness." "Our position has been clear from the start -- our military needs full access to Anthropic's models if its technology is integrated into our sensitive systems," Blanche wrote in a post on X. "Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company." A spokesperson for Anthropic said the company is "grateful the court recognized these issues need to be resolved quickly and confident the courts will ultimately agree that these supply chain designations were unlawful." "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," the spokesperson added.
A federal appeals court in Washington, DC refused to block the Pentagon's designation of Anthropic as a supply-chain risk, creating a legal conflict with a San Francisco judge's opposite ruling. The AI company, caught in a dispute over autonomous weapons and surveillance restrictions, faces potential billions in lost revenue as two separate lawsuits unfold under different supply-chain laws.
A US Court of Appeals in Washington, DC on Wednesday denied Anthropic's request to pause the Pentagon's national security supply-chain risk designation, delivering a setback to the AI company in its escalating legal dispute with the Trump administration [1][2]. The three-judge appellate panel ruled that Anthropic "has not satisfied the stringent requirements" to temporarily lose the label, even while acknowledging the company would "likely suffer some degree of irreparable harm" from the ongoing designation [1][5]. The appeals court ruling creates immediate business uncertainty for the Claude AI developer, which has claimed it could face billions of dollars in lost revenue from the blacklisting [2][3].
Source: AP
The Washington, DC appeals court ruling directly conflicts with a decision issued last month by US District Judge Rita Lin in San Francisco federal court, who found the Pentagon's decision appeared to constitute "classic illegal First Amendment retaliation" [4][5]. The San Francisco judge ordered the supply-chain risk label removed, and the Trump administration initially complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the federal government [1]. The conflicting preliminary judgments stem from the government sanctioning Anthropic under two different supply-chain laws with similar effects, with each court ruling on only one of them [1]. Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security [1].

The legal dispute erupted after Anthropic refused to allow the military to use Claude AI for US surveillance or autonomous weapons deployment due to AI safety and ethics concerns [3]. The company has argued it is being illegally punished for insisting that Claude lacks the accuracy needed for certain sensitive operations such as carrying out deadly drone strikes without human supervision [1]. Defense Secretary Pete Hegseth issued orders designating Anthropic under 10 USC § 3252 and 41 USC § 4713, requiring defense contractors to certify they don't use Claude in their work with the military [4]. The Justice Department argues that Anthropic's refusal to lift restrictions could cause uncertainty over how the Pentagon could use Claude and risk disabling military systems during operations [3].
Source: NYT
In explaining the appeals court ruling, the panel emphasized military priorities during active conflict. "On one side is a relatively contained risk of financial harm to a single private company," the court wrote. "On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict" [2][4]. The panel stated they did not want to risk "a substantial judicial imposition on military operations" or "lightly override" the military's judgments on national security [1]. The battle between Anthropic and the Trump administration is playing out as the Pentagon deploys AI in military contexts, including its war against Iran [1].
In its lawsuits, Anthropic claims the government violated its right to free speech under the First Amendment by retaliating against its views on AI safety [3]. The company also alleges it was not given a chance to dispute its designation, in violation of its Fifth Amendment right to due process [3]. The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company's proposed limits on how its technology could be used and its public criticism of those restrictions [1]. Several experts in government contracting and corporate rights have indicated Anthropic has a strong case against the government, though courts sometimes refuse to overrule the White House on matters related to national security [1].

The conflicting court decisions create substantial business uncertainty at a pivotal time when US companies compete globally to lead in AI development, according to Matt Schruers, CEO of the Computer & Communications Industry Association [5]. Some AI researchers have said the Pentagon's actions against Anthropic "chills professional debate" about the performance of AI systems [1]. The court in Washington, DC is scheduled to hear oral arguments on May 19, with final decisions in the company's two lawsuits potentially months away [1][2]. Anthropic spokesperson Danielle Cohen says the company remains confident "the courts will ultimately agree that these supply chain designations were unlawful" [1]. The parties have revealed minimal details about how the Department of Defense has used Claude or its progress transitioning staff to other AI tools from Google DeepMind, OpenAI, or others [1].
Source: Reuters
Summarized by Navi