[1]
The guardrail war: what America's AI purge means for the rest of us
When a government blacklists its own AI company for refusing to enable mass surveillance, Europe should be paying very close attention.

On the afternoon of 27 February 2026, Pete Hegseth picked up his phone and posted to X. The US Secretary of Defense had just designated Anthropic, a San Francisco AI company, a "supply chain risk to national security." The label, under 10 USC 3252, had previously been applied to Huawei and ZTE, Chinese firms accused of embedding surveillance backdoors into their hardware. Now it was being used against an American company founded by former OpenAI researchers, whose crime was this: it refused to let the US military use its AI models for mass domestic surveillance of American citizens, or for fully autonomous lethal weapons.

That afternoon, hours after Anthropic was blacklisted, OpenAI CEO Sam Altman announced his company had reached its own deal with the Pentagon. His models, he wrote, would be available for all lawful purposes. The same evening, OpenAI's most senior hardware executive, Caitlin Kalinowski, who had spent 16 months building the company's robotics programme, announced her resignation. "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization," she wrote, "are lines that deserved more deliberation than they got." The lines, as it turned out, had not been deliberated at all. They had been drawn in a contract dispute and erased in a Friday-afternoon press release.

This is where the story is usually told as a clash between two American companies and one American administration, a Washington power struggle with AI at its centre. That reading is not wrong. But it is incomplete. What happened between Anthropic, OpenAI, and the Pentagon over the first three months of 2026 is also a story about democratic governance, about who gets to set the terms on which the most consequential technologies of our era are deployed, and about what happens when a government decides that the answer to that question is: whoever complies first.

The sequence of events is worth setting out clearly, because the pace at which they unfolded has obscured their significance. Anthropic held a $200 million Pentagon contract, awarded in July 2025, for work on classified systems. The terms included two restrictions: Claude could not be used for mass domestic surveillance of American citizens, and it could not be used to power fully autonomous weapons with no human in the targeting loop. These were not novel demands. They aligned with longstanding prohibitions in international humanitarian law and US constitutional protections. They were, by any reasonable measure, the kind of safeguards a democratic government should want embedded in its AI systems.

The Pentagon disagreed. It wanted, in the words of its final ultimatum, "unrestricted access to AI for all lawful purposes." When Anthropic declined to remove its restrictions, Hegseth set a deadline: 5:01pm on 27 February. It passed without agreement. Trump, writing on Truth Social, called the company's leadership "leftwing nut jobs" and ordered every federal agency to immediately cease use of Anthropic's technology. A federal judge in San Francisco, reviewing the designation, was less colourful but more precise. Judge Rita Lin wrote in her March ruling that the supply chain risk designation is "usually reserved for foreign intelligence agencies and terrorists, not for American companies," and described the administration's actions as "classic First Amendment retaliation."
She issued a preliminary injunction blocking the ban. None of this stopped a federal appeals court from later denying Anthropic's stay request, concluding that "the equitable balance here cuts in favour of the government." As of this writing, Anthropic is barred from Pentagon contracts, permitted to work with other agencies, and fighting two parallel lawsuits while simultaneously recruiting enterprise partners, launching a $100 million partner programme, and testing its new model, Mythos, with Wall Street banks at the quiet encouragement of the Treasury Secretary and the Federal Reserve chair. The administration that blacklisted the company is also directing those banks to evaluate it for critical financial infrastructure. The contradiction is not bureaucratic confusion. It is policy.

The more uncomfortable part of this story is OpenAI's role in it. Altman has said his company shares Anthropic's core principles: no domestic mass surveillance, no autonomous weapons. The companies' stated red lines are, on paper, nearly identical. The difference is that OpenAI signed, and Anthropic did not.

What exactly is in OpenAI's Pentagon agreement, and how its provisions compare to the assurances Anthropic sought, has not been made public. Pentagon officials have said existing US law already prohibits the uses Anthropic was concerned about. Anthropic's lawyers, and a group of 37 researchers from OpenAI and Google DeepMind who filed an amicus brief supporting the lawsuit, clearly do not share that confidence.

What we can say with reasonable certainty is this: a government that wanted to remove enforceable safety restrictions from AI models used in classified military systems found a way to do so. One company held the line and was treated as an adversary. Another accommodated the government's position and was treated as a partner. The market signal this sends to every AI company negotiating a public sector contract, anywhere in the world, could not be clearer.

Sam Altman has acknowledged the deal was "definitely rushed." OpenAI's own employees pushed back. ChatGPT uninstalls reportedly surged 295% in the days following the announcement, while Claude climbed to the top of the US App Store. These responses suggest that users, at least, understood something significant had shifted. The question is whether policymakers outside the United States are drawing the same conclusion.

Europe has spent the better part of a decade building a regulatory framework for AI premised on a core democratic argument: that powerful technologies must be constrained by law, not merely by the good intentions of the companies that build them. The AI Act, which enters full enforcement in August 2026, encodes that argument in legislation. Prohibited uses, including real-time biometric surveillance in public spaces and social scoring, are not left to corporate discretion. They are banned.

What the Anthropic saga demonstrates is what happens in a jurisdiction where that argument has been rejected. In the United States, the Biden administration's AI safety executive order was revoked on Trump's first day. State-level AI legislation has been actively suppressed. And when a company tried to embed the principles of the EU AI Act into its own contractual terms, a government that had previously praised its technology as "exquisite" reached for a statute designed to neutralise foreign saboteurs.
The EU's "Digital Omnibus" package, currently under negotiation, proposes to delay and weaken parts of both the AI Act and GDPR in the name of cutting red tape and boosting competitiveness. It is being driven, at least in part, by the argument that European regulation puts the continent at a disadvantage against less constrained American and Chinese competitors. The Anthropic case offers a corrective to that framing. What the US has demonstrated is not a competitive advantage through deregulation. It has demonstrated what it looks like when a government uses procurement power to enforce the removal of safety limits that its own democratic principles would otherwise require. That is not a model Europe should envy. It is a warning, in my humble opinion. Federal agencies are, as of this week, quietly testing Anthropic's Mythos model despite the ban. Congressional staff are seeking briefings on its capabilities. The Commerce Department's Centre for AI Standards and Innovation is actively evaluating its cybersecurity potential. The prohibition is, in practice, already eroding, because the technology is too useful to ignore, even for the government that declared it a national security threat. That, too, is instructive. The AI guardrails Anthropic refused to remove were not protections the US government ultimately wanted to do without. They were protections it wanted to hold without being contractually bound by. The distinction matters. A safety principle written into a contract is enforceable. A safety principle stated in a press release is a communication strategy. In Brussels, as in Washington, the question is not whether AI will be governed. It is whether the governance will be written into law before or after the most consequential decisions have already been made.
[2]
A retired general's warning: America can't fight the AI arms race on tech it doesn't control | Fortune
The United States is entering a new phase of strategic competition -- one where artificial intelligence is no longer an emerging capability, but a decisive element of military power. In this unfolding AI arms race, speed matters. Capability matters. But above all, control matters.

That's why the recent standoff between Anthropic and the Pentagon should concern anyone focused on America's national security. At the center of the dispute is a simple but profound disagreement: who gets to decide how advanced AI systems are used in a military context. Anthropic, the developer of Claude and its super-powered model Mythos, sought to impose limits on how its technology could be deployed, drawing red lines around certain applications. The Pentagon, for its part, insisted that it must retain the ability to use AI tools for all lawful purposes in defense of the nation. When those positions proved irreconcilable, the relationship collapsed. Anthropic was ultimately designated a supply chain risk, and the Department of War was forced to look elsewhere for AI capabilities.

Since then, details about its model Mythos -- dubbed "too dangerous" for public release -- have emerged and add new, alarming concerns. Mythos is reportedly capable of autonomously identifying and weaponizing undiscovered cybersecurity vulnerabilities, a capability that, without appropriate guardrails, would mean open season for cybercriminals. The new tool is potentially so powerful that Anthropic has limited access to it.

This episode should serve as a wake-up call because it demonstrates how the current structure of America's AI ecosystem -- a black box, driven by closed systems that lack transparency -- is fundamentally misaligned with the requirements of national defense. Today, the Pentagon purchases access to AI capabilities, but it does not control them. The training, testing, and ongoing development of these models remain firmly in the hands of private companies that have their own governance frameworks, risk tolerances, and commercial incentives. That reality creates a dangerous dynamic: it gives a small number of unaccountable private firms effective veto power over how the United States can employ one of the most consequential technologies of our time.

That is not a sustainable model for a constitutional republic. Nor is it a viable foundation for military dominance. A system constrained by external approval processes, shifting corporate policies, or the risk of sudden disruption is a system that cannot move at the pace modern warfare demands. And in a strategic competition defined by iteration cycles measured in weeks -- not years -- those constraints do more than slow the United States down. They create openings.

China and its aligned partners, for example, are moving aggressively to deploy AI capabilities at scale, leveraging open-source models that can be adapted for a wide range of military and intelligence applications. Systems like DeepSeek are not constrained by the same corporate governance structures that shape American firms. They are designed to be modified, extended, and integrated across a broad ecosystem that includes not only China's military, but also a growing network of partner nations at odds with America.

That creates an asymmetric threat. While the United States debates the permissible uses of AI through contracts with private vendors, its competitors are building flexible, state-aligned systems that can be rapidly customized for operational needs.
If that gap persists, America risks finding itself at a significant military disadvantage.

The solution is not to abandon the private sector, which remains a source of extraordinary innovation and technical leadership. Nor is it to discard ethical considerations, which must remain central to how the United States approaches the use of force. But it does mean recognizing that the current model -- where the government rents access to closed, proprietary systems it cannot fully control -- is inadequate for the demands of strategic competition. Washington must begin investing in a different approach: the development of high-performing, secure, and adaptable open-source AI models that the United States government and its closest allies can control, audit, and deploy without external constraint.

None of this eliminates the need for careful guardrails. There are important and legitimate debates to be had about the role of AI in warfare, from autonomy and targeting to surveillance and escalation. But those debates should be led by elected officials and military leaders accountable to the American people, not dictated by the acceptable-use policies of private companies.

This strategic realignment could take several forms. It may involve government-led model development, partnerships with trusted research institutions, or the creation of open-weight models designed specifically for defense applications. It could include allied frameworks that ensure interoperability while preserving national control, as well as new procurement strategies that prioritize transparency and modifiability over convenience. Regardless of the path chosen, however, success will depend on getting the mechanism right.

The United States has long understood that it cannot outsource the foundations of its security. We build our own ships. We design our own weapons. We maintain command of the systems that underpin our military advantage. Artificial intelligence should be no different.

Building effective public-private partnerships that serve the national defense will require more than technical capability -- it will require trust, integrity, and sound process. That means establishing clear guardrails, aligning incentives, and ensuring that both government and industry share responsibility for the risks and outcomes of deploying these systems. Done right, such a framework can harness private-sector innovation while preserving the government's authority over how these capabilities are ultimately used.

The Anthropic episode risks being not an anomaly, but a preview. Unless we act now to ensure that America -- and its allies -- have access to AI systems they can truly control, it may also prove to be a warning we failed to heed.
The Pentagon designated Anthropic a supply chain risk on February 27, 2026, after the AI company refused to remove restrictions on mass domestic surveillance and autonomous lethal weapons. The same day, OpenAI announced a deal with the Pentagon for unrestricted AI access. The conflict reveals fundamental tensions among corporate AI safety commitments, democratic oversight, and military demands in the AI arms race.
On February 27, 2026, US Secretary of Defense Pete Hegseth designated Anthropic a "supply chain risk to national security" under 10 USC 3252—a label previously reserved for Chinese firms like Huawei and ZTE accused of embedding surveillance backdoors [1]. The San Francisco AI company's offense was refusing to let the US military use its Claude models for mass domestic surveillance of American citizens or for fully autonomous lethal weapons without human oversight [1].

Anthropic stood firm on restrictions embedded in its $200 million Pentagon contract awarded in July 2025. These safeguards aligned with international humanitarian law and US constitutional protections, representing the kind of guardrails a democratic government should want in military AI systems [1]. When the Pentagon demanded "unrestricted access to AI for all lawful purposes," Anthropic declined. Hegseth set a deadline of 5:01pm on February 27. When it passed without agreement, President Trump called the company's leadership "leftwing nut jobs" and ordered every federal agency to cease using Anthropic's technology [1].

Hours after the blacklisting, OpenAI CEO Sam Altman announced his company had reached a deal with the Pentagon, making his models available for all lawful purposes [1]. The timing raised questions about AI governance and who controls the deployment of advanced AI systems. OpenAI's most senior hardware executive, Caitlin Kalinowski, resigned that same evening after 16 months building the company's robotics programme. "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got," she wrote [1].
What exactly OpenAI's Pentagon agreement contains, and how its provisions compare to the assurances Anthropic sought, has not been made public. Altman has said OpenAI shares Anthropic's core principles against domestic mass surveillance and autonomous weapons, yet OpenAI signed while Anthropic did not [1]. Pentagon officials claim existing US law already prohibits the uses Anthropic was concerned about, though a group of 37 researchers from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's lawsuit [1].

The standoff exposes a fundamental misalignment between US government AI policy and national security requirements in the AI arms race. A retired general warned that the Pentagon currently purchases access to AI capabilities but does not control them—training, testing, and development remain with private tech companies that have their own governance frameworks and commercial incentives [2]. This gives a small number of unaccountable firms effective veto power over how the United States employs one of the most consequential technologies of our time.
China and aligned partners are moving aggressively to deploy military AI at scale, leveraging open-source models like DeepSeek that can be adapted without the corporate governance structures constraining American firms [2]. While the US debates permissible AI uses through contracts with private vendors, competitors build flexible, state-aligned systems that can be rapidly customized for operational needs. If this gap persists, America risks a significant military disadvantage in a competition where iteration cycles are measured in weeks, not years [2].

Details about Anthropic's Mythos model—dubbed "too dangerous" for public release—add concerning dimensions to the dispute. Mythos reportedly can autonomously identify and weaponize undiscovered cybersecurity vulnerabilities, which without appropriate guardrails could mean open season for cybercriminals [2]. The tool is so powerful that Anthropic has limited access to it, currently testing it with Wall Street banks at the quiet encouragement of the Treasury Secretary and the Federal Reserve chair [1].
A federal judge in San Francisco reviewing the designation noted the supply chain risk label is "usually reserved for foreign intelligence agencies and terrorists, not for American companies," describing the administration's actions as "classic First Amendment retaliation." Judge Rita Lin issued a preliminary injunction blocking the ban [1]. However, a federal appeals court later denied Anthropic's stay request, concluding that "the equitable balance here cuts in favour of the government." As of this writing, Anthropic is barred from Pentagon contracts but permitted to work with other agencies while fighting two parallel lawsuits [1].

The conflict raises questions about democratic governance and who sets the terms for deploying consequential technologies. Military experts argue that debates about AI in warfare—from autonomy and targeting to surveillance and escalation—should be led by elected officials and military leaders accountable to the American people, not dictated by the acceptable-use policies of private companies [2]. The current model, in which the government rents access to closed, proprietary systems it cannot fully audit or control, is inadequate for the demands of strategic competition.

Washington faces pressure to invest in high-performing, secure, and adaptable open-source models that the US government and its closest allies can control, audit, and deploy without external constraint, backed by procurement strategies that prioritize transparency and modifiability [2]. This strategic realignment could involve government-led model development, partnerships with trusted research institutions, or the creation of open-weight models designed specifically for defense applications. The contradiction is stark: the administration that blacklisted Anthropic is simultaneously directing banks to evaluate it for critical financial infrastructure—not bureaucratic confusion, but policy [1].