2 Sources
[1]
AI contract restrictions could threaten military missions, US official says
WASHINGTON, March 3 (Reuters) - A senior Pentagon official said on Tuesday that commercial AI contracts signed under the Biden administration contained sweeping operational restrictions that threatened to paralyze U.S. military missions in real time, including the ability to plan and execute combat operations. Emil Michael, under secretary of defense for research and engineering, described a moment of alarm when he reviewed the terms governing AI models already embedded in some of the military's most sensitive commands. He did not name the AI provider whose contracts he was reviewing. His comments came at the American Dynamism Summit in Washington, a gathering of technology companies keen on space and national security work. The summit occurred just days after a disagreement over how the Pentagon could use Anthropic's powerful and widely used AI tools, leading President Donald Trump to ban the startup from government business and label it a national security risk. "I had a 'holy, holy cow' moment," Michael said at the American Dynamism Summit in Washington. "There were things ... you couldn't plan an operation ... if it would potentially lead to kinetics" or explosions. He described dozens of restrictions baked into agreements covering commands responsible for air operations over Iran, China and South America. Michael said the contracts were structured in a way that, if an operator violated the terms of service, the model could theoretically "just stop in the middle of an operation." Anthropic's Claude had been the only AI model available to the Defense Department on its classified systems at the time Michael conducted his review. His concerns sharpened after a senior executive at an unnamed AI company raised questions about whether its software had been used in what Michael called one of the most successful military operations in recent memory. Anthropic's Claude was reported to have been used to help plan the U.S. 
government raid that captured former Venezuelan President Nicolas Maduro in January. "What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed," Michael said. The disclosures may help explain the dispute between Anthropic and the Department of Defense. Defense Secretary Pete Hegseth declared the company a "supply-chain risk" for its refusal to back down in negotiations over restrictions on autonomous weapons and mass surveillance. Hours later, rival OpenAI struck its own deal with the Pentagon. A statement by OpenAI CEO Sam Altman suggested that the Department had agreed to similar restrictions with OpenAI's models. Reporting by Mike Stone in Washington; Editing by Matthew Lewis
[2]
AI contract restrictions could threaten military missions, US official says
A senior Pentagon official revealed that commercial AI contracts signed under the Biden administration contain sweeping operational restrictions that could halt military missions in real time. The disclosure follows Trump's ban on Anthropic after disputes over AI usage terms, while rival OpenAI quickly secured its own Pentagon deal with similar restrictions.
Emil Michael, under secretary of defense for research and engineering at the Pentagon, disclosed on Tuesday that commercial AI contracts inherited from the Biden administration contain operational restrictions so severe they could threaten military missions and paralyze U.S. military operations in real time [1]. Speaking at the American Dynamism Summit in Washington, Michael described a "holy, holy cow" moment when reviewing terms governing AI models already embedded in some of the military's most sensitive commands [2]. The U.S. official explained that restrictive clauses in commercial AI contracts prevented operators from planning operations that could potentially lead to kinetics or explosions, with dozens of restrictions affecting commands responsible for air operations over Iran, China and South America [1].
Source: Reuters
The most alarming aspect of these commercial AI contracts involves their enforcement mechanisms. Michael revealed that if an operator violated the terms of service, the AI model could theoretically "just stop in the middle of an operation" [2]. At the time of Michael's review, Anthropic's Claude had been the only AI model available to the Defense Department on its classified systems [1]. The constraints threatened the ability to plan and execute combat operations, raising questions about whether AI tools could be reliably deployed in sensitive national security scenarios. Michael's concerns intensified after a senior executive at an unnamed AI company questioned whether its software had been used in what he called one of the most successful military operations in recent memory: Anthropic's Claude was reportedly used to help plan the U.S. government raid that captured former Venezuelan President Nicolas Maduro in January [1].

The revelations help explain the recent dispute between Anthropic and the Department of Defense that culminated just days before the American Dynamism Summit. President Donald Trump banned the startup from government business and labeled it a national security risk following disagreements over how the Pentagon could use Anthropic's AI tools [2]. Defense Secretary Pete Hegseth declared Anthropic a supply-chain risk for its refusal to back down in negotiations over restrictions on autonomous weapons and mass surveillance [1]. Michael made clear the Pentagon's position: "What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed" [2].

Source: Market Screener
Hours after Anthropic's ban, rival OpenAI struck its own deal with the Pentagon, though a statement by OpenAI CEO Sam Altman suggested that the Department had agreed to similar restrictions with OpenAI's AI models [1]. This development raises questions about how the Pentagon will balance operational flexibility with ethical AI deployment. The gathering of technology companies at the summit, focused on space and national security work, underscores the growing tension between commercial AI providers and military requirements. As AI models become increasingly integrated into classified systems and combat operations, the debate over who sets the rules, whether Congress, the Pentagon, or private companies, will likely intensify, with implications for how the U.S. military deploys emerging technologies in future conflicts.

Summarized by Navi