Sam Altman urges Anthropic and Pentagon to resolve AI governance standoff over control

Reviewed by Nidhi Govil


OpenAI CEO Sam Altman calls for de-escalation between Anthropic and the U.S. government as their dispute over AI safety standards and military applications intensifies. The conflict began when Anthropic refused to remove safeguards against autonomous weapons and mass surveillance, leading to a Pentagon ban labeling the company a supply chain risk. Altman argues governments must hold power over AI and national security decisions.

Sam Altman Calls for De-escalation in AI Governance Battle

OpenAI CEO Sam Altman has urged Anthropic and the U.S. government to "find a way to work together" as their dispute over AI governance escalates into a broader confrontation about who controls powerful AI systems [1]. Speaking in an interview with Laurie Segall, Altman called for both sides to "stop the escalation" and pursue collaboration, reflecting growing concerns that the AI industry cannot simultaneously claim geopolitical significance while refusing government oversight [1].

Source: TechRadar

The conflict centers on Anthropic's refusal to remove safeguards from its Claude AI model that prevent its use in fully autonomous weapons or mass domestic surveillance applications. When negotiations with the U.S. Department of Defense broke down, the Pentagon designated Anthropic a supply chain risk, effectively barring federal agencies from using the company's technology [1]. President Donald Trump later expanded the restriction through an executive directive banning all federal agencies from doing business with Anthropic [2].

Pentagon Ban Raises Questions About Corporate Responsibility

A federal judge has since temporarily blocked the Pentagon ban through a preliminary injunction, but the implications continue to reverberate across the AI industry [2]. Gartner Inc. noted in a late-March report that the episode highlights how deeply embedded AI models have become in software systems and how vulnerable those systems are to policy shocks. "Anthropic's exclusion underscores how quickly embedded model dependencies can convert into structural technical debt," the firm wrote, warning that even minor changes in model behavior can require broad functional revalidation [2].

The standoff exposes fundamental questions about corporate responsibility in AI development. David Linthicum, a cloud and AI subject matter expert, argued that the Pentagon's framing of the dispute as a supply chain risk is overstated. "If a company says it does not want its AI used for certain military or domestic surveillance purposes, that is a policy and governance issue," he said [2]. Carlos Montemayor, a philosophy professor at San Francisco State University, went further, suggesting the government is "punishing Anthropic for not following orders" [2].

Governments Must Control AI Systems, Altman Argues

Altman's position stands out among AI leaders for its emphasis on government authority over AI systems. "One of the most important questions the world will have to answer in the next year is, are AI companies or are governments more powerful? And I think it's very important that the governments are more powerful," he stated [1]. He argued that decisions about national security and geopolitics should be made through democratically elected processes rather than by the CEOs of AI labs.

The OpenAI chief acknowledged the complexity of the situation, recognizing that public trust in government has eroded. "I think we have to work with government, but the intensity of the current mood of mistrust, I was miscalibrated on and I understand something there now," he said [1]. Despite this, Altman maintained he still mostly trusts the system of checks and balances, even while accepting that many people "really don't trust the government to follow the law."

Source: SiliconANGLE

Debate Over Self-Governance and Military Applications Intensifies

The dispute raises critical questions about whether private companies should define ethical boundaries for technologies with broad societal implications. Anthropic CEO Dario Amodei has defended the company's restrictions, noting that "Frontier AI systems are simply not reliable enough to power fully autonomous weapons" [2]. This technical constraint, combined with ethical concerns, underpins the company's stance on military applications.

Valence Howden, an advisory fellow at Info-Tech Research Group Inc., supported Anthropic's approach, arguing that organizations "have a responsibility to define the ethical development boundaries and use cases of their technologies," particularly as AI systems take on more autonomous roles [2]. However, Montemayor warned against corporate self-governance, calling it "unacceptable and dangerous" given the scale and impact of AI systems. He advocated instead for international regulation grounded in human rights principles [2].

Altman highlighted the fundamental contradiction in the AI industry's position: companies cannot simultaneously claim their technology represents the most powerful force in human history while refusing to share it with democratically elected governments. "I don't think it works for our industry to say, Hey, this is the most powerful technology humanity has ever built. It is going to be the high order bit in geopolitics... And we are not giving it to you," he said [1]. The stakes continue to rise as AI safety standards, mass surveillance concerns, and questions about who controls these transformative technologies remain unresolved.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited