Anthropic drops hallmark safety pledge as AI race intensifies and competitive pressure mounts

Reviewed by Nidhi Govil


Anthropic has abandoned its signature commitment to halt AI development when safety can't be guaranteed, marking a dramatic shift in the AI industry. The company behind Claude will now consider competitors' actions before pausing model training, replacing its strict safety pledge with transparency measures. The move underscores the growing tension between caution and competition as regulatory momentum stalls.

Anthropic Rewrites AI Guardrails in Major Policy Shift

Anthropic has formally abandoned the central promise that once defined its approach to AI safety, removing its pledge not to train or release frontier AI models without guaranteed safety mitigations in advance [1][3]. The company behind Claude confirmed the decision in its updated Responsible Scaling Policy version 3.0, marking one of the most dramatic policy shifts in the AI industry yet as startups once focused on helping humanity turn their attention to profit and success [1].

Source: Bloomberg

Under the revised AI safety policy, Anthropic will no longer automatically pause model development if it could be considered dangerous. Instead, the company will consider its competitors' actions and whether they release models with similar capabilities before making such decisions [2]. Previously, Anthropic committed to safeguards that would reduce its models' absolute risk, regardless of whether other AI developers did the same.

Self-Imposed Limitations Give Way to Competitive Pressures

In its 2023 Responsible Scaling Policy, the company said it would delay AI development that might be dangerous [1]. In a Tuesday blog post, Anthropic said it was updating its rules to say it would no longer do so if it believes it lacks a significant lead over a competitor. "We felt that it wouldn't actually help anyone for us to stop training AI models," Anthropic's chief science officer, Jared Kaplan, told TIME. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments if competitors are blazing ahead" [4].

Source: PYMNTS

When Anthropic launched years ago, the company wanted an industry-wide "race to the top" in artificial intelligence, instead of a race to the bottom in pursuit of customers and market dominance that would inadvertently lead to catastrophic safety risks [2]. The company adopted safety principles and policies that it hoped its competitors would also implement. In some instances, companies including Google and OpenAI did, according to Anthropic. Still, that strategy didn't "pan out" as the company had hoped [2].

Voluntary AI Safety Commitments Face Reality Check

"The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level," the company wrote [2]. The accelerating AI race and stalled federal regulation have left companies to choose between voluntary restraint and competitive survival. The broader question facing the AI industry is whether voluntary norms can meaningfully shape the trajectory of transformative technologies [3].

Under the new Responsible Scaling Policy, Anthropic pledges to publish detailed "Frontier Safety Roadmaps" outlining its planned safety milestones, along with regular "Risk Reports" that assess model capabilities and potential threats [3]. The company also says it will match or exceed competitors' safety efforts and delay development if it both believes it leads the field and identifies significant catastrophic risk. What it will no longer do is promise to halt training until all mitigations are guaranteed in advance.

Broader Industry Trend Emerges

Anthropic is not alone in revising its safety language. OpenAI also changed its mission statement in its 2024 IRS filing, removing the word "safely." The company's earlier statement pledged to build general-purpose AI that "safely benefits humanity, unconstrained by a need to generate financial return." The updated version now states its goal is "to ensure that artificial general intelligence benefits all of humanity" [4].

Source: Decrypt

"The new policy still includes some guardrails, but the core promise, that Anthropic would not release models unless it could guarantee adequate safety mitigations in advance, is gone," said Nik Kairinos, CEO and co-founder of RAIDS AI [3]. "This is precisely why continuous, independent monitoring of AI systems matters. Voluntary commitments can be rewritten. Regulation, backed by real-time oversight, cannot."

Financial Stakes and Defense Department Tensions

Anthropic's policy shifts come as the company raised $30 billion at a valuation of about $380 billion earlier this month [4]. At the same time, OpenAI is finalizing a funding round backed by Amazon, Microsoft, and Nvidia that could reach $100 billion. The company has also been under intense pressure this week from the U.S. Defense Department, which is pressing Anthropic to allow the military to use its AI tools for any purpose, including mass surveillance or the deployment of autonomous weapons without human oversight [2].

Anthropic has participated in an AI pilot program for military-related imagery analysis, along with Google, OpenAI, and xAI [2]. Though Claude has been the only chatbot working on the government's classified systems, a Pentagon official said Anthropic could be replaced by another firm. Hamza Chaudhry, AI and National Security Lead at the Future of Life Institute, said the policy change reflects shifting political dynamics rather than a bid for Pentagon business. "Anthropic is now saying, 'Look, we can't keep saying safety, we can't unconditionally pause, and we're going to push for much lighter-touch regulation,'" he said [4].

The tension between caution and competition now defines AI development as self-regulation gives way to market forces and geopolitical urgency. Whether transparency measures can substitute for binding AI safety commitments remains an open question as the industry watches what competitors do next.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited