Trump administration reverses course on AI oversight after Anthropic's Claude Mythos concerns

Reviewed by Nidhi Govil


The Trump administration has dramatically shifted its stance on AI regulation, signing agreements with Google DeepMind, Microsoft, and xAI for government safety checks on frontier AI models. This reversal comes after initially dismissing Biden-era policies as overregulation and even removing 'safety' from the US AI Safety Institute's name. The pivot follows Anthropic's decision to withhold its Claude Mythos model over cybersecurity risks.

Trump Administration Embraces Government Safety Checks on AI Models

The Trump administration has executed a sharp reversal on AI policy, signing agreements with Google DeepMind, Microsoft, and xAI to conduct government safety checks on AI models before and after their release [1]. This marks a dramatic shift from the administration's earlier position, which dismissed Biden-era voluntary safety checks as overregulation that would stifle innovation. President Trump had previously rebranded the US AI Safety Institute as the Center for AI Standards and Innovation (CAISI), pointedly removing "safety" from the name [1].

Source: Fortune

The policy reversal came after Anthropic announced it would not release its latest Claude Mythos model, citing concerns that bad actors could exploit its advanced cybersecurity capabilities [1]. White House National Economic Council Director Kevin Hassett indicated that Trump may soon issue an executive order for AI oversight mandating government testing of advanced AI systems prior to release.

CAISI Completes Over 40 Evaluations of Frontier AI Systems

CAISI has already completed approximately 40 evaluations, including assessments of frontier models that have not yet been released [1]. During these tests, CAISI frequently gains access to models with "reduced or removed safeguards," allowing evaluators to more thoroughly examine AI national security concerns and capabilities [1]. CAISI Director Chris Fall emphasized that "independent, rigorous measurement science is essential to understanding frontier AI and its national security implications" [5].

The agreements with frontier AI developers enable pre-release evaluation of frontier AI models as well as post-deployment assessment and collaborative research [5]. A group of interagency experts has formed a task force focused on AI national security concerns to ensure evaluators understand emerging threats across government [1]. Tom Lue, Google DeepMind's vice president of frontier AI global affairs, expressed confidence in CAISI's testing plans, while Microsoft credited the expertise "uniquely held by institutions like CAISI" for conducting such evaluations [1].

Source: CXOToday

Critics Question Standards and Expertise for AI Model Vetting

Despite the shift in AI policy, critics have raised concerns about CAISI's capacity to effectively evaluate frontier AI models. Devin Lynch, a former director for cyber policy at the White House Office of the National Cyber Director, noted that "capability assessments are only as good as the threat models behind them" and emphasized that CAISI needs to "define, and publish, what it's testing for, not just who it's testing with" [1].

Sarah Kreps, director of the Tech Policy Institute at Cornell University, warned that "the definition of 'safe' is contested" and that without clear standards, "the process can be politicized" [1]. Critics have also suggested that CAISI may lack sufficient funding or expertise to properly assess advanced AI systems, and that seeking voluntary commitments from AI firms may not create the transparency needed about frontier AI risks [1].

Market Competition Drives Shift Away from AI Safety Idealism

The policy reversal occurs against a backdrop of intensifying market competition among AI companies. OpenAI and Anthropic were originally founded on principles prioritizing AI safety and the public good, but those ideals are increasingly giving way to an arms race for market share [2]. When the Pentagon blacklisted Anthropic because it wanted to restrict how its AI could be used, including for mass surveillance and fully autonomous weapons, rivals quickly agreed to "all lawful use" terms that Anthropic had rejected [2].

The Elon Musk-OpenAI trial has further exposed tensions around control of AI development. OpenAI president Greg Brockman acknowledged that while he helped launch OpenAI as a nonprofit to "benefit humanity as a whole," his stake in OpenAI's for-profit arm may now be worth more than $20 billion, potentially closer to $30 billion [2]. This transformation from nonprofit research lab to commercial powerhouse illustrates the broader industry shift away from the do-good idealism AI founders once championed [2].

Source: Fortune

Government and Industry Collaborations Face Uncertain Future

The administration's consideration of an executive order for AI oversight represents what some experts call "a 180 for the Trump administration, that has very explicitly been anti-any sort of regulation". Hassett compared the proposed oversight process to FDA drug approval, stating the goal is ensuring "U.S. AI can be the leader in AI and be safe at the same time".

The current debate carries strong echoes of Biden's November 2023 AI Executive Order, which created the original US AI Safety Institute and invoked the Defense Production Act to require companies training the largest AI models to share safety testing results with the government. The administration that once criticized Biden's AI oversight efforts is now considering adopting broadly similar policies, though framed less around existential dangers and more around immediate national security and cybersecurity threats.
