Former OpenAI Policy Lead Criticizes Company's Revised AI Safety Narrative

Miles Brundage, a former OpenAI policy researcher, accuses the company of rewriting its AI safety history, sparking debate over responsible AI development and deployment strategies.

OpenAI's Controversial AI Safety Document

OpenAI, a leading artificial intelligence research company, has found itself at the center of controversy following the release of a document outlining its philosophy on AI safety and alignment. The document, published earlier this week, has drawn sharp criticism from Miles Brundage, a former high-profile policy researcher at OpenAI, who accuses the company of "rewriting the history" of its deployment approach to potentially risky AI systems [1].

The Contentious GPT-2 Release

At the heart of the controversy is OpenAI's characterization of its approach to releasing GPT-2, a powerful language model unveiled in 2019. In its recent document, OpenAI suggests that the cautious release of GPT-2 was part of a "discontinuous" approach to AI development, one it claims to have since moved away from [1].

However, Brundage, who was involved in the GPT-2 release, strongly disagrees with this narrative. He argues that the incremental release of GPT-2 was "100% consistent" with OpenAI's current philosophy of iterative deployment [2]. Brundage maintains that the cautious approach taken with GPT-2 was necessary and responsible, given the information available at the time.

Concerns Over OpenAI's Current Stance

Brundage's criticism extends beyond the historical narrative to OpenAI's current approach to AI safety. He expresses concern that the company's recent document may be setting up a "burden of proof" under which safety concerns are dismissed as alarmist unless there is overwhelming evidence of imminent danger [3].

The former policy lead warns that this mentality could be "very dangerous" for advanced AI systems, potentially prioritizing rapid development and deployment over thorough safety considerations [1].

Competitive Pressures and Safety Trade-offs

OpenAI's shift in narrative comes amid intensifying competition in the AI field. The company faces pressure from rivals like DeepSeek, whose open-source R1 model has matched OpenAI's performance on key benchmarks [1]. This competitive landscape has led to concerns that OpenAI may be prioritizing rapid product releases over long-term safety considerations.

Industry-wide Implications

The controversy surrounding OpenAI's document highlights broader issues in the AI industry, including the balance between innovation and safety, transparency in AI development, and the responsible deployment of increasingly powerful AI models [3].

As AI technology continues to advance rapidly, the debate sparked by Brundage's criticism underscores the importance of a cautious and responsible approach to AI development and deployment. The episode is a reminder of the ongoing challenge the AI community faces: ensuring that progress in artificial intelligence is not achieved at the expense of safety and ethics.
