Curated by THEOUTPOST
On Fri, 7 Mar, 8:01 AM UTC
3 Sources
[1]
OpenAI's ex-policy lead criticizes the company for 'rewriting' its AI safety history | TechCrunch
A high-profile ex-OpenAI policy researcher, Miles Brundage, took to social media on Wednesday to criticize OpenAI for "rewriting the history" of its deployment approach to potentially risky AI systems.

Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a "continuous path" that requires "iteratively deploying and learning" from AI technologies.

"In a discontinuous world [...] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2," OpenAI wrote. "We now view the first AGI as just one point along a series of systems of increasing usefulness [...] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system."

But Brundage claims that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was "100% consistent" with OpenAI's iterative deployment strategy today.

"OpenAI's release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI's current philosophy of iterative deployment," Brundage wrote in a post on X. "The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution."

Brundage, who joined OpenAI as a research scientist in 2018, was the company's head of policy research for several years. On OpenAI's "AGI readiness" team, he had a particular focus on the responsible deployment of language generation systems such as OpenAI's AI chatbot platform ChatGPT.

GPT-2, which OpenAI announced in 2019, was a progenitor of the AI systems powering ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text on a level sometimes indistinguishable from that of humans. While GPT-2 and its outputs may look basic today, they were cutting-edge at the time.

Citing the risk of malicious use, OpenAI initially refused to release GPT-2's source code, opting instead to give selected news outlets limited access to a demo. The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there wasn't any evidence the model could be abused in the ways OpenAI described. AI-focused publication The Gradient went so far as to publish an open letter requesting that OpenAI release the model, arguing it was too technologically important to hold back.

OpenAI eventually did release a partial version of GPT-2 six months after the model's unveiling, followed by the full system several months after that. Brundage thinks this was the right approach.

"What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous? None of it," he said in a post on X. "What's the evidence this caution was 'disproportionate' ex ante? Ex post, it prob. would have been OK, but that doesn't mean it was responsible to YOLO it [sic] given info at the time."

Brundage fears that OpenAI's aim with the document is to set up a burden of proof where "concerns are alarmist" and "you need overwhelming evidence of imminent dangers to act on them." This, he argues, is a "very dangerous" mentality for advanced AI systems.
"If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lop-sided way," Brundage added. OpenAI has historically been accused of prioritizing "shiny products" at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a string of AI safety and policy researchers departed the company for rivals. Competitive pressures have only ramped up. Chinese AI lab DeepSeek captured the world's attention with its openly available R1 model, which matched OpenAI's o1 "reasoning" model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek has lessened OpenAI's technological lead, and said that OpenAI would "pull up some releases" to better compete. There's a lot of money on the line. OpenAI loses billions annually, and the company has reportedly projected that its annual losses could triple to $14 billion by 2026. A faster product release cycle could benefit OpenAI's bottom line near-term, but possibly at the expense of safety long-term. Experts like Brundage question whether the trade-off is worth it.
[2]
Former OpenAI Policy Lead Accuses The Company Of Altering Its AI Safety Narrative
OpenAI has been aggressively pursuing its ambitious vision, especially as DeepSeek's growing popularity threatens its position as one of the leading AI companies. The company has been pushing ahead with AGI development and has even said that super AI agents are the next big thing. While OpenAI has, since its inception, been balancing safety and competition, its recent approach to AI safety has not been well received, with a former key member criticizing the company's direction and questioning whether it is revising its narrative.

OpenAI recently shared with the community and its users its approach to careful iterative deployment of its AI models, releasing them step by step, and cited GPT-2's cautious rollout as an example. The example, however, invited criticism from former OpenAI policy researcher Miles Brundage, who called out the company and accused it of rewriting the narrative on AI safety history.

The document that OpenAI published highlights its approach to AI safety and the deployment of its models. It emphasizes how the company takes a cautious approach with current systems and cites GPT-2 as part of that cautious release approach. The company expressed its belief in learning from the tools available today to ensure the safety of future systems. The document states:

"In a discontinuous world [...] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2. We now view the first AGI as just one point along a series of systems of increasing usefulness [...] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system."

Miles Brundage, who was the company's head of policy research for several years, insists that the GPT-2 release also followed an incremental approach, with OpenAI sharing insights at each stage, and that security experts at the time acknowledged and appreciated the company's cautious handling of the model. He argues that the gradual release of GPT-2 aligned with OpenAI's current iterative deployment strategy, and he firmly believes that the past caution was not excessive but necessary and responsible.

Brundage also expressed concern over OpenAI's claim that AGI will be developed in gradual steps rather than arriving as a sudden breakthrough. He finds it troubling that the company is misrepresenting the history of GPT-2's release and rewriting its safety history. He further worries that OpenAI released the document to set a standard that frames safety concerns as overreactions, a stance that could pose serious risks as AI systems grow more advanced.

This is not the first time that OpenAI has been criticized for prioritizing progress and profits over long-term safety. Experts like Brundage question whether the trade-off is justified and worry about what the future may hold if AI safety is not handled with caution.
[3]
OpenAI showing a 'very dangerous mentality' regarding safety, expert warns
An AI expert has accused OpenAI of rewriting its history and being overly dismissive of safety concerns.

Former OpenAI policy researcher Miles Brundage criticized the company's recent safety and alignment document published this week. The document describes OpenAI as striving for artificial general intelligence (AGI) in many small steps, rather than making "one giant leap," saying that the process of iterative deployment will allow it to catch safety issues and examine the potential for misuse of AI at each stage.

Among the many criticisms of AI technology like ChatGPT, experts are concerned that chatbots will give inaccurate information regarding health and safety (like the infamous issue with Google's AI search feature which instructed people to eat rocks) and that they could be used for political manipulation, misinformation, and scams. OpenAI in particular has attracted criticism for lack of transparency in how it develops its AI models, which can contain sensitive personal data.

The release of the OpenAI document this week seems to be a response to these concerns, and the document implies that the development of the previous GPT-2 model was "discontinuous" and that it was not initially released due to "concerns about malicious applications," but now the company will be moving toward a principle of iterative development instead.

But Brundage contends that the document is altering the narrative and is not an accurate depiction of the history of AI development at OpenAI. "OpenAI's release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI's current philosophy of iterative deployment," Brundage wrote on X. "The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution."

Brundage also criticized the company's apparent approach to risk based on this document, writing that, "It feels as if there is a burden of proof being set up in this section where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them - otherwise, just keep shipping. That is a very dangerous mentality for advanced AI systems."

This comes at a time when OpenAI is under increasing scrutiny with accusations that it prioritizes "shiny products" over safety.
Miles Brundage, ex-OpenAI policy researcher, accuses the company of rewriting its AI safety history, sparking debate on responsible AI development and deployment strategies.
OpenAI, a leading artificial intelligence research company, has found itself at the center of controversy following the release of a document outlining its philosophy on AI safety and alignment. The document, published earlier this week, has drawn sharp criticism from Miles Brundage, a former high-profile policy researcher at OpenAI, who accuses the company of "rewriting the history" of its deployment approach to potentially risky AI systems [1].
At the heart of the controversy is OpenAI's characterization of its approach to releasing GPT-2, a powerful language model unveiled in 2019. In its recent document, OpenAI suggests that the cautious release of GPT-2 was part of a "discontinuous" approach to AI development, which it claims to have moved away from [1].
However, Brundage, who was involved in the GPT-2 release, strongly disagrees with this narrative. He argues that the incremental release of GPT-2 was "100% consistent" with OpenAI's current philosophy of iterative deployment [2]. Brundage maintains that the cautious approach taken with GPT-2 was necessary and responsible, given the information available at the time.
Brundage's criticism extends beyond the historical narrative to OpenAI's current approach to AI safety. He expresses concern that the company's recent document may be setting up a "burden of proof" where safety concerns are dismissed as alarmist unless there is overwhelming evidence of imminent danger [3].
The former policy lead warns that this mentality could be "very dangerous" for advanced AI systems, potentially prioritizing rapid development and deployment over thorough safety considerations [1].
OpenAI's shift in narrative comes amid intensifying competition in the AI field. The company faces pressure from rivals like DeepSeek, whose openly available R1 model has matched OpenAI's o1 model on key benchmarks [1]. This competitive landscape has led to concerns that OpenAI may be prioritizing rapid product releases over long-term safety considerations.
The controversy surrounding OpenAI's document highlights broader issues in the AI industry, including the balance between innovation and safety, transparency in AI development, and the responsible deployment of increasingly powerful AI models [3].
As AI technology continues to advance rapidly, the debate sparked by Brundage's criticism underscores the critical importance of maintaining a cautious and responsible approach to AI development and deployment. The incident serves as a reminder of the ongoing challenges faced by the AI community in ensuring that progress in artificial intelligence is achieved without compromising on safety and ethical considerations.