Curated by THEOUTPOST
On Thu, 5 Dec, 4:02 PM UTC
3 Sources
[1]
Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI's safety researchers keep leaving
A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. Those resigning often cite shifts in company culture and a lack of investment in AI safety as their reasons for leaving. To put it another way, though the ship may not be taking on water, the safety team are departing in their own little dinghy, and that is likely cause for some concern.

The most recent departure is Rosie Campbell, who previously led the Policy Frontiers team. In a post on her personal Substack (via TweakTown), Campbell shared the final message she sent to her colleagues on Slack, writing that though she has "always been strongly driven by the mission of ensuring safe and beneficial [Artificial General Intelligence]," she now believes that she "can pursue this more effectively externally." Campbell highlights "the dissolution of the AGI Readiness team" and the departure of Miles Brundage, another AI safety researcher, as specific factors that informed her decision to leave. Campbell and Brundage had previously worked together at OpenAI on matters of "AI governance, frontier policy issues, and AGI readiness."

Brundage himself also shared a few of his reasons for parting ways with OpenAI in a post to his Substack back in October. He writes, "I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." Having previously served as a Senior Advisor for AGI Readiness, he adds, "I think I can be more effective externally."

This comes mere months after Jan Leike's resignation as co-lead of OpenAI's Superalignment team. This team was tasked with tackling the problem of ensuring that AI systems potentially more intelligent than humans still act in accordance with human values -- and they were expected to solve this problem within the span of four years. Talk about a deadline.

While Miles Brundage has described plans to be one of the "industry-independent voices in the policy conversation," Leike is now co-lead of the Alignment Science team at AI rival Anthropic, a startup that has recently received $4 billion of financial backing from Amazon.

At the time of his departure from OpenAI, Leike took to X to share his thoughts on the state of the company. His comments are direct, to say the least. "Building smarter-than-human machines is an inherently dangerous endeavor," he wrote, before criticising the company directly: "OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products." He goes on to plead, "OpenAI must become a safety-first AGI company."

The company's charter details a desire to act "in the best interests of humanity" towards developing "safe and beneficial AGI." However, OpenAI has grown significantly since its founding in late 2015, and recent corporate moves suggest its priorities may be shifting. For a start, news broke back in September that the company would be restructuring away from its not-for-profit roots. For another thing, multiple major Canadian media companies are in the process of suing OpenAI for feeding news articles into its Large Language Models. Generally speaking, it's hard to see how plagiarism at that scale could be for the good of humanity, and that's all without getting into the far-reaching environmental implications of AI more broadly.
On a similar note, Future PLC, our overseers at PC Gamer, have today announced a 'strategic partnership' with OpenAI which theoretically aims to bring content from Future's brands to ChatGPT, as opposed to it simply being scraped without the company's consent. However, the wording of the announcement is vague and full details of the partnership have not yet been published, so we still don't know exactly how it's going to roll out. With regards to the continuing development of AI and Large Language Models, I like to think significant course correction is still possible -- but you can also understand why I would much rather abandon the good ship AI altogether.
[2]
OpenAI safety researcher quits amid safety concerns about a human-level AI
TL;DR: An OpenAI safety researcher announced her resignation on Substack, stating she believes she can more effectively implement humanity-protecting AI policies outside the company.

An OpenAI safety researcher has shared a message on her Substack saying she is quitting her position at the company because she believes her goal of building humanity-protecting policies into the development of AI can be better achieved externally.

OpenAI has seen a number of pivotal staff members leave the company recently, and now another has been added to the list. Rosie Campbell joined OpenAI in 2021 with the goal of implementing safety policies for AI development, and now, according to a Substack post, the AI safety researcher is departing the company, citing several internal changes, including shifts in workplace culture and in her ability to carry out what she believes is the most fundamental part of her job: AI safety.

Campbell wrote in the Substack post that she was a member of OpenAI's Policy Research team, where she worked closely with Miles Brundage, a senior staffer on OpenAI's AGI Readiness team, which was dedicated to making sure the world is prepared for Artificial General Intelligence (AGI) when it's achieved. Notably, Brundage left OpenAI in October and published a letter on Substack citing concerns with OpenAI's internal policies regarding AGI safety and writing that there are "gaps" in the company's readiness policy.

Campbell's departure announcement was much vaguer, with the AI safety researcher writing, "While change is inevitable with growth, I've been unsettled by some of the shifts over the last ~year and the loss of so many people who shaped our culture."

"I've always been strongly driven by the mission of ensuring safe and beneficial AGI, and after Miles's departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally," wrote Campbell.

While Campbell doesn't directly point to any problems with OpenAI's AI safety policies, her connection to Brundage and her conclusion that AGI readiness and AI safety work can be carried out better externally make it clear those policies fall short of what Campbell, and the other former OpenAI employees voicing similar concerns, consider satisfactory.
[3]
AI Safety Researcher Quits OpenAI, Saying Its Trajectory Alarms Her
Yet another OpenAI researcher has left the company amid concerns about its safety practices and readiness for potentially human-level AI.

In a post on her personal Substack, OpenAI safety researcher Rosie Campbell shared a message she posted on the company Slack days prior announcing her resignation. As she noted in the message, her decision was spurred by the exit of the company's former artificial general intelligence (AGI) czar Miles Brundage, a close colleague whose resignation also led to OpenAI dissolving its AGI Readiness team entirely.

"After almost three and a half years here, I am leaving OpenAI," Campbell's message reads. "I've always been strongly driven by the mission of ensuring safe and beneficial AGI, and after [Brundage's] departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally."

Though she didn't get into it too much, the former OpenAI-er said that she decided to leave because she was troubled by changes at the company over the past year or so. It's unclear exactly which changes she's referencing, but that timeframe matches the failed coup against CEO and cofounder Sam Altman late last November, which was reportedly undertaken by board members concerned about the firm's lack of focus on safety under his leadership. Beyond Altman's reinstatement after the ouster attempt, there have been several other noteworthy OpenAI resignations over the past year as the firm grapples with its increasing profile and worth, not to mention its increasingly commercial focus.

"While change is inevitable with growth, I've been unsettled by some of the shifts over the last [roughly] year, and the loss of so many people who shaped our culture," the researcher wrote. "I sincerely hope that what made this place so special to me can be strengthened rather than diminished."

Campbell went on to add that she hopes her former colleagues remember that the firm's "mission is not simply to 'build AGI,'" but also to keep working to make sure it "benefits humanity." She also wrote that she hopes her former colleagues will "take seriously the prospect that our current approach to safety might not be sufficient for the vastly more powerful systems we think could arrive this decade."

It's a salient warning made all the more urgent by increasing concerns that AGI will fundamentally change or even harm humankind -- and one, unfortunately, that's been echoed by other people on their way out the door of OpenAI.
Several senior AI safety researchers have left OpenAI, citing shifts in company culture and concerns about the prioritization of AI safety in the development of advanced AI systems.
OpenAI, the organization behind ChatGPT, has experienced a significant exodus of senior AI safety researchers in recent months. These departures have raised concerns about the company's commitment to AI safety and its readiness for potential human-level artificial intelligence [1][2][3].
Rosie Campbell, who led the Policy Frontiers team, is the latest to leave OpenAI. In her farewell message, Campbell expressed that she could more effectively pursue the mission of ensuring safe and beneficial Artificial General Intelligence (AGI) externally [1][3]. She cited the dissolution of the AGI Readiness team and the departure of colleague Miles Brundage as factors influencing her decision [1].
Miles Brundage, a former Senior Advisor for AGI Readiness, left OpenAI in October 2024. He emphasized the need for a concerted effort to make AI safe and beneficial, stating that he could be more effective working outside the company [1].
Jan Leike, former co-lead of OpenAI's Superalignment team, resigned in May 2024. The Superalignment team was tasked with ensuring that superintelligent AI systems would act in accordance with human values [1].
The departing researchers have expressed several concerns about OpenAI's direction. Jan Leike was particularly critical, stating that "safety culture and processes have taken a backseat to shiny products" at OpenAI [1].
These departures come amid significant changes at OpenAI, including its reported restructuring away from its non-profit roots and copyright lawsuits from major Canadian media companies [1].
The exodus of safety researchers from OpenAI raises important questions about the future of AI development. As the AI industry continues to evolve rapidly, the concerns raised by these departing researchers highlight the ongoing debate over how to ensure the safe and beneficial development of increasingly powerful AI systems.
Reference
Lilian Weng, OpenAI's VP of Research and Safety, announces her departure after seven years, adding to a growing list of safety team exits at the AI company. This move raises questions about OpenAI's commitment to AI safety versus commercial product development.
2 Sources
OpenAI experiences a significant brain drain as key technical leaders depart, raising questions about the company's future direction and ability to maintain its competitive edge in AI research and development.
3 Sources
OpenAI has disbanded its AGI Readiness team following the resignation of senior advisor Miles Brundage, who warns that neither the company nor the world is prepared for advanced AI.
15 Sources
Miles Brundage, ex-OpenAI policy researcher, accuses the company of rewriting its AI safety history, sparking debate on responsible AI development and deployment strategies.
3 Sources
OpenAI, the company behind ChatGPT, faces a significant leadership shakeup as several top executives, including CTO Mira Murati, resign. This comes as the company considers transitioning to a for-profit model and seeks new funding.
7 Sources