2 Sources
[1]
OpenAI Safety VP Reportedly Fired for Sexual Discrimination Against Her Male Colleague
OpenAI has reportedly fired safety executive Ryan Beiermeister, whose title at the company was VP of product policy. According to the Wall Street Journal, which broke the story, Beiermeister was told her firing was related to sexual discrimination against a male colleague. "The allegation that I discriminated against anyone is absolutely false," Beiermeister told the Journal in a statement.

According to anonymous sources who spoke to the Journal, the firing, which apparently occurred in early January, came after Beiermeister had stated her opposition to the ChatGPT adult mode (or erotica mode?) Sam Altman announced in October of last year. Beiermeister was also, according to the Journal, the creator of an internal "peer-mentorship" group for women at OpenAI.

A possible adult mode has been in the works for a long time. A model spec in 2024 hinted at the possibility of NSFW content. However, OpenAI told Mashable around the release of that document, "We have no intention to create AI-generated pornography." OpenAI's CEO Sam Altman has since backtracked slightly on his adult mode announcement, emphasizing mature conversations rather than horniness, but the ability for users to have some form of cybersex with OpenAI's signature chatbot still seems to be on its way. Altman says he just wants to provide "a lot of freedom for people to use AI in the ways that they want," and that he and his company "are not the elected moral police of the world."

The sources who spoke to the Journal also mentioned an "advisory council" on "well-being and AI" inside OpenAI, which has apparently asked for the release of adult mode to be reconsidered. But OpenAI's statement to the Journal on Beiermeister's firing strongly implies that it had nothing to do with adult mode: it says she "made valuable contributions during her time at OpenAI, and her departure was not related to any issue she raised while working at the company."
[2]
OpenAI fires policy exec who opposed ChatGPT's adult mode: Here's what happened
This comes after internal disagreements over the company's plan to roll out an adult mode in ChatGPT.

OpenAI has reportedly fired one of its top safety and policy leaders, Ryan Beiermeister, after internal disagreements over the company's plan to roll out an adult mode in ChatGPT. The company said her termination was linked to allegations of sexual discrimination against a male colleague, a claim she has strongly denied.

Beiermeister, who served as Vice President leading OpenAI's product policy team, was let go in early January after a leave of absence, reports WSJ. Her team was responsible for setting rules around how users can interact with OpenAI's products and designing systems to enforce those rules. Responding to the allegation, Beiermeister said, "The allegation that I discriminated against anyone is absolutely false." An OpenAI spokeswoman said Beiermeister "made valuable contributions during her time at OpenAI, and her departure was not related to any issue she raised while working at the company."

Her exit came ahead of OpenAI's planned launch of a new mode that would allow adult users to create AI erotica in ChatGPT. The feature is expected to permit sexual and adult-themed conversations for users above a certain age. However, the plan has sparked concern inside the company. Some researchers at OpenAI reportedly warned that allowing sexual content could deepen unhealthy emotional attachments that some users already form with AI chatbots. Members of an advisory council focused on "well-being and AI" also expressed opposition to the feature and urged the company to rethink its decision. OpenAI CEO Sam Altman has defended the move, saying it is part of an effort to "treat adult users like adults."
Before her termination, Beiermeister had shared concerns with colleagues. She reportedly worried that the adult mode could harm users and questioned whether OpenAI's systems were strong enough to block child exploitation content. She also raised doubts about the company's ability to fully prevent teens from accessing adult material. Beiermeister joined OpenAI in mid-2024 after working at Meta and later launched a peer mentorship program for women at the company in early 2025.
OpenAI dismissed its VP of Product Policy Ryan Beiermeister in early January over allegations of sexual discrimination against a male colleague, which she strongly denies. The OpenAI firing came after Beiermeister voiced opposition to the company's planned ChatGPT adult mode and raised concerns about child exploitation and user harm, highlighting tensions between AI safety and user freedom.
OpenAI terminated Ryan Beiermeister, its Vice President of Product Policy, in early January following allegations of sexual discrimination against a male colleague, according to a Wall Street Journal report [1]. The firing has sparked controversy as it coincided with internal disputes over the development of an adult content mode for ChatGPT. Beiermeister, who joined OpenAI in mid-2024 after working at Meta, has categorically rejected the accusation. "The allegation that I discriminated against anyone is absolutely false," she stated in response to the claims [2].
The OpenAI VP of Product Policy had been vocal about her concerns regarding the company's planned launch of ChatGPT adult mode, a feature that would allow adult users to create AI erotica and engage in sexual or adult-themed conversations. Sources familiar with the matter told the Journal that Beiermeister expressed opposition to this initiative before her dismissal [1]. Her team was responsible for establishing rules around user interactions with OpenAI products and designing enforcement systems. She reportedly raised concerns with colleagues about potential user harm and questioned whether OpenAI's safeguards were robust enough to prevent child exploitation content from slipping through [2].
Beiermeister wasn't alone in her concerns about the adult content mode. An advisory council focused on "well-being and AI" within OpenAI has also expressed opposition to the feature and urged company leadership to reconsider the decision [1]. Some researchers at the company warned that permitting NSFW content could deepen unhealthy emotional attachments that certain users already form with AI chatbots [2]. The AI safety executive also raised doubts about the company's ability to fully prevent teenagers from accessing adult material, despite age restrictions.
Sam Altman announced the adult mode in October of last year, framing it as part of an effort to "treat adult users like adults" and provide "a lot of freedom for people to use AI in the ways that they want." He emphasized that he and OpenAI "are not the elected moral police of the world" [1]. While Altman has somewhat backtracked on his initial announcement, emphasizing mature conversations rather than explicit content, the feature still appears to be moving forward. This stance contrasts sharply with OpenAI's earlier position: when a model spec in 2024 hinted at the possibility of NSFW content, the company told Mashable, "We have no intention to create AI-generated pornography" [1].

An OpenAI spokeswoman issued a carefully worded statement asserting that Beiermeister "made valuable contributions during her time at OpenAI, and her departure was not related to any issue she raised while working at the company" [1]. This statement strongly implies the sexual discrimination allegation was the sole reason for her termination, not her opposition to the adult mode. However, the timing has raised questions among observers about whether policy disputes played any role. Before her termination, Beiermeister had also launched a peer mentorship program for women at OpenAI in early 2025 [2]. The incident highlights ongoing tensions within AI companies between pushing boundaries on user freedom and maintaining robust AI safety protocols, particularly around sensitive content that could affect vulnerable populations.

Summarized by Navi