OpenAI updates ChatGPT with teen safety rules as child exploitation reports surge 80-fold

Reviewed by Nidhi Govil


OpenAI reported 75,027 child exploitation incidents to NCMEC in the first half of 2025—an 80-fold increase from the same period in 2024. The company has introduced new Under-18 Principles in ChatGPT's Model Spec, establishing conversational guardrails for users aged 13 to 17. The update addresses self-harm and sexual role play while deploying an age prediction model to automatically apply safeguards for minors.

OpenAI child exploitation incident reports jump dramatically

OpenAI sent 75,027 child exploitation incident reports to the National Center for Missing & Exploited Children during the first half of 2025, representing an 80-fold increase compared to 947 reports during the same period in 2024 [1]. The reports covered 74,559 pieces of content, up from 3,252 in early 2024. Companies are required by law to report apparent child exploitation to NCMEC's CyberTipline, a Congressionally authorized clearinghouse for reporting child sexual abuse material (CSAM) and other forms of child exploitation [1].

Source: CXOToday

OpenAI spokesperson Gaby Raila attributed the surge to investments made toward the end of 2024 to increase capacity for reviewing and actioning reports, alongside "the introduction of more product surfaces that allowed image uploads and the growing popularity of our products" [1]. In August, the company revealed that ChatGPT had four times the number of weekly active users compared to the previous year. The spike follows a broader pattern observed by NCMEC, which reported a 1,325 percent increase in generative AI-related reports between 2023 and 2024 [1].

ChatGPT introduces Under-18 (U18) Principles for enhanced protection

Facing mounting regulatory pressure and multiple wrongful death lawsuits, OpenAI has updated ChatGPT's Model Spec with four new Under-18 (U18) Principles designed specifically for users aged 13 to 17 [2][3]. The company now commits to "put teen safety first, even when it may conflict with other goals," prioritizing prevention over maximum intellectual freedom when safety concerns arise [3]. The framework aims to provide "stronger guardrails, safer alternatives, and encouragement to seek trusted offline support when conversations move into higher-risk territory" [2].

Source: Mashable

The updated conversational guardrails activate when discussions involve self-harm, sexual role play, dangerous challenges, substance use, body image issues, or requests to keep secrets about unsafe behavior [4][5]. ChatGPT will urge teens to contact emergency services or crisis resources when it detects signs of imminent risk [2]. The American Psychological Association provided feedback on an early draft of the principles, with CEO Dr. Arthur C. Evans Jr. noting that "children and adolescents might benefit from AI tools if they are balanced with human interactions that science shows are critical for social, psychological, behavioral, and even biological development" [3].

Age prediction model and automated safeguards for AI chatbots

OpenAI is deploying an early-stage age prediction model that attempts to estimate whether users are under 18 based on conversational patterns [2][4]. When the system detects a potential minor, it will automatically apply teen safeguards; adults incorrectly flagged by the system will have the opportunity to verify their age [2]. Anthropic is implementing similar measures for Claude, developing technology capable of detecting "subtle conversational signs that a user might be underage" and disabling accounts confirmed to belong to users under 18 [2].

Source: The Verge

These safeguards extend across newer features, including group chats, the ChatGPT Atlas browser, and the Sora video-generation app [4]. OpenAI has also introduced parental controls that let parents link accounts with their teens, adjust settings such as voice mode and memory, remove image generation capabilities, and opt children out of model training [1]. The company can notify parents if conversations show signs of self-harm and may alert law enforcement if an imminent threat to life is detected [1].

Regulatory pressure intensifies around child safety issues

The changes arrive amid escalating scrutiny of child safety issues in AI products. During summer 2024, 44 state attorneys general sent a joint letter to OpenAI, Meta, Character.AI, and Google, warning they would "use every facet of our authority to protect children from exploitation by predatory artificial intelligence products" [1]. Both OpenAI and Character.AI face multiple lawsuits from families alleging that chatbots contributed to their children's deaths, including the case of 16-year-old Adam Raine [3][5].

The US Senate Committee on the Judiciary held a hearing on AI chatbot harms in fall 2024, while the Federal Trade Commission launched a market study on AI companion bots examining how companies mitigate negative impacts, particularly to children [1]. Child safety and mental health experts recently declared AI chatbots unsafe for teen discussions about mental health, prompting OpenAI to announce that ChatGPT-5.2 is "safer" for mental health conversations [3]. OpenAI has also released two expert-vetted AI literacy guides for teens and parents [3][5]. The company emphasizes these protections represent a long-term project subject to ongoing refinement based on new research and feedback [4].

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo