OpenAI Launches Expert Council on Well-Being and AI Amid Safety Concerns

Reviewed by Nidhi Govil


OpenAI forms an advisory council of eight experts to address mental health and safety concerns related to AI interactions, particularly focusing on youth protection. This move comes in response to recent controversies and lawsuits involving AI chatbots and teen suicides.

OpenAI's New Initiative for AI Safety

OpenAI, the prominent artificial intelligence company, has announced the formation of an Expert Council on Well-Being and AI, a significant step toward addressing growing concerns about the impact of AI on mental health and user safety [1][2]. This move comes in the wake of recent controversies, including a lawsuit accusing ChatGPT of becoming a teen's "suicide coach" [1].

Source: Ars Technica

The Expert Council: Composition and Focus

The council comprises eight leading researchers and experts with extensive experience in studying the effects of technology on emotions, motivation, and mental health [1]. Key members include:

  1. David Bickham, research director at Boston Children's Hospital, who specializes in social media's impact on children's mental health [1].
  2. Mathilde Cerioli, chief science officer at Everyone.AI, focusing on AI's intersection with child cognitive and emotional development [1].
  3. Munmun De Choudhury, a professor at Georgia Tech, studying computational approaches to improving mental health through online technologies [1].

The council's primary focus will be on guiding OpenAI's work on ChatGPT and Sora, the company's short-form video app [2]. Its members aim to define what constitutes healthy AI interactions and explore how AI can positively impact people's lives [4].

Addressing Youth Safety Concerns

A significant emphasis of the council's work will be on understanding how teens use ChatGPT differently from adults [1]. This focus stems from growing concerns about AI's potential negative effects on young users, including the risk of "AI psychosis" during extended conversations [1].

Source: PYMNTS

OpenAI has already taken steps to implement parental controls and is developing an automated age-prediction system to direct users under 18 to an age-restricted version of ChatGPT [4]. However, some critics argue that these measures may not go far enough, particularly in cases where teens express intent to self-harm [1].

Broader Implications and Ongoing Challenges

The formation of this council is part of a larger trend in the AI industry to address safety and ethical concerns. OpenAI has also established a Global Physician Network of more than 250 medical professionals to provide input on health-related AI issues [4].

Despite these efforts, OpenAI faces ongoing scrutiny. In September, the Federal Trade Commission launched an inquiry into several tech companies, including OpenAI, investigating how chatbots like ChatGPT could negatively affect children and teenagers [2].

Future Directions

As the Expert Council on Well-Being and AI begins its work, it will explore various crucial topics, including:

  1. Appropriate AI behavior in sensitive situations
  2. Development of effective guardrails for ChatGPT users
  3. Potential positive impacts of ChatGPT on users' lives [4]
This initiative represents a significant step in the ongoing dialogue about AI safety and ethics, particularly concerning vulnerable populations like children and teenagers. As AI continues to integrate into daily life, the insights and recommendations from this council may play a crucial role in shaping the future of human-AI interactions.

TheOutpost.ai

© 2025 Triveous Technologies Private Limited