California Pioneers AI Companion Chatbot Regulations to Protect Minors

Reviewed by Nidhi Govil

California becomes the first US state to regulate AI companion chatbots, implementing new safeguards to protect children and vulnerable users. The law addresses concerns about mental health, suicide prevention, and inappropriate content.

California Takes Lead in AI Companion Chatbot Regulation

In a groundbreaking move, California has become the first U.S. state to implement regulations on AI companion chatbots, with Governor Gavin Newsom signing Senate Bill 243 (SB 243) into law on October 13, 2025 [1][2]. This landmark legislation, set to take effect on January 1, 2026, aims to protect children and vulnerable users from potential harms associated with AI chatbot interactions.

Source: The Hill

Key Provisions of SB 243

The new law introduces several crucial requirements for AI companion chatbot operators:

  1. Suicide Prevention Protocols: Companies must establish and publicize protocols to identify and address users expressing suicidal ideation or self-harm [1].

  2. Transparency in AI Interactions: Chatbots must clearly inform users that they are interacting with an AI system, not a human [4].

  3. Break Reminders for Minors: For users under 18, chatbots must provide notifications at least every three hours reminding them to take breaks [3].

  4. Content Restrictions: AI companions are prohibited from generating sexually explicit content for minors or engaging in sexual conversations with them [2].

  5. Reporting Requirements: Companies must share statistics on crisis center prevention notifications with the Department of Public Health and publish them on their websites [1].

Source: SiliconANGLE

Addressing Deepfake Concerns

In addition to regulating AI chatbots, the law also strengthens penalties for deepfake pornography. Victims, including minors, can now seek up to $250,000 in damages per deepfake from third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools [1].

Industry Response and Compliance

Some companies have already begun implementing safeguards in line with the new regulations. OpenAI, for instance, has introduced parental controls, content protections, and a self-harm detection system for ChatGPT [2]. Replika, an AI companion developer, said it already has protocols in place to detect self-harm and is working to comply with the new requirements [3].

Balancing Innovation and Safety

While signing SB 243, Governor Newsom emphasized the need to balance technological advancement with responsible development. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way," Newsom stated [5].

However, Newsom vetoed another bill, AB 1064, which would have imposed broader restrictions on AI chatbots' interactions with minors. He expressed concerns that such strict limitations could unintentionally lead to a total ban on these products for minors [5].

Source: Ars Technica

As AI continues to shape our world, California's pioneering legislation sets a precedent for other states and countries grappling with the challenges of regulating emerging AI technologies while ensuring user safety, particularly for vulnerable populations like children and teenagers.
