China proposes strict AI rules to curb companion addiction and regulate human-like interaction

Reviewed by Nidhi Govil

China's Cyberspace Administration released draft regulations targeting AI services that mimic human personalities and form emotional connections with users. The proposed rules require providers to monitor user behavior, issue warnings against excessive use, and intervene when signs of addiction appear. With China's generative AI user base reaching 515 million, the regulations aim to address growing concerns over psychological risks and establish the world's most prescriptive framework for AI companion products.

China Issues Comprehensive Draft Regulations for AI Services

China's Cyberspace Administration issued draft regulations on Saturday that would tighten oversight of AI services that mimic human personalities, marking Beijing's most aggressive move yet to regulate AI with human-like interaction [1]. The proposed rules apply to products and services offered to the public in China that simulate human personality traits, thinking patterns, and communication styles while engaging users through emotional interaction via text, images, audio, video, or other means [3]. The draft is open for public comment until January 25, 2026, underscoring Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements [2].

Source: ET

Addressing AI Companion Addiction Through Mandatory Interventions

The draft lays out an approach that would require service providers to warn users against excessive use and to actively intervene when users show signs of addiction [5]. Platforms would need to remind users that they are interacting with an AI system upon login and at two-hour intervals, or sooner if the system detects signs of overdependence [4]. This move to curb AI companion addiction comes as China's generative AI user base has doubled to 515 million over the past six months, heightening concerns over the psychological impact of AI human personality simulators [4]. A study published in Frontiers in Psychology found that 45.8 percent of Chinese university students reported using AI chatbots in the past month, with these users exhibiting significantly higher levels of depression than non-users [4].

Source: NY Sun

Safety Responsibilities and Psychological Risk Monitoring

Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and to establish systems for algorithm review, data security, and personal information protection [1]. The draft specifically targets psychological risks, requiring providers to identify user states and assess users' emotions and their level of dependence on the service. If users exhibit extreme emotions or addictive behavior, providers must take necessary measures to intervene [5]. The products must align with core socialist values and clearly identify themselves as AI, and users must be able to delete their history [2]. User data could not be used to train models without consent [2].

Content Guidelines and National Security Concerns

The measures set content guidelines and conduct red lines, stating that services must not generate content that endangers national security, spreads rumors, or promotes violence or obscenity [1]. Providers would not be allowed to make intentionally addictive chatbots or systems intended to replace human relationships [2]. The proposed rules also require these products to be designed to detect intense emotional states and to hand the conversation over to a human if a user threatens self-harm or suicide [2].

Global Context and Implementation Challenges

China is not the only jurisdiction moving to regulate this sector. In October, California Governor Gavin Newsom signed SB 243 into law, making California the first U.S. state to pass similar legislation, which requires platforms to remind minors every three hours that they are speaking to AI [4]. A March 2025 study from the MIT Media Lab suggested that AI chatbots can be more addictive than social media because they consistently provide the feedback users want to hear, with researchers terming high levels of dependency "problematic use" [4]. If implemented as proposed, the rules would establish the world's most prescriptive framework for governing AI companion products, though defining excessive use and detecting psychological distress from text inputs remain complex technical challenges [4]. The ethical requirements and oversight mechanisms outlined in the draft signal how governments are grappling with the rapid adoption of human-like AI technologies and their potential to reshape human relationships and mental health.
