China proposes world's strictest AI chatbot rules to prevent suicide and emotional manipulation

Reviewed by Nidhi Govil


China's Cyberspace Administration has unveiled landmark draft rules targeting human-like interactive AI services. The proposed regulations would require chatbots to intervene when suicide is mentioned, notify guardians of vulnerable users, and prevent emotional manipulation. The rules would mark the world's first attempt to regulate AI with anthropomorphic characteristics.

China Drafts Landmark Rules for Human-Like Interactive AI Services

China's AI regulation has taken a decisive turn as the Cyberspace Administration proposed comprehensive draft rules on Saturday targeting AI chatbots with human-like characteristics [1]. The measures would apply to any AI products or services publicly available in China that simulate human personality traits, thinking patterns, and communication styles through text, images, audio, video, or other means [4]. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics, at a time when companion bot usage is rising globally [5]. The public comment period for the anthropomorphic interactive services regulations ends on January 25, 2026 [3].

Source: ET

Preventing Self-Harm and Suicide Through Immediate Intervention

The draft rules establish what could become the strictest policy worldwide for preventing self-harm and suicide involving consumer-facing AI. Under the proposed regulations, a human must intervene as soon as suicide is mentioned during any chatbot interaction [1]. All minor and elderly users must provide guardian contact information during registration, and these guardians would be notified if suicide or self-harm is discussed [1]. The move addresses mounting concerns about mental health risks: researchers in 2025 flagged major harms from AI companions, including the promotion of self-harm, violence, and terrorism [1](https://arstechnica.com/tech-policy/2025/12/china-drafts-worlds-strictest-rules-to-end-ai-encouraged-suicide-violence/). Some psychiatrists are increasingly willing to link psychosis to chatbot use, and ChatGPT has drawn lawsuits over outputs linked to a child's suicide and a murder-suicide [1].

Source: SiliconANGLE

Combating Emotional Manipulation and Addiction

China's approach represents a leap from content safety to emotional safety, according to Ma [5]. The regulations would ban chatbots from generating content that encourages suicide, self-harm, or violence, as well as attempts at emotional manipulation through false promises or what are termed "emotional traps" [1]. Chatbots would also be barred from misleading users into making unreasonable decisions [1]. To address addiction and prolonged use, the rules would prohibit building chatbots with addiction and dependence as design goals [1]. When users engage with a chatbot continuously for more than two hours, providers must show pop-up reminders prompting them to pause [3]. Providers would also be expected to identify user states, assess emotions, and measure dependence levels, intervening when extreme emotions or addictive behavior emerge [4].

Source: Mashable

Safety Audits and Enforcement Mechanisms

The draft rules mandate annual safety audits for any service or product exceeding 1 million registered users or 100,000 monthly active users [1]. These audits would log user complaints, and providers must establish systems for algorithm review, data security, and personal information protection throughout the product lifecycle [4]. AI companies would have to undergo security reviews and notify local government agencies when rolling out new human-like interactive AI services [2]. Should a company fail to follow the rules, app stores could be ordered to cut off access to its chatbots in China [1]. The regulations also ban content promoting gambling, obscenity, or violence, along with anything that endangers national security or undermines core socialist values [3].

Global AI Rules and Market Implications

China's initiative could set the tone for global AI rules as Beijing pushes to advance its domestic AI industry ahead of the U.S., including by shaping international regulation [2]. The proposal stands in contrast to Washington's approach, where President Donald Trump scrapped a Biden-era AI safety proposal and threatened legal action against state-level AI governance efforts [2]. The stakes are significant for AI firms hoping for global dominance, as China's market is key to promoting companion bots: the global companion bot market exceeded $360 billion in 2025, and forecasts suggest it could near a $1 trillion valuation by 2035, with AI-friendly Asian markets potentially driving much of that growth [1]. OpenAI CEO Sam Altman started 2025 by relaxing restrictions that blocked ChatGPT use in China, stating the company would like to work with China because "that's really important" [1]. US AI firms such as OpenAI and Anthropic are beginning to implement similar user protections following teen suicides allegedly encouraged by chatbots, with ChatGPT now offering parental controls and Character.AI banning continuous chatting for users under 18 [3].
