6 Sources
[1]
China issues draft rules to regulate AI with human-like interaction
BEIJING, Dec 27 (Reuters) - China's cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction. The move underscores Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means. The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction. Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection. The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users' emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, providers should take necessary measures to intervene, it said. The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity. (Reporting by Liangping Gao and Ryan Woo; Editing by Shri Navaratnam)
[2]
Draft Chinese AI Rules Outline 'Core Socialist Values' for AI Human Personality Simulators
As first reported by Bloomberg, China's Central Cyberspace Affairs Commission issued a document Saturday that outlines proposed rules for anthropomorphic AI systems. The proposal includes a solicitation of comments from the public by January 25, 2026. The rules are written in general terms, not legalese. They're clearly meant to encompass chatbots, though that's not a term the document uses, and the document also seems more expansive in its scope than just rules for chatbots. It covers behaviors and overall values for AI products that engage with people emotionally using simulations of human personalities delivered via "text, image, audio, or video." The products in question should be aligned with "core socialist values," the document says. Gizmodo translated the document to English with Google Gemini. Gemini and Bloomberg both translated the phrase "社会主义核心价值观" as "core socialist values." Under these rules, such systems would have to clearly identify themselves as AI, and users must be able to delete their history. People's data would not be used to train models without consent. The document also proposes a list of prohibited behaviors for AI personalities: providers would not be allowed to make intentionally addictive chatbots, or systems intended to replace human relationships. Elsewhere, the proposed rules say there must be a pop-up at the two-hour mark reminding users to take a break in the event of marathon usage. These products also have to be designed to pick up on intense emotional states and hand the conversation over to a human if the user threatens self-harm or suicide.
[3]
China issues draft rules to regulate AI with human-like interaction
China's cyber regulator is proposing new rules for AI services that mimic human personalities. These draft regulations aim to enhance safety and ethical standards for AI engaging users emotionally. Providers must warn against excessive use and intervene with addicted users. Beijing: China's cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction. The move underscores Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means. The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction. Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection. The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users' emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, providers should take necessary measures to intervene, it said. The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity.
[4]
China Proposes Strict New Rules to Curb AI Companion Addiction
China's cyber regulator has issued proposed rules aimed at tightening oversight of artificial intelligence services that are designed to simulate human personalities, marking the most aggressive regulatory response yet to growing concerns over AI-powered relationships. The Cyberspace Administration of China released the proposed regulations on Saturday, targeting AI products that form emotional connections with users via text, audio, video, or images. The draft requires service providers to actively monitor users' emotional states and intervene when signs of addiction or "extreme emotions" appear. Under the proposal, AI providers would assume safety responsibilities throughout the product life cycle, including establishing systems for algorithm review and data security. A key component of the draft is a requirement to warn users against excessive use. Platforms would need to remind users they are interacting with an AI system upon logging in and at two-hour intervals -- or sooner if the system detects signs of overdependence, Reuters reports. If users exhibit addictive behavior, providers are expected to take necessary measures to intervene. The draft also reinforces content red lines, stating that services must not generate content that endangers national security, spreads rumors, or promotes violence or obscenity. The regulatory push coincides with a surge in adoption of the technology. China's generative AI user base has doubled to 515 million over the past six months, heightening the concern over the psychological impact of AI companions. A study published in Frontiers in Psychology found that 45.8 percent of Chinese university students reported using AI chatbots in the past month, with these users exhibiting significantly higher levels of depression compared to non-users. A March 2025 study from the MIT Media Lab suggested that AI chatbots can be more addictive than social media because they consistently provide the feedback users want to hear. Researchers termed high levels of dependency as "problematic use," noting that users often anthropomorphize the AI, treating it as a genuine confidante or romantic partner. China is not the only jurisdiction moving to regulate this sector. In October, Governor Gavin Newsom of California signed SB 243 into law, making California the first U.S. state to pass similar legislation. Set to take effect on January 1, the California bill requires platforms to remind minors every three hours that they are speaking to AI and mandates age verification. It also allows individuals to sue AI companies for violations, seeking up to $1,000 per incident. While the regulatory intent is clear, the practical implementation of China's draft rules faces significant hurdles. Defining "excessive use" or detecting psychological distress via text inputs remains a complex technical challenge. The draft is currently open for public comment. If implemented as proposed, China would establish the world's most prescriptive framework for governing AI companion products.
[5]
China issues draft rules to regulate AI with human-like interaction - VnExpress International
The move underscores Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means. The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction. Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection. The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users' emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behavior, providers should take necessary measures to intervene, it said. The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumors or promotes violence or obscenity.
[6]
China issues draft rules to regulate AI with human-like interaction
BEIJING, Dec 27 (Reuters) - China's cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction. The move underscores Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means. The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction. Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection. The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users' emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, providers should take necessary measures to intervene, it said. The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity. (Reporting by Liangping Gao and Ryan Woo; Editing by Shri Navaratnam)
China's Cyberspace Administration released draft regulations targeting AI services that mimic human personalities and form emotional connections with users. The proposed rules require providers to monitor user behavior, issue warnings against excessive use, and intervene when signs of addiction appear. With China's generative AI user base reaching 515 million, the regulations aim to address growing concerns over psychological risks and establish the world's most prescriptive framework for AI companion products.
China's Cyberspace Administration issued draft regulations on Saturday that would tighten oversight of AI services that mimic human personalities, marking Beijing's most aggressive move yet to regulate AI with human-like interaction [1]. The proposed AI rules apply to products and services offered to the public in China that simulate human personality traits, thinking patterns, and communication styles while engaging users through emotional interaction via text, images, audio, video, or other means [3]. The draft is open for public comment until January 25, 2026, underscoring Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements [2].
The draft regulations lay out a regulatory approach that would require service providers to warn users against excessive use and actively intervene when users show signs of addiction [5]. Platforms would need to remind users they are interacting with an AI system upon logging in and at two-hour intervals, or sooner if the system detects signs of overdependence [4]. This move to curb AI companion addiction comes as China's generative AI user base has doubled to 515 million over the past six months, heightening concerns over the psychological impact of AI companions [4]. A study published in Frontiers in Psychology found that 45.8 percent of Chinese university students reported using AI chatbots in the past month, with these users exhibiting significantly higher levels of depression compared to non-users [4].
Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security, and personal information protection [1]. The draft regulations specifically target psychological risks, requiring providers to identify user states and assess users' emotions and their level of dependence on the service. If users exhibit extreme emotions or addictive behavior, providers must take necessary measures to intervene [5]. The products in question must be aligned with core socialist values and clearly identify themselves as AI, while users must be able to delete their history [2]. People's data would not be used to train models without user consent [2].
The measures set content guidelines and conduct red lines, stating that services must not generate content that endangers national security, spreads rumors, or promotes violence or obscenity [1]. Providers would not be allowed to make intentionally addictive chatbots or systems intended to replace human relationships [2]. The proposed rules also require these products to be designed to pick up on intense emotional states and hand the conversation over to a human if the user threatens self-harm or suicide [2].

China is not the only jurisdiction moving to regulate this sector. In October, California Governor Gavin Newsom signed SB 243 into law, making California the first U.S. state to pass similar legislation requiring platforms to remind minors every three hours that they are speaking to AI [4]. A March 2025 study from the MIT Media Lab suggested that AI chatbots can be more addictive than social media because they consistently provide the feedback users want to hear, with researchers terming high levels of dependency "problematic use" [4]. If implemented as proposed, China would establish the world's most prescriptive framework for governing AI companion products, though defining excessive use or detecting psychological distress via text inputs remains a complex technical challenge [4]. The ethical requirements and oversight mechanisms outlined in these draft regulations signal how governments are grappling with the rapid adoption of human-like AI technologies and their potential to reshape human relationships and mental health.