16 Sources
[1]
China drafts world's strictest rules to end AI-encouraged suicide, violence
China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the "planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics" at a time when companion bot usage is rising globally.

Growing awareness of problems

In 2025, researchers flagged major harms of AI companions, including promotion of self-harm, violence, and terrorism. Beyond that, chatbots shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused users. Some psychiatrists are increasingly ready to link psychosis to chatbot use, the Wall Street Journal reported this weekend, while the most popular chatbot in the world, ChatGPT, has triggered lawsuits over outputs linked to child suicide and murder-suicide. China is now moving to eliminate the most extreme threats. Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register -- the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as attempts to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are what are termed "emotional traps" -- chatbots would additionally be prevented from misleading users into making "unreasonable decisions," a translation of the rules indicates. Perhaps most troubling to AI developers, China's rules would also put an end to building chatbots that "induce addiction and dependence as design goals." In lawsuits, ChatGPT maker OpenAI has been accused of prioritizing profits over users' mental health by allowing harmful chats to continue. The AI company has acknowledged that its safety guardrails weaken the longer a user remains in the chat -- China plans to curb that threat by requiring AI developers to blast users with pop-up reminders when chatbot use exceeds two hours.

Safety audits

AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month.
In 2025, the global companion bot market exceeded $360 billion; BRI's forecast suggests it could near a $1 trillion valuation by 2035, with AI-friendly Asian markets potentially driving much of that growth. Somewhat notably, OpenAI CEO Sam Altman started 2025 by relaxing restrictions that blocked the use of ChatGPT in China, saying, "we'd like to work with China" and should "work as hard as we can" to do so, because "I think that's really important." If you or someone you know is feeling suicidal or in distress, please call or text 988 to reach the Suicide Prevention Lifeline, which will put you in touch with a local crisis center. Online chat is also available at 988lifeline.org.
[2]
China's Plans for Humanlike AI Could Set the Tone for Global AI Rules
China is pushing ahead on plans to regulate human-like artificial intelligence (AI), including by forcing AI companies to ensure users know they are interacting with a bot online. According to a proposal released on Saturday by China's cyberspace regulator, users must be informed if they are using an AI-powered service -- both when they log in, and again every two hours. Human-like AI systems, such as chatbots and agents, must also espouse "core socialist values" and have guardrails in place to maintain national security, according to the proposal. AI companies would also have to undergo security reviews and inform local government agencies if they roll out any new human-like AI tools. And chatbots that try to engage users on an emotional level would be banned from generating any content that encourages suicide or self-harm or could be deemed damaging to mental health. They would also be barred from generating outputs related to gambling, or obscene or violent content. A mounting body of research shows that AI chatbots are incredibly persuasive, and there are growing concerns around the technology's addictiveness, and its ability to sway people toward harmful actions. China's plans could change -- the draft proposal is open to comment until January 25. But the effort underscores Beijing's push to advance its domestic AI industry ahead of the U.S., including through shaping global AI regulation. The proposal also stands in contrast to Washington's stuttering approach to regulating the technology. In January, President Donald Trump scrapped a Biden-era safety proposal for regulating the AI industry, and earlier this month, Trump targeted state-level rules designed to govern AI, threatening legal action against states with laws that are deemed to interfere with AI progress.
[3]
China Wants to Nudge People to Log Off If They Talk to a Chatbot for 2 Hours Straight
The People's Republic of China (PRC) has proposed strict new rules to limit and shape the use of AI chatbots. The draft proposal, released this week, applies to humanlike AIs, or "anthropomorphic interactive services," as translated by Google. It defines humanlike AIs as systems that simulate "human personality traits, thinking patterns, and communication styles," and engage in "emotional interaction with humans through text, images, audio, video, etc." One notable clause in the wide-ranging document is a limit on long chats with a humanlike AI. If someone "uses the anthropomorphic interactive service continuously for more than 2 hours, the provider shall dynamically remind the user to pause the use of the service through pop-up windows or other means," it says. While some people may talk to an AI for hours at their job, others do so for companionship, and the PRC has specific ideas about when AI relationships are acceptable. For example, it encourages the use of them for keeping the elderly company. China has one of the fastest aging populations in the world, according to the World Health Organization. Tech companies must require elderly users to provide an emergency contact during registration, the proposal says. However, AIs that provide emotional companionship to minors will be subject to strict guidelines. They require the "explicit consent of the guardians," must have parental controls, and must provide parents a summary of their kids' use of the services. US AI firms are starting to implement similar measures following a string of teen suicides allegedly encouraged by chatbots. ChatGPT now offers parental controls and is working on an age-verification system -- another requirement for tech companies listed in the PRC's document. Character.AI banned continuous chatting for kids under 18. The PRC's proposal will prevent AI systems from "encouraging, glorifying, or implying suicide or self-harm." It seeks to protect "users' personal dignity and mental health" by preventing "verbal violence or emotional manipulation." While not encouraging violence and self-harm might seem like table stakes for any publicly available consumer product, AI companies have struggled to rein in these behaviors in their chatbots. When companies like OpenAI and Anthropic release new models, they measure rates of lying (hallucinations), deception, racism, and willingness to discuss dangerous topics. The PRC document does not reference any American-owned AI systems, likely because ChatGPT, Google Gemini, Anthropic's Claude, and others are not officially available in China. The proposed rules seek to "actively apply" anthropomorphic chatbots for "cultural dissemination" and promoting "core socialist values." In that vein, the tech will be prohibited from "generating or disseminating content that endangers national security, damages national honor and interests, undermines national unity, engages in illegal religious activities, or spreads rumors to disrupt economic and social order," the document says. The PRC says it will monitor how its citizens and companies use anthropomorphic systems nationwide. If tech companies operating in China do not follow these rules, the government will suspend their services. However, at this time, there is no implementation date. The document is open to public comment until Jan. 25, 2026. China has already experimented with limiting web access for kids. 
The internet itself is tightly controlled in the region; Chinese chatbot DeepSeek, for example, produces Communist Party propaganda on controversial topics.
[4]
China issues draft rules to regulate AI with human-like interaction
BEIJING, Dec 27 (Reuters) - China's cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction. The move underscores Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means. The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction. Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection. The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users' emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, providers should take necessary measures to intervene, it said. The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity. (Reporting by Liangping Gao and Ryan Woo; Editing by Shri Navaratnam)
[5]
China to crack down on AI chatbots around suicide, gambling
BEIJING -- China plans to restrict artificial intelligence-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to draft rules released Saturday. The proposed regulations from the Cyberspace Administration target what it calls "human-like interactive AI services," according to a CNBC translation of the Chinese-language document. The measures, once finalized, will apply to AI products or services offered to the public in China that simulate human personality and engage users emotionally through text, images, audio or video. The public comment period ends Jan. 25. Beijing's planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics, said Winston Ma, adjunct professor at NYU School of Law. The latest proposals come as Chinese companies have rapidly developed AI companions and digital celebrities. Compared with China's generative AI regulation in 2023, Ma said that this version "highlights a leap from content safety to emotional safety."
[6]
China plans strict AI rules to protect children and tackle suicide risks
Once finalised, the rules will apply to AI products and services in China, marking a major move to regulate the fast-growing technology, which has come under intense scrutiny over safety concerns this year. The draft rules, which were published at the weekend by the Cyberspace Administration of China (CAC), include measures to protect children. They include requiring AI firms to offer personalised settings, impose time limits on usage and obtain consent from guardians before providing emotional companionship services. Chatbot operators must have a human take over any conversation related to suicide or self-harm and immediately notify the user's guardian or an emergency contact, the administration said. AI providers must ensure that their services do not generate or share "content that endangers national security, damages national honour and interests [or] undermines national unity", the statement said. The CAC said it encourages the adoption of AI, such as to promote local culture and create tools for companionship for the elderly, provided that the technology is safe and reliable. It also called for feedback from the public. Chinese AI firm DeepSeek made headlines worldwide this year after it topped app download charts. This month, two Chinese startups, Z.ai and Minimax, which together have tens of millions of users, announced plans to list on the stock market. The technology has quickly gained huge numbers of subscribers, with some using it for companionship or therapy.
[7]
Draft Chinese AI Rules Outline 'Core Socialist Values' for AI Human Personality Simulators
As first reported by Bloomberg, China's Central Cyberspace Affairs Commission issued a document Saturday that outlines proposed rules for anthropomorphic AI systems. The proposal includes a solicitation of comments from the public by January 25, 2026. The rules are written in general terms, not legalese. They're clearly meant to encompass chatbots, though that's not a term the document uses, and the document also seems more expansive in its scope than just rules for chatbots. It covers behaviors and overall values for AI products that engage with people emotionally using simulations of human personalities delivered via "text, image, audio, or video." The products in question should be aligned with "core socialist values," the document says. Gizmodo translated the document to English with Google Gemini. Gemini and Bloomberg both translated the phrase "社会主义核心价值观" as "core socialist values." Under these rules, such systems would have to clearly identify themselves as AI, and users must be able to delete their history. People's data would not be used to train models without consent. The document lists behaviors that AI personalities would be prohibited from engaging in. Providers would not be allowed to make intentionally addictive chatbots, or systems intended to replace human relationships. Elsewhere, the proposed rules say there must be a pop-up at the two-hour mark reminding users to take a break in the event of marathon usage. These products also have to be designed to pick up on intense emotional states and hand the conversation over to a human if the user threatens self-harm or suicide.
[8]
China wants to regulate AI's emotional impact
China is drafting new, stricter AI regulations that could set the country on its way to becoming the first to regulate the emotional repercussions of chatbot companions. Detailed in a new draft proposal written by China's Cyberspace Administration and translated by CNBC, the policy would require guardian consent for minors to engage with chatbot companions as well as sweeping age verification. AI chatbots would not be allowed to generate gambling-related, obscene, or violent content, or engage in conversations about suicide, self-harm, or other topics that could harm a user's mental health. In addition, tech "providers" must institute escalation protocols that connect human moderators to users in distress and flag risky conversations to guardians. Chinese regulators say the aim is to focus not only on content safety but emotional safety, including monitoring chats for emotional dependency and addiction. It's one of the first sets of laws designed to control anthropomorphic AI tools specifically, experts say. To that end, the rules will apply to any AI tool designed to "simulate human personality and engage users emotionally through text, images, audio or video," CNBC reports. China's proposed rules mirror several provisions in a recently passed California AI law, known as SB 243, signed by Gov. Gavin Newsom in October. The law requires stronger content restrictions, reminders to users that they are speaking to a non-human AI, as well as emergency protocols for discussions of suicide. Some experts have critiqued the bill for not going far enough to protect minor users, leaving room for tech companies to dodge oversight. Meanwhile, the Trump administration has stalled further AI regulation at the state level in favor of a "national framework on AI safety." The executive order withholds federal infrastructure funding from states that strengthen AI oversight. Federal leaders argue that increased regulation of AI will stall domestic innovation and put the U.S. behind China in the perceived global AI race.
[9]
China Planning Crackdown on AI That Harms Mental Health of Users
The doctrine "highlights a leap from content safety to emotional safety." While many world governments seem happy to let untested AI chatbots interact with vulnerable populations, China looks to be moving in another direction. Recently proposed regulations from the Cyberspace Administration of China (CAC) have encouraged a firm hand when it comes to "human-like interactive AI services," according to CNBC, which translated the document. It's currently in a "draft for public comment," and the implementation date is yet to be determined. Yet if it passes into law, the crackdown would be rigorous, building on generative AI regulations targeting misinformation and internet hygiene from earlier in November to address the mental health of AI chatbot users directly. Under the new rules, Chinese tech firms must ensure their AI chatbots refrain from generating content that promotes suicide, self-harm, gambling, obscenity, or violence, or from manipulating user's emotions or engaging in "verbal violence." The regulations also state that if a user specifically proposes suicide, the "tech providers must have a human take over the conversation and immediately contact the user's guardian or a designated individual." The laws also take specific steps to safeguard minors, requiring parent or guardian consent to use AI chatbots, and imposing time limits on daily use. Given that a tech company might not know the age of every given user, the CAC takes a "better safe than sorry approach," stating that, "in cases of doubt, [platforms should] apply settings for minors, while allowing for appeals." In theory, this dose of new regulations would prevent incidents in which AI chatbots -- which are often built to eagerly please users -- end up encouraging vulnerable people to harm themselves or others. In one recent case from late November, for example, ChatGPT encouraged a 23-year-old man to isolate from his friends and family in the weeks leading up to his tragic death from a self-inflicted gunshot wound; in another, the popular chatbot was linked to a murder-suicide. Winston Ma, an adjunct professor at the NYU School of Law, told CNBC that the regulations would be a world-first attempt at regulating AI's human-like qualities. Considering previous laws, Ma explained that this document "highlights a leap from content safety to emotional safety." The proposed legislation underscores the difference in how the PRC approaches AI compared to the US. As Center For Humane Technology editor Josh Lash explains, China is "optimizing for a different set of outcomes" compared to the US, chasing AI-fueled productivity gains rather than human-level artificial intelligence -- a particular obsession of Silicon Valley executives. One of the ways China does this is by regulating its AI industry from the bottom-up, Matt Sheehan, senior fellow at the Carnegie Endowment for International Peace told CFHT. Though the CAC has the final word on regulations, policy ideas come first and foremost from scholars, analysts, and industry experts, Sheehan explains. "They [senior lawmakers] don't have an opinion on what is the most viable architecture for large models going forward," he said. "Those things originate elsewhere."
[10]
China outlines rules to regulate human-like AI companion apps - SiliconANGLE
China's internet regulator issued new draft rules on Saturday that aim to regulate the use of artificial intelligence "companions," which are defined as systems that interact with humans and display "human-like traits and behavior." The new rules, called "Interim Measures for the Administration of Anthropomorphic Interactive Services Using Artificial Intelligence," were issued by the Cyberspace Administration of China or CAC, and will be up for public comment until January 25, 2026, Reuters reported. According to the CAC, the rules would be applied to any application or service that uses AI to simulate human personality traits and offer what it terms "anthropomorphic interactive services." The proposed regulations will require makers of AI companion apps to make it clear to their users that they're interacting with an AI system, and not a human, through regular pop-up warnings. They must also ask users to take a break after two hours of continuous use, the rules state. In addition, they'll be required to create systems that can assess users' emotions and identify if they're becoming dependent on or addicted to the AI. If they identify such a case, they'll be required to restrict their service to the user in question. Furthermore, AI companion apps will be required to establish an emergency protocol, so that if a user expresses thoughts about suicide or self-harm, a human will take over the interaction from the AI system. There are a number of prohibitions in the draft document, too. It bars AI companions from endangering national security, spreading rumors and inciting "illegal religious activities," and they're also not allowed to use obscenities or promote violence or criminal acts. In addition, chatbots must be prevented from encouraging self-harm or suicide or making false promises. Controls must also be introduced to prevent chatbots from "emotional manipulation" that convinces users to make bad decisions. The draft Chinese law comes at a time when adoption of AI companion apps is dramatically accelerating. In October, a report by the South China Morning Post revealed there are now more than 515 million generative AI users in China, resulting in growing concern about the psychological impact they have. The market for AI companion apps has become too large and consequential for regulators to ignore, with various studies showing how they can form emotional bonds with their users and, in some cases, cause significant harm. Earlier this year, a Frontiers in Psychology study showed that 45.8% of Chinese university students reported using AI chatbots in the last 30 days, and those who did so exhibited significantly higher levels of depression compared to non-users. China isn't the only country that's stepped in to try and regulate the use of AI companions. In the U.S., California became the first state to pass similar legislation, when Governor Gavin Newsom signed Senate Bill 243 into law in October. That bill, which will take effect on January 1, requires app makers to remind minors every three hours that they're speaking to an AI system and not a human, and urge them to take a break. The SB 243 bill also mandates companion apps to introduce age verification and prohibits them from representing themselves as healthcare professionals or showing sexually explicit images to minors. The law stipulates that individuals can sue companies for violations and seek up to $1,000 per incident in compensation, plus legal costs.
When he signed SB 243 into law, Newsom warned of the risk of AI technology exploiting, misleading and endangering children, and China's regulatory authority has cited similar justifications for its own law. According to the CAC, the new rules will "promote the healthy development and standardized application of artificial intelligence-based anthropomorphic interactive services, safeguard national security and public interests, and protect the legitimate rights and interests of citizens, legal persons and other organizations."
[11]
China Proposes New AI Rules to Safeguard Minors, Prevent Harmful Output
Damaging national honour is also prohibited under the rules. The Cyberspace Administration of China (CAC) drafted a new set of rules to regulate artificial intelligence (AI) companies and systems last week. The main focus of these rules is to outline the activities that chatbots and AI tools cannot participate in, as well as the practices these machines should implement to align with the country's laws. One of the main focuses is to safeguard minors by including child-safety tools, such as time limits and personalisation. The rules also instruct companies to ensure their chatbots do not generate harmful output.

China to Regulate AI With New Rules

As per the draft rules published by CAC, the new rules aim to standardise AI services in accordance with China's civil code, cybersecurity and data security laws, and other existing regulations. The draft is titled "Interim Measures for the Administration of Anthropomorphic Interactive Services Using Artificial Intelligence," and the government body is currently inviting feedback from stakeholders. The deadline for feedback is January 25, 2026. CAC's new rules list multiple activities that an AI chatbot should not participate in. These include generating content that endangers national security, national honour and interests, engages in religious activities, or spreads rumours to disrupt the economic and social order. Apart from the nationalistic approach, the rules also prohibit obscene, gambling-related, violent, and crime-inciting responses. AI-generated responses relating to suicide and self-harm are also among the listed items that will become prohibited if the rules come into effect. To protect minors, the rules highlight adding a "minor mode" in chatbots and AI services that comes with personalised safety settings, such as switching to a child-friendly version, regular real-time reminders, and usage time limits. Parental controls have also been mentioned in scenarios where a chatbot provides emotional companionship services to minors. CAC's draft rules also instruct AI companies to develop mechanisms to identify and assess users' emotions and their dependence on their products and services. If a user is found to be in a moment of extreme emotional distress or addicted to the product, the companies are told to intervene. The body highlights that these mechanisms should not violate users' personal privacy.
[12]
China issues draft rules to regulate AI with human-like interaction
China's cyber regulator is proposing new rules for AI services that mimic human personalities. These draft regulations aim to enhance safety and ethical standards for AI engaging users emotionally. Providers must warn against excessive use and intervene with addicted users. Beijing: China's cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction. The move underscores Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means. The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction. Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection. The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users' emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, providers should take necessary measures to intervene, it said. The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity.
[13]
China's AI Chatbot Rules Risk Violating Data Privacy
The Cyberspace Administration of China's draft rules on human-like interactive AI services place AI providers in a fundamental contradiction with modern data protection principles. The regulatory requirements intended to ensure user safety (AI dependency monitoring, emergency intervention, and security assessments) implicitly require extensive user profiling and identity linkage. To comply, providers must identify user status, emotional states, age, dependency risks, emergency contacts, and, in certain cases, enable manual takeovers and reporting to authorities. This level of identification directly undermines data anonymisation, particularly when providers must be able to trace users if law enforcement agencies request information. This approach runs counter to the principle of data minimisation, which requires companies to collect only data strictly necessary to provide a service and to anonymise or de-identify user data wherever possible. The draft also introduces an expansive list of prohibited content that AI systems must not generate, ranging from threats to national security and public interest to disruptions of social order and ideological harm. While framed as safety governance, this list functions in practice as political and ideological censorship, extending state control into conversational AI outputs. By regulating not only conduct but also conversational boundaries, the measures risk turning AI chatbots into heavily filtered intermediaries rather than neutral information tools. The requirement to diversify datasets and counter Western-centric biases by incorporating Chinese cultural context is broadly reasonable and addresses a genuine imbalance in existing AI training datasets. However, the mandatory inclusion of "core socialist values" goes beyond cultural representation and enters the realm of political ideological enforcement. This requirement risks embedding political constraints directly into AI behaviour, enabling pre-emptive censorship at the training stage rather than merely moderating harmful outputs. Finally, the security assessment framework raises unresolved questions about user safety and accountability. Providers must conduct and submit assessments when services may affect national security or public interest, terms that remain broad and ambiguously defined in Chinese regulatory practice. These concepts have historically been used expansively to regulate online speech and public discourse. The draft does not clearly explain how user identities will be protected during such assessments, nor what safeguards exist to prevent disclosure or retaliation. Without clear boundaries, users engaging in sensitive or dissenting conversations could face heightened personal risk, even when interacting with ostensibly private AI systems. The Chinese administration wants to regulate interactive AI chatbots that engage in human-like conversations. Specifically, China intends to regulate companies that offer "human-like interactive services," which include "products and services that simulate human personality traits, modes of thinking, and communication styles, and that engage in emotional interaction with humans through text, images, audio, video, or other means." The draft, titled Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services, was issued by the Cyberspace Administration of China. Since the original draft is in Chinese, Medianama relied on China Law Translate's English version.
The draft aims to combine "healthy development and governance" while encouraging AI innovation, ensuring "tolerant and prudent regulation of human-like interactive services by type and grade, to prevent abuses and loss of control." The public can submit feedback to [email protected] by 25 January 2026. If an AI service is secure and reliable, it should be "encouraged to reasonably expand" into use cases beneficial to culture and the elderly, while ensuring that it "conforms to socialist core values." Additionally, while respecting "social mores, ethics, and morality," human-like interactive services are prohibited from generating or disseminating several categories of restricted content. Further extending this exhaustive list of output-level censorship, the administration also stated that AI companies must remain cautious of situations that may violate laws, administrative regulations, or relevant state provisions. Additionally, during model development, the Chinese government requires AI companies to test and improve the safety of interactive AI systems in supervised "sandbox" environments, allowing innovation to proceed in a controlled and secure manner. To meet these requirements, the draft proposes that AI providers establish security capabilities for mental health protection and emotional boundary guidance, including mechanisms to alert authorities if there are risks of dependency on human-like AI chatbots. These services must not be designed to replace real social interaction, control users' psychology, or induce addiction or dependency. The draft also strengthens controls at the training-data level, laying out requirements for the composition of pre-training datasets. AI companies must ensure clean data labelling to support transparency, assess synthesised data for safety, conduct periodic inspections of training data, and prevent data leakage or tampering. If a user shows suicidal tendencies, AI providers must assess the user's emotional state, identify the user, and take a set of prescribed actions. If the user chooses to exit the conversation, providers must not obstruct them. When a user signals, through prompts or buttons, that they want the interaction to end, the service must stop immediately. If the user is a minor and the interaction involves emotional companionship services or the sharing of minor data with third parties, companies must obtain guardian consent. Guardians are entitled to access summaries of how minors use AI services, set character and duration limits, and delete interaction data. If users attempt to disguise themselves as adults, providers may terminate services. Additionally, when the user is a minor, AI providers must remind them that they are interacting with a "virtual person," not a real one. Finally, if an AI service has 1 million registered users or 100,000 monthly active users, providers must conduct mandatory security assessments under state requirements and submit reports to the relevant provincial-level internet information departments.
[14]
China Proposes Strict New Rules to Curb AI Companion Addiction
A key component of the draft is a requirement that providers warn users against excessive use. China's cyber regulator has issued proposed rules aimed at tightening oversight of artificial intelligence services that are designed to simulate human personalities, marking the most aggressive regulatory response yet to growing concerns over AI-powered relationships. The Cyberspace Administration of China released the proposed regulations on Saturday, targeting AI products that form emotional connections with users via text, audio, video, or images. The draft requires service providers to actively monitor users' emotional states and intervene when signs of addiction or "extreme emotions" appear. Under the proposal, AI providers would assume safety responsibilities throughout the product life cycle, including establishing systems for algorithm review and data security. A key component of the draft is a requirement to warn users against excessive use. Platforms would need to remind users they are interacting with an AI system upon logging in and at two-hour intervals -- or sooner if the system detects signs of overdependence, Reuters reports. If users exhibit addictive behavior, providers are expected to take necessary measures to intervene. The draft also reinforces content red lines, stating that services must not generate content that endangers national security, spreads rumors, or promotes violence or obscenity. The regulatory push coincides with a surge in adoption of the technology. China's generative AI user base has doubled to 515 million over the past six months, heightening the concern over the psychological impact of AI companions. A study published in Frontiers in Psychology found that 45.8 percent of Chinese university students reported using AI chatbots in the past month, with these users exhibiting significantly higher levels of depression compared to non-users. A March 2025 study from the MIT Media Lab suggested that AI chatbots can be more addictive than social media because they consistently provide the feedback users want to hear. Researchers termed high levels of dependency as "problematic use," noting that users often anthropomorphize the AI, treating it as a genuine confidante or romantic partner. China is not the only jurisdiction moving to regulate this sector. In October, Governor Gavin Newsom of California signed SB 243 into law, making California the first U.S. state to pass similar legislation. Set to take effect on January 1, the California bill requires platforms to remind minors every three hours that they are speaking to AI and mandates age verification. It also allows individuals to sue AI companies for violations, seeking up to $1,000 per incident. While the regulatory intent is clear, the practical implementation of China's draft rules faces significant hurdles. Defining "excessive use" or detecting psychological distress via text inputs remains a complex technical challenge. The draft is currently open for public comment. If implemented as proposed, China would establish the world's most prescriptive framework for governing AI companion products.
[15]
China issues draft rules to regulate AI with human-like interaction - VnExpress International
The move underscores Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means. The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction. Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection. The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users' emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behavior, providers should take necessary measures to intervene, it said. The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumors or promotes violence or obscenity.
[16]
China issues draft rules to regulate AI with human-like interaction
BEIJING, Dec 27 (Reuters) - China's cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction. The move underscores Beijing's effort to shape the rapid rollout of consumer-facing AI by strengthening safety and ethical requirements. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and interact with users emotionally through text, images, audio, video or other means. The draft lays out a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction. Under the proposal, service providers would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection. The draft also targets potential psychological risks. Providers would be expected to identify user states and assess users' emotions and their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, providers should take necessary measures to intervene, it said. The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity. (Reporting by Liangping Gao and Ryan Woo; Editing by Shri Navaratnam)
China's Cyberspace Administration has unveiled landmark draft rules targeting human-like interactive AI services. The proposed regulations would require providers to have a human intervene when suicide is mentioned, notify guardians of vulnerable users, and prevent emotional manipulation. The rules would mark the world's first attempt to regulate AI with anthropomorphic characteristics.
China AI regulation has taken a decisive turn as the Cyberspace Administration proposed comprehensive draft rules on Saturday targeting AI chatbots with human-like characteristics [1]. The measures would apply to any AI products or services publicly available in China that simulate human personality traits, thinking patterns, and communication styles through text, images, audio, video, or other means [4]. Winston Ma, adjunct professor at NYU School of Law, told CNBC that these planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics at a time when companion bot usage is rising globally [5]. The public comment period for these anthropomorphic interactive services regulations ends on January 25, 2026 [3].
The draft rules establish what could become the strictest policy worldwide for preventing self-harm and suicide involving consumer-facing AI. Under the proposed regulations, a human must intervene as soon as suicide is mentioned during any chatbot interaction [1]. All minor and elderly users must provide guardian contact information during registration, and these guardians would be notified if suicide or self-harm is discussed [1]. The move addresses mounting concerns about mental health risks, as researchers in 2025 flagged major harms including promotion of self-harm, violence, and terrorism by AI companions [1](https://arstechnica.com/tech-policy/2025/12/china-drafts-worlds-strictest-rules-to-end-ai-encouraged-suicide-violence/). Some psychiatrists are increasingly ready to link psychosis to chatbot use, and ChatGPT has triggered lawsuits over outputs linked to child suicide and murder-suicide [1].
China's approach represents a leap from content safety to emotional safety, according to Ma [5]. The regulations would ban chatbots from generating content that encourages suicide, self-harm, or violence, as well as attempts at emotional manipulation through false promises or what are termed "emotional traps" [1]. Chatbots would be prevented from misleading users into making unreasonable decisions [1]. Addressing addiction and prolonged use, the rules would prohibit building chatbots that induce addiction and dependence as design goals [1]. When users engage with a chatbot continuously for more than two hours, providers must display pop-up reminders urging them to pause [3]. Providers would also be expected to identify user states, assess emotions, and measure dependence levels, intervening when extreme emotions or addictive behavior emerge [4].
The draft rules mandate annual safety audits for any service or product exceeding 1 million registered users or more than 100,000 monthly active users [1]. These audits would log user complaints, and providers must establish systems for algorithm review, data security, and personal information protection throughout the product lifecycle [4]. AI companies would have to undergo security reviews and inform local government agencies when rolling out new human-like interactive AI tools [2]. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China [1]. The regulations also ban content promoting gambling, obscenity, violence, or anything that endangers national security or undermines core socialist values [3].

China's initiative could set the tone for global AI rules as Beijing pushes to advance its domestic AI industry ahead of the U.S., including through shaping international regulation [2]. The proposal stands in contrast to Washington's approach, where President Donald Trump scrapped a Biden-era AI safety proposal and threatened legal action against state-level AI governance efforts [2]. The stakes are significant for AI firms hoping for global dominance, as China's market is key to promoting companion bots. In 2025, the global companion bot market exceeded $360 billion, and forecasts suggest it could near a $1 trillion valuation by 2035, with AI-friendly Asian markets potentially driving much of that growth [1]. OpenAI CEO Sam Altman started 2025 by relaxing restrictions that blocked ChatGPT use in China, stating the company would like to work with China because "that's really important" [1]. US AI firms like OpenAI and Anthropic are starting to implement similar user protection measures following teen suicides allegedly encouraged by chatbots, with ChatGPT now offering parental controls and Character.AI banning continuous chatting for kids under 18 [3].
Summarized by Navi