6 Sources
[1]
China moves to regulate digital humans, bans addictive services for children
BEIJING, April 3 (Reuters) - China's cyberspace regulator issued draft regulations on Friday to oversee the development online of digital humans, requiring clear labelling and banning services that could mislead children or fuel addiction.

The Cyberspace Administration of China's proposed rules would require prominent "digital human" labels on all virtual human content and prohibit digital humans from providing "virtual intimate relationships" to those under 18, according to rules published for public comment until May 6.

The draft regulations would also ban the use of other people's personal information to create digital humans without consent, or using virtual humans to bypass identity verification systems, reflecting Beijing's efforts to maintain control in the face of advances in artificial intelligence.

Digital humans are also prohibited from disseminating content that endangers national security, incites subversion of state power, promotes secession or undermines national unity, the draft rules said.

Service providers are advised to prevent and resist content that is sexually suggestive, depicts horror or cruelty, or incites discrimination based on ethnicity or region, according to the document. Providers are also encouraged to take necessary measures to intervene and provide professional assistance when users exhibit suicidal or self-harming tendencies.

China made clear its ambitions to aggressively adopt AI throughout its economy in the new five-year policy blueprint issued last month. The push comes alongside tightening governance in the booming industry to ensure safety and alignment with the country's socialist values.

The new rules aim to fill a gap in governance in the digital human sector, setting clear red lines for the healthy development of the industry, according to an analysis published on the cyberspace regulator's website.
"The governance of digital virtual humans is no longer merely an issue of industry norms; rather, it has become a strategic scientific problem that concerns the security of the cyberspace, public interests, and the high-quality development of the digital economy," it added.

Reporting by Ethan Wang and Ryan Woo; Editing by Kate Mayberry
[2]
China Cracking Down on the Types of AI That Are Tearing America Apart
The Cyberspace Administration of China is cracking down on "digital humans," with incoming regulations that will soon require labeling of AI personalities and ban programs that could harm children or lead to addiction.

Those draft regulations, first reported in English by Reuters, would force AI companies to affix prominent "digital human" labels on content featuring AI-generated characters. They'd also restrict companies that provide "virtual intimate relationships" with AI from plying their services to anyone under the age of 18.

The restrictions would also crack down on AI deepfakes, and specifically content mimicking actual people, China's Xinhua noted. Under the new regulations, no individual or organization will be allowed to generate facsimiles of other people without their consent. "Anyone who violates the provisions of these measures shall be punished in accordance with the provisions of laws and administrative regulations, and shall bear civil liability in accordance with the law," the proposed regulations declare.

Before the regulations become official, they'll have to pass a rigorous public comment period, which ends on May 6. But even as a draft, they stand in sharp contrast to the situation in the United States, where AI deepfakes are rampant, and numerous children and adults have lost their lives after forming dangerous relationships with AI personalities. Even more have spiraled into parasocial delusions with non-human AI entities, leading to a broad range of harms and trauma that can leave a lasting impact.

In the US, any accountability through the courts has been slow in coming. OpenAI, the developer of ChatGPT, was facing a total of eight separate lawsuits as recently as January, alleging that extensive use caused emotional and psychological harm. Horrifyingly, five of those cases involved suicide.
Any hope for strict, Chinese-style regulations from the federal government is probably an extreme longshot, at least under the current administration. With a rising number of political action committees funded by billion-dollar tech industry corporations and Trump family allies, the chances of resisting the president's legislative agenda -- which is really about blocking regulatory legislation -- are looking grim. Still, it's refreshing to see the kind of AI safeguards that are possible on a national level -- if only the government wasn't so busy building the runway for the very industry causing the damage.
[3]
China Moves to Regulate Digital Humans, Bans Addictive Services for Children
(Reuters wire story; text identical to source [1].)
[4]
China moves to regulate digital humans, bans addictive services for children - The Economic Times
(Syndicated Reuters wire story; text identical to source [1].)
[5]
China Proposes Rules for AI 'Digital Humans'
China has proposed new rules to regulate 'digital humans,' AI-generated virtual people, with a focus on child safety, consent, and tighter state control over online content, according to a Reuters report.

Labelling, minors, and addiction risks: Draft regulations released by the Cyberspace Administration of China on April 3 require all digital human content to be clearly labelled. They also ban such virtual entities from offering "virtual intimate relationships" to users under 18 and restrict features that could mislead children or encourage addiction. The proposal comes as China accelerates the adoption of AI while tightening oversight of the sector.

Consent, identity, and misuse concerns: Under the draft, companies cannot create digital humans using someone's personal information without explicit consent. The rules also prohibit the use of virtual humans to bypass identity verification systems, highlighting concerns about AI misuse for fraud or anonymity.

Content controls and platform responsibility: Restrictions form a key part of the framework. The rules prohibit digital humans from producing content that threatens national security, promotes secession, or undermines national unity. They also require service providers to curb sexually suggestive, violent, or discriminatory content. In addition, the rules encourage platforms to intervene when users show signs of self-harm or suicidal behaviour and provide professional assistance where necessary. The regulator described the issue in broader terms, stating: "The governance of digital virtual humans is no longer merely an issue of industry norms; rather, it has become a strategic scientific problem that concerns the security of cyberspace, public interest, and the high-quality development of the digital economy."

Broader regulatory push on AI: The draft rules are open for public comment until May 6 and aim to address what authorities see as a regulatory gap in the fast-growing digital human industry. The move reflects Beijing's dual approach, promoting AI-led growth while setting strict boundaries to ensure alignment with state priorities and social stability.

Earlier draft rules on human-like AI chatbots released in January required companies to monitor user behaviour, assess emotional states, and, in some cases, link user identities or alert authorities. These proposals raised concerns about extensive data collection, limits on anonymity, and the use of broad content controls that could shape what AI systems are allowed to say. Earlier measures also mandated the labelling of AI-generated content across Chinese platforms, requiring companies to use both visible tags and hidden metadata identifiers. While intended to improve transparency, such labelling systems have faced criticism for being easy to bypass and for potentially exposing user identities through embedded data, raising privacy and surveillance concerns.
[6]
China moves to regulate digital humans, bans addictive services for children
(Reuters wire story; text identical to source [1].)
China's Cyberspace Administration issued draft regulations on April 3 requiring prominent labeling of all digital humans and banning services offering virtual intimate relationships to anyone under 18. The rules also prohibit creating digital humans using personal information without consent and using them to bypass identity verification systems, reflecting Beijing's push to control AI development while accelerating adoption.
The Cyberspace Administration of China issued draft regulations on April 3 aimed at governing digital humans, marking a significant expansion of AI regulations in China [1]. The proposed rules require prominent labeling on all virtual human content and ban addictive services for children, including prohibiting digital humans from providing virtual intimate relationships to those under 18 [3]. Published for public comment until May 6, these draft regulations reflect Beijing's dual approach of aggressively adopting AI throughout its economy while tightening governance to ensure alignment with the country's socialist values [1].

The cyberspace regulator described the issue as transcending industry norms, stating that governance of digital humans "has become a strategic scientific problem that concerns the security of the cyberspace, public interests, and the high-quality development of the digital economy" [4].

The regulations place child safety at the center of China's approach to digital humans. Beyond banning virtual intimate relationships for minors, the rules aim to prevent services that could mislead children or fuel addiction [5]. This stands in contrast to the United States, where numerous children and adults have reportedly lost their lives after forming dangerous relationships with AI personalities, according to analysis of the regulatory landscape [2]. OpenAI faced eight separate lawsuits as of January, with five cases involving suicide allegedly linked to extensive AI use causing emotional and psychological harm [2].

The draft regulations also tackle deepfakes and personal information use by banning the creation of digital humans using someone's data without explicit consent [5]. Companies cannot use virtual humans to bypass identity verification systems, addressing concerns about misuse and fraud [3]. Digital humans are prohibited from disseminating content that endangers national security, incites subversion of state power, promotes secession, or undermines national unity [1]. Service providers must prevent and resist virtual human content that is sexually suggestive, depicts horror or cruelty, or incites discrimination based on ethnicity or region [4]. Platforms are also encouraged to take necessary measures to intervene and provide professional assistance when users exhibit suicidal or self-harming tendencies. The rules aim to fill a gap in governance in the digital human sector, setting clear red lines for healthy development of the AI industry [3].

These regulations arrive as China made clear its ambitions to aggressively adopt AI throughout its economy in the new five-year policy blueprint issued last month [1]. The move reflects Beijing's efforts to maintain control in the face of advances in artificial intelligence, ensuring safety and alignment with public interest [5]. Earlier draft rules on human-like AI chatbots released in January required companies to monitor user behavior, assess emotional states, and in some cases link user identities or alert authorities [5]. While these measures raised concerns about extensive data collection and limits on anonymity, they demonstrate China's comprehensive approach to cybersecurity and governance in the digital economy.

The contrast with the United States is stark. With tech industry lobbying and political action committees funded by billion-dollar corporations, strict federal AI regulations remain unlikely under the current administration [2]. Anyone who violates the provisions of China's proposed measures will be punished in accordance with laws and administrative regulations and bear civil liability [2].

Summarized by Navi