China targets digital humans with strict AI rules banning addictive services for children


China's cyberspace regulator unveiled draft regulations for digital humans on Friday, requiring clear labeling on all virtual human content and banning services that could mislead children or fuel addiction. The rules prohibit digital humans from offering virtual intimate relationships to anyone under 18 and ban unauthorized use of personal information to create AI avatars.

China's Cyberspace Regulator Introduces Sweeping Digital Humans Framework

The Cyberspace Administration of China issued draft regulations on Friday aimed at governing the rapidly expanding digital humans sector, marking one of the most comprehensive attempts to regulate artificial intelligence applications involving virtual avatars. The proposed rules, published for public comment until May 6, require prominent labeling on all virtual human content and establish strict boundaries around services targeting minors [1]. At the core of the draft regulations lies a ban on addictive services for children, specifically prohibiting digital humans from providing virtual intimate relationships to those under 18 [2].

Source: Reuters


China's regulation in this space reflects Beijing's broader strategy of maintaining control as AI technologies advance while simultaneously pursuing aggressive adoption across its economy. The framework addresses mounting concerns about the psychological impact of AI-powered virtual companions on young users, a demographic particularly vulnerable to forming unhealthy attachments to digital entities. Service providers must now implement measures to prevent and resist content that is sexually suggestive, depicts horror or cruelty, or incites discrimination based on ethnicity or region [1].

Protecting Personal Data and Identity Verification Systems

The regulations tackle unauthorized use of personal information by explicitly banning the creation of digital humans using other people's data without consent. This provision addresses growing concerns about deepfakes and unauthorized digital replicas that could damage reputations or facilitate fraud [2]. The draft rules also prohibit using virtual humans to bypass identity verification systems, closing a potential loophole that could enable malicious actors to evade accountability or circumvent existing security protocols.

These measures align with China's broader cybersecurity priorities, ensuring that advances in artificial intelligence don't undermine established safeguards. The Cyberspace Administration of China emphasized that digital humans are prohibited from disseminating content that endangers national security, incites subversion of state power, promotes secession, or undermines national unity [1].

AI Industry Alignment with National Values and Mental Health Safeguards

Beyond security concerns, the regulations demonstrate China's commitment to aligning the AI industry with national values and socialist principles. According to an analysis published on China's cyberspace regulator website, "The governance of digital virtual humans is no longer merely an issue of industry norms; rather, it has become a strategic scientific problem that concerns the security of the cyberspace, public interests, and the high-quality development of the digital economy" [2].

The framework includes provisions requiring providers to intervene and offer professional assistance when users exhibit suicidal or self-harming tendencies, representing an unusual intersection of AI governance and mental health policy [1]. This suggests regulators recognize the potential psychological risks associated with intensive digital human interactions.

Strategic Timing Amid China's AI Ambitions

The timing of these regulations coincides with China's five-year policy blueprint issued last month, which outlined ambitious plans to aggressively adopt AI throughout its economy. The dual approach of promoting rapid AI development while tightening governance creates a distinctive regulatory environment in which innovation must operate within clearly defined ideological and security boundaries. The new rules aim to fill a gap in governance of the digital human sector, setting clear red lines for healthy industry development [2].

As the public comment period extends until May 6, industry observers will be watching how these regulations shape the global conversation around AI governance, particularly as other nations grapple with similar challenges around virtual companions, deepfakes, and the psychological impact of increasingly realistic digital humans on vulnerable populations.
