UNICEF Demands Global Action to Criminalize AI Child Abuse as 1.2 Million Kids Report Deepfakes

Reviewed by Nidhi Govil


UNICEF calls for urgent criminalization of AI-generated child sexual abuse content after 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes. The UN agency warns that the harm from deepfake abuse targeting children is real and urgent, and demands that AI developers implement robust safeguards and that digital companies strengthen content moderation.

UNICEF Issues Urgent Call to Criminalize AI-Generated Child Sexual Abuse Content

The United Nations children's agency UNICEF has issued an urgent call for countries worldwide to criminalize AI-generated child sexual abuse content, citing alarming reports of a surge in AI-generated images sexualizing children [1]. The agency's statement comes as at least 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes in the past year, according to a study conducted by UNICEF, INTERPOL, and the ECPAT global network [2]. In some countries, this represents one in 25 children, the equivalent of one child in a typical classroom, highlighting the scale of this growing threat.

Source: ET

Deepfake Abuse Targeting Children Reaches Crisis Levels

"The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up," UNICEF stated

1

. Deepfakes—AI-generated images, videos, and audio that convincingly impersonate real people—are increasingly being weaponized to produce sexualized content involving children. The agency emphasized that when a child's image or identity is used, that child is directly victimized, and even without an identifiable victim, such content normalizes sexual exploitation and presents significant challenges for law enforcement in identifying and protecting children who need help

2

.

The Rise of Nudification of Children Through AI Tools

UNICEF raised particular concerns about the "nudification" of children, in which AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualized images [1]. This disturbing practice has become more accessible as AI models proliferate without adequate safeguards. Children themselves are acutely aware of the risk: in some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos [2]. The varying levels of concern across countries underscore the urgent need for stronger awareness, prevention efforts, and robust policies to protect children.

AI Chatbots Producing Sexualized Content Under Scrutiny

Concerns have intensified in recent years about the use of AI to generate child abuse content, particularly by AI chatbots producing sexualized content. Grok, the chatbot from Elon Musk's xAI, has come under scrutiny for producing sexualized images of women and minors [1]. A Reuters investigation found the chatbot continued to produce these images even when users explicitly warned that the subjects had not consented. In response, xAI announced on January 14 that it had restricted image editing for Grok users and, based on their location, blocked users from generating images of people in revealing clothing in jurisdictions where it is illegal, though it did not identify the specific countries [1].

Safety Measures for AI Developers and Digital Companies

UNICEF urged developers to implement safety-by-design approaches and guardrails to prevent misuse of AI models [1]. While the agency welcomed efforts by some AI developers to implement robust safeguards, it noted that the response so far is patchy and that too many AI models are being developed without adequate protections [2]. The agency also called on digital companies to prevent the circulation of these images by strengthening content moderation and investing in detection technologies [1]. The risks can be compounded when generative AI tools are embedded directly into social media platforms, where manipulated images spread rapidly [2].

Britain Takes First Step to Criminalize AI-Generated Content

Britain announced on Saturday that it plans to make it illegal to use AI tools to create child sexual abuse images, becoming the first country to do so [1]. The move signals a potential shift in how governments approach this fast-growing threat. To address the crisis, UNICEF issued its Guidance on AI and Children 3.0 in December, with recommendations for policies and systems that uphold child rights [2]. The agency's message is clear: "Deepfake abuse is abuse, and there is nothing fake about the harm it causes." As AI technology continues to advance, the question remains whether legal frameworks and technical safeguards can keep pace with the evolving threats facing children online.
