UNICEF Demands Global Crackdown on AI-Generated Child Abuse as 1.2 Million Kids Victimized

Reviewed by Nidhi Govil


UNICEF issued an urgent call for governments worldwide to criminalize AI-generated child sexual abuse material after research revealed at least 1.2 million children had their images manipulated into sexually explicit deepfakes in the past year. The UN agency warned that deepfake abuse is real abuse, demanding AI developers implement safety-by-design approaches and mandatory child-rights impact assessments to protect vulnerable children.

UNICEF Sounds Alarm on AI-Generated Child Abuse Crisis

UNICEF issued an urgent call Wednesday for governments to criminalize AI-generated child abuse material, citing alarming evidence that at least 1.2 million children worldwide had their images manipulated into sexually explicit deepfakes in the past year [1]. The figures, revealed in Disrupting Harm Phase 2, a research project led by UNICEF's Office of Strategy and Evidence Innocenti, ECPAT International, and INTERPOL, show that in some nations this represents one in 25 children, the equivalent of one child in a typical classroom [3]. The research, based on a nationally representative household survey of approximately 11,000 children across 11 countries, highlights how perpetrators can now create realistic sexual images of a child without their involvement or awareness.

Source: Decrypt


Growing Threat of Nudification and Deepfake Abuse

The UN agency raised particular concerns about the nudification of children using AI, where tools are used to strip or alter clothing in photos to create fabricated nude or sexualized images [2]. In some study countries, up to two-thirds of children said they worry AI could be used to create fake sexual images or videos of them, though levels of concern vary widely between countries. "We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM)," UNICEF stated [1]. "Deepfake abuse is abuse, and there is nothing fake about the harm it causes."

Source: ET


Grok Controversy Highlights Urgent Need for Action

The call gains urgency as French authorities raided X's Paris offices on Tuesday as part of a criminal investigation into alleged child pornography linked to the platform's AI chatbot Grok, with prosecutors summoning Elon Musk and several executives for questioning [1]. A Center for Countering Digital Hate report released last month estimated Grok produced 23,338 sexualized images of children over an 11-day period between December 29 and January 9. The AI chatbot owned by xAI has come under scrutiny for producing sexualized images of women and minors, with a Reuters investigation finding it continued to produce these images even when users explicitly warned that the subjects had not consented.

Escalating Scale of AI-Generated Child Sexual Abuse Content

The Internet Watch Foundation flagged nearly 14,000 suspected AI-generated images on a single dark-web forum in one month, with about a third confirmed as criminal [1]. South Korean authorities reported a tenfold surge in AI and deepfake-linked sexual offenses between 2022 and 2024, with most suspects identified as teenagers. These developments mark "a profound escalation of the risks children face in the digital environment," where a child can have their right to protection violated "without ever sending a message or even knowing it has happened," according to the issue brief.

Demands for Safety-by-Design Approaches and Criminalization

UNICEF urgently called on all governments to expand definitions of child sexual abuse material to include AI-generated content and to criminalize its creation, procurement, possession, and distribution [1]. The agency also urged AI developers to implement safety measures through safety-by-design approaches and demanded that digital companies prevent the circulation of such material by strengthening content moderation and investing in detection technologies [2]. The brief calls for states to require companies to conduct child-rights due diligence, particularly child-rights impact assessments, and for every actor in the AI value chain to embed guardrails, including pre-release safety testing for open-source models [1]. Britain announced on Saturday that it plans to make it illegal to use AI tools to create child sexual abuse images, which would make it the first country to do so [2]. "The harm from deepfake abuse is real and urgent," UNICEF warned. "Children cannot wait for the law to catch up."
