3 Sources
[1]
UNICEF Calls on Governments to Criminalize AI-Generated Child Abuse Material - Decrypt
The agency urged tighter laws and "safety-by-design" rules for AI developers, including mandatory child-rights impact checks.

UNICEF issued an urgent call Wednesday for governments to criminalize AI-generated child sexual abuse material, citing alarming evidence that at least 1.2 million children worldwide had their images manipulated into sexually explicit deepfakes in the past year. The figures, revealed in Disrupting Harm Phase 2, a research project led by UNICEF's Office of Strategy and Evidence Innocenti, ECPAT International, and INTERPOL, show that in some nations the figure represents one in 25 children, the equivalent of one child in a typical classroom, according to a Wednesday statement and accompanying issue brief.

The research, based on a nationally representative household survey of approximately 11,000 children across 11 countries, highlights how perpetrators can now create realistic sexual images of a child without their involvement or awareness. In some study countries, up to two-thirds of children said they worry AI could be used to create fake sexual images or videos of them, though levels of concern vary widely between countries, according to the data.

"We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM)," UNICEF said. "Deepfake abuse is abuse, and there is nothing fake about the harm it causes."

The call gains urgency as French authorities raided X's Paris offices on Tuesday as part of a criminal investigation into alleged child pornography linked to the platform's AI chatbot Grok, with prosecutors summoning Elon Musk and several executives for questioning. A Center for Countering Digital Hate report released last month estimated that Grok produced 23,338 sexualized images of children over an 11-day period between December 29 and January 9.
The issue brief released alongside the statement notes these developments mark "a profound escalation of the risks children face in the digital environment," where a child can have their right to protection violated "without ever sending a message or even knowing it has happened."

The UK's Internet Watch Foundation flagged nearly 14,000 suspected AI-generated images on a single dark-web forum in one month, about a third of them confirmed as criminal, while South Korean authorities reported a tenfold surge in AI- and deepfake-linked sexual offenses between 2022 and 2024, with most suspects identified as teenagers.

The organization urgently called on all governments to expand definitions of child sexual abuse material to include AI-generated content and to criminalize its creation, procurement, possession, and distribution. UNICEF also demanded that AI developers implement safety-by-design approaches and that digital companies prevent the circulation of such material. The brief calls for states to require companies to conduct child-rights due diligence, particularly child-rights impact assessments, and for every actor in the AI value chain to embed safety measures, including pre-release safety testing for open-source models.

"The harm from deepfake abuse is real and urgent," UNICEF warned. "Children cannot wait for the law to catch up."
[2]
UNICEF calls for criminalisation of AI content depicting child sex abuse
The United Nations children's agency UNICEF on Wednesday called for countries to criminalize the creation of AI-generated child sexual abuse content, saying it was alarmed by reports of an increase in the number of artificial intelligence images sexualizing children.

The agency also urged developers to implement safety-by-design approaches and guardrails to prevent misuse of AI models. It said digital companies should prevent the circulation of these images by strengthening content moderation with investment in detection technologies.

"The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up," UNICEF said in a statement.

Deepfakes are AI-generated images, videos, and audio that convincingly impersonate real people. UNICEF also raised concerns about what it called the "nudification" of children, using AI to strip or alter clothing in photos to create fabricated nude or sexualized images. At least 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes in the past year, according to UNICEF.

Britain said on Saturday it plans to make it illegal to use AI tools to create child sexual abuse images, making it the first country to do so. Concerns have increased in recent years about the use of AI to generate child abuse content, particularly via chatbots such as xAI's Grok, owned by Elon Musk, which has come under scrutiny for producing sexualized images of women and minors. A Reuters investigation found the chatbot continued to produce these images even when users explicitly warned that the subjects had not consented.
xAI said on January 14 it had restricted image editing for Grok AI users and blocked users, based on their location, from generating images of people in revealing clothing in "jurisdictions where it's illegal." It did not identify the countries. It had earlier limited the use of Grok's image generation and editing features only to paying subscribers. (Reporting by Jasper Ward in Washington; editing by Michelle Nichols and Rod Nickel)
[3]
'Deepfake Abuse Is Abuse,' UNICEF Warns
"The harm from deepfake abuse is real and urgent," the UN agency said in a statement. "Children cannot wait for the law to catch up."

At least 1.2 million youngsters have disclosed having had their images manipulated into sexually explicit deepfakes in the past year, according to a study across 11 countries conducted by the UN agency, the international police organization INTERPOL, and ECPAT, the global network working to end the sexual exploitation of children worldwide. In some countries, this represents one in 25 children, or the equivalent of one child in a typical classroom, the study found.

'Nudification' tools

Deepfakes - images, videos, or audio generated or manipulated with AI and designed to look real - are increasingly being used to produce sexualised content involving children, including through so-called "nudification", where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images.

"When a child's image or identity is used, that child is directly victimised. Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help," UNICEF said. "Deepfake abuse is abuse, and there is nothing fake about the harm it causes."

Demand for robust safeguards

The UN agency said it strongly welcomed the efforts of those AI developers who are implementing "safety-by-design" approaches and robust guardrails to prevent misuse of their systems. However, the response so far is patchy, and too many AI models are not being developed with adequate safeguards. The risks can be compounded when generative AI tools are embedded directly into social media platforms, where manipulated images spread rapidly.
"Children themselves are deeply aware of this risk," UNICEF said, adding that in some of the study countries, up to two thirds of youngsters said they worry that AI could be used to create fake sexual images or videos.

A fast-growing threat

"Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention and protection measures."

To address this fast-growing threat, the UN agency issued Guidance on AI and Children 3.0 in December, with recommendations for policies and systems that uphold child rights.
UNICEF issued an urgent call for governments worldwide to criminalize AI-generated child sexual abuse material after research revealed at least 1.2 million children had their images manipulated into sexually explicit deepfakes in the past year. The UN agency warned that deepfake abuse is real abuse, demanding AI developers implement safety-by-design approaches and mandatory child-rights impact assessments to protect vulnerable children.
UNICEF issued an urgent call Wednesday for governments to criminalize AI-generated child sexual abuse material, citing alarming evidence that at least 1.2 million children worldwide had their images manipulated into sexually explicit deepfakes in the past year [1]. The figures, revealed in Disrupting Harm Phase 2, a research project led by UNICEF's Office of Strategy and Evidence Innocenti, ECPAT International, and INTERPOL, show that in some nations this represents one in 25 children, the equivalent of one child in a typical classroom [3]. The research, based on a nationally representative household survey of approximately 11,000 children across 11 countries, highlights how perpetrators can now create realistic sexual images of a child without their involvement or awareness.
Source: Decrypt
The UN agency raised particular concerns about the "nudification" of children using AI, where tools are used to strip or alter clothing in photos to create fabricated nude or sexualized images [2]. In some study countries, up to two-thirds of children said they worry AI could be used to create fake sexual images or videos of them, though levels of concern vary widely between countries. "We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM)," UNICEF stated [1]. "Deepfake abuse is abuse, and there is nothing fake about the harm it causes."
Source: ET
The call gains urgency as French authorities raided X's Paris offices on Tuesday as part of a criminal investigation into alleged child pornography linked to the platform's AI chatbot Grok, with prosecutors summoning Elon Musk and several executives for questioning [1]. A Center for Countering Digital Hate report released last month estimated Grok produced 23,338 sexualized images of children over an 11-day period between December 29 and January 9. The AI chatbot, owned by xAI, has come under scrutiny for producing sexualized images of women and minors, with a Reuters investigation finding it continued to produce these images even when users explicitly warned that the subjects had not consented.
The Internet Watch Foundation flagged nearly 14,000 suspected AI-generated images on a single dark-web forum in one month, with about a third confirmed as criminal [1]. South Korean authorities reported a tenfold surge in AI- and deepfake-linked sexual offenses between 2022 and 2024, with most suspects identified as teenagers. These developments mark "a profound escalation of the risks children face in the digital environment," where a child can have their right to protection violated "without ever sending a message or even knowing it has happened," according to the issue brief.

UNICEF urgently called on all governments to expand definitions of child sexual abuse material to include AI-generated content and to criminalize its creation, procurement, possession, and distribution [1]. The agency also urged AI developers to implement safety measures through safety-by-design approaches and demanded that digital companies prevent the circulation of such material by strengthening content moderation and investing in detection technologies [2]. The brief calls for states to require companies to conduct child-rights due diligence, particularly child-rights impact assessments, and for every actor in the AI value chain to embed guardrails, including pre-release safety testing for open-source models [1]. Britain announced on Saturday that it plans to make it illegal to use AI tools to create child sexual abuse images, making it the first country to do so [2]. "The harm from deepfake abuse is real and urgent," UNICEF warned. "Children cannot wait for the law to catch up."

Summarized by Navi