2 Sources
[1]
UNICEF calls for criminalisation of AI content depicting child sex abuse
The United Nations children's agency UNICEF on Wednesday called for countries to criminalize the creation of AI-generated child sexual abuse content, saying it was alarmed by reports of an increase in the number of artificial intelligence images sexualizing children. The agency also urged developers to implement safety-by-design approaches and guardrails to prevent misuse of AI models. It said digital companies should prevent the circulation of these images by strengthening content moderation with investment in detection technologies.

"The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up," UNICEF said in a statement. Deepfakes are AI-generated images, videos, and audio that convincingly impersonate real people. UNICEF also raised concerns about what it called the "nudification" of children, using AI to strip or alter clothing in photos to create fabricated nude or sexualized images. At least 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes in the past year, according to UNICEF.

Britain said on Saturday it plans to make it illegal to use AI tools to create child sexual abuse images, making it the first country to do so. Concerns have increased in recent years about the use of AI to generate child abuse content, particularly chatbots such as xAI's Grok, owned by Elon Musk, which has come under scrutiny for producing sexualized images of women and minors. A Reuters investigation found the chatbot continued to produce these images even when users explicitly warned the subjects had not consented.

xAI said on January 14 it had restricted image editing for Grok AI users and blocked users, based on their location, from generating images of people in revealing clothing in "jurisdictions where it's illegal." It did not identify the countries. It had earlier limited the use of Grok's image generation and editing features only to paying subscribers. (Reporting by Jasper Ward in Washington; editing by Michelle Nichols and Rod Nickel)
[2]
'Deepfake Abuse Is Abuse,' UNICEF Warns
"The harm from deepfake abuse is real and urgent," the UN agency said in a statement. "Children cannot wait for the law to catch up." At least 1.2 million youngsters have disclosed having had their images manipulated into sexually explicit deepfakes in the past year, according to a study across 11 countries conducted by the UN agency, international police agency, INTERPOL and the ECPAT global network working to end the sexual exploitation of children worldwide. In some countries, this represents one in 25 children or the equivalent of one child in a typical classroom, the study found. 'Nudification' tools Deepfakes - images, videos, or audio generated or manipulated with AI and designed to look real - are increasingly being used to produce sexualised content involving children, including through so-called "nudification", where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images. "When a child's image or identity is used, that child is directly victimised. Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help," UNICEF said. "Deepfake abuse is abuse, and there is nothing fake about the harm it causes." Demand for robust safeguards The UN agency said it strongly welcomed the efforts of those AI developers who are implementing "safety-by-design" approaches and robust guardrails to prevent misuse of their systems. However, the response so far is patchy, and too many AI models are not being developed with adequate safeguards. The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly. "Children themselves are deeply aware of this risk," UNICEF said, adding that in some of the study countries, up to two thirds of youngsters said they worry that AI could be used to create fake sexual images or videos. A fast-growing threat "Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention and protection measures." To address this fast-growing threat, the UN agency issued Guidance on AI and Children 3.0 in December with recommendations for policies and systems that uphold child rights.
UNICEF calls for the urgent criminalization of AI-generated child sexual abuse content after at least 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes. The UN agency warns that the harm from deepfake abuse targeting children is real and urgent, and it demands that AI developers implement robust safeguards while digital companies strengthen content moderation.
The United Nations children's agency UNICEF has issued an urgent call for countries worldwide to criminalize AI-generated child sexual abuse content, citing alarming reports of a surge in artificial intelligence images sexualizing children [1]. The agency's statement comes as at least 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes in the past year, according to a study conducted by UNICEF, INTERPOL, and the ECPAT global network [2]. In some countries, this represents one in 25 children, the equivalent of one child in a typical classroom, highlighting the scale of this growing threat.
"The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up," UNICEF stated
1
. DeepfakesâAI-generated images, videos, and audio that convincingly impersonate real peopleâare increasingly being weaponized to produce sexualized content involving children. The agency emphasized that when a child's image or identity is used, that child is directly victimized, and even without an identifiable victim, such content normalizes sexual exploitation and presents significant challenges for law enforcement in identifying and protecting children who need help2
.UNICEF raised particular concerns about the nudification of children, where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualized images
1
. This disturbing practice has become more accessible as AI models proliferate without adequate safeguards. Children themselves are acutely aware of this riskâin some of the study countries, up to two thirds of youngsters said they worry that AI could be used to create fake sexual images or videos2
. The varying levels of concern across countries underscore the urgent need for stronger awareness, prevention, and strong policies to protect children.Concerns have intensified in recent years about the use of AI to generate child abuse content, particularly AI chatbots producing sexualized content. xAI's Grok AIâowned by Elon Muskâhas come under scrutiny for producing sexualized images of women and minors
1
. A Reuters investigation found the chatbot continued to produce these images even when users explicitly warned the subjects had not consented. In response, xAI announced on January 14 that it had restricted image editing for Grok AI users and blocked users, based on their location, from generating images of people in revealing clothing in jurisdictions where it's illegal, though it did not identify the specific countries1
UNICEF urged developers to implement safety-by-design approaches and guardrails to prevent misuse of AI models [1]. While the agency welcomed efforts by some AI developers implementing robust safeguards, it noted that the response so far is patchy and too many AI models are not being developed with adequate protections [2]. The agency also called on digital companies to prevent the circulation of these images by strengthening content moderation with investment in detection technologies [1]. The risks can be compounded when generative AI tools are embedded directly into social media platforms, where manipulated images spread rapidly [2].

Britain announced on Saturday that it plans to make it illegal to use AI tools to create child sexual abuse images, making it the first country to do so [1]. This move signals a potential shift in how governments approach this fast-growing threat. To address the crisis, UNICEF issued Guidance on AI and Children 3.0 in December with recommendations for policies and systems that uphold child rights [2]. The agency's message is clear: "Deepfake abuse is abuse, and there is nothing fake about the harm it causes." As AI technology continues to advance, the question remains whether legal frameworks and technical safeguards can keep pace with the evolving threats facing children online.