3 Sources
[1]
The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought
Around the world, teenage boys are saving Instagram and Snapchat images of girls they know from school and using harmful "nudify" apps to create fake nude photos or videos of them. These deepfakes can quickly be shared across whole schools, leaving victims feeling humiliated, violated, hopeless, and scared the images will haunt them forever. The deepfake crisis hitting schools started slowly a couple of years ago, but it has since grown considerably as the technology used to create the explicit imagery has become more accessible. Deepfake sexual abuse incidents have hit around 90 schools globally and have impacted more than 600 pupils, according to a review of publicly reported incidents by WIRED and Indicator, a publication focusing on digital deception and misinformation. The findings show that since 2023, schoolchildren -- most often boys in high schools -- in at least 28 countries have been accused of using generative AI to target their classmates with sexualized deepfakes. The explicit imagery, containing minors, is considered child sexual abuse material (CSAM). This analysis is believed to be the first to review real-world cases of AI deepfake abuse taking place at schools globally. As a whole, the analysis shows the worldwide reach of harmful AI nudification technology, which can earn its creators millions of dollars per year, and shows that schools and law enforcement officials are often unprepared to respond to these serious sexual abuse incidents. Across North America, there have been nearly 30 reported deepfake sexual abuse cases since 2023 -- including one with more than 60 alleged victims, one where the victim was temporarily expelled from school, and others where pupils at multiple schools have allegedly been targeted simultaneously. More than 10 cases have been publicly reported in South America, more than 20 across Europe, and another dozen in Australia and East Asia combined.
The true scale of deepfake sexual abuse taking place in schools is likely much higher. One survey by United Nations children's agency Unicef estimates that 1.2 million children had sexual deepfakes created of them last year. One in five young people in Spain told Save the Children researchers that deepfake nudes had been created of them. Child protection group Thorn found one in eight teens know someone targeted, and in 2024, 15 percent of students surveyed by the Center for Democracy and Technology said they knew about AI-generated deepfakes linked to their school. "I think you'd be hard-pressed to find a school that has not been affected by this," says Lloyd Richardson, director of technology at the Canadian Centre for Child Protection. "The most important thing is how we're able to help the victims when this happens, because the effects of this can be massive." WIRED and Indicator's analysis looked at incidents that have been publicly reported with specific details, such as the locations of schools and potential victim counts. Most of these come from English-language reporting, and data is lacking for many countries. Many incidents are never reported in the press, lack specific details when they are, or are instead handled privately by schools and law enforcement officials.
[2]
Opinion | Deepfake Nudes Are Haunting America's Teens
You are a teenage girl in 2026. You're going hiking. You're at the beach. You're getting glam for a homecoming dance, posing with your friends, enjoying the kinds of moments that high school kids have been memorializing without incident for decades. These are the kinds of wholesome, keepsake memories that have been forever ruined for the three Jane Does in Tennessee who are part of a class-action lawsuit filed in March against xAI, Elon Musk's A.I. company. A person known to at least one of these three girls used xAI's assistant, Grok, to generate sexually explicit images appearing to be them based on real, clothed photos. Of Plaintiff Jane Doe 1, the lawsuit asserts that doctored images "showed her entire body, including her genitals, without any clothes. The video depicted her undressing until she was entirely nude." These scenes were created, in part, using this teenager's face from her yearbook photo. It gets worse. The perpetrator allegedly circulated altered pictures of at least 18 underage girls to Discord, a popular messaging platform. Their first names and the name of their school appear to be attached to the images, making them identifiable. All three Jane Does have experienced extreme stress because of this victimization. Jane Doe 1, according to the lawsuit, "feels acute anxiety about who has viewed these files online and feels a complete lack of control over the ongoing dissemination of the files." The suit describes lives narrowed by this injury. Two of the Jane Does fear engaging in normal activities like going to class as a result of this abuse, and all three say that their reputations are damaged when people believe these images are real. While this suit focuses on female victims, teenage boys have also been harmed by harassment and extortion using A.I.-generated deepfakes.
[3]
Their daughter was bullied by fake nude images at school. They're warning others.
She was sitting in the lunchroom when a friend approached her. "Your nudes got leaked," the friend said. But the 16-year-old student had never sent a nude. Instead, there was a deepfake pornographic image of her circulating at her high school. In her next class, she asked another friend, "Have you heard anything about nude images of me?" The friend said yes - at a party. Someone showed a photo of a naked body and identified it as the 16-year-old girl. Her parents told USA TODAY that the deepfake scandal began months prior, in December 2024, when their daughter had a falling out with a friend. A rumor spread through the school - located in Pennsylvania's New Hope-Solebury School District - that their daughter was "sending nudes." But until the exchange in the cafeteria in February 2025, her parents say she had no idea explicit photos were being generated and distributed among students. The impact on the student and her family has been devastating. Once social and bubbly, the now 18-year-old has become reclusive, her parents say, afraid to go to the grocery store or pharmacy out of fear that other patrons have seen the deepfake images. The parents and their legal representative, Matthew Faranda-Diedrich, a partner at the Royer Cooper Cohen Braunfeld law firm, claim the school failed to adequately investigate the deepfake scandal and provide support in the aftermath. They are working with Faranda-Diedrich to initiate legal action against the school district. The family requested to remain anonymous to protect their daughter's privacy. Lawyers for the school district and representatives for the high school did not return USA TODAY's requests for comment. The school district's solicitor sent a letter to Faranda-Diedrich in March 2026 that was reviewed by USA TODAY. The letter states that a Title IX investigation conducted by the school did not find proof that perpetrators accused of sharing the photos "actually circulated deepfake nudes and/or rumors of the same."
But the parents pulled their daughter from the public school after what they describe as persistent bullying. Her parents are proud of the progress she's made at her new private school - she's planning to attend senior prom with her new friends after skipping junior prom at her old school - but they say the deepfake scandal still weighs on the entire family. They aren't alone in facing the impact of deepfake abuse. A growing number of schools across the country are grappling with the rise of deepfakes and "nudify" applications, and parents are often left in the dark, according to AI experts, lawyers and impacted families. For victims of deepfake abuse, the emotional toll can be serious and long-lasting. Often when they reach out for support, they feel alone or misunderstood. When the 18-year-old's parents share their experience with friends, most respond with shock. "There's still a lack of awareness around that this can happen to anybody," her father says. "We live in a very resourced school district, a very resourced area of the world. If it can happen here, it can happen anywhere." Here's what they want other parents to know about deepfake abuse, and how experts are helping schools navigate this rising crisis.
'It's a fake picture, but it's a real life'
The first time parents hear about pornographic deepfakes shouldn't be when their children encounter this type of media or are victimized. "The school is supposed to be the one that is up to date on these types of things that are happening," the mother says. "There should be outreach to the community, because they do outreach for other things like bullying." In 2024, the Center for Democracy & Technology surveyed 3,170 K-12 students, teachers and parents to gauge the prevalence of AI deepfakes in schools and how prepared schools were to handle cases.
The survey found that 6 in 10 teachers were not aware of school policies and procedures for addressing authentic or deepfake sexual images, and only 16% said their school's teacher training covered how to protect the privacy of a student depicted in a deepfake. Only 13% of students reported that their school has explained that sharing this type of AI-generated media is harmful to the person depicted. Evan Harris, a national expert on emerging AI risks in schools at Pathos Consulting Group, says schools already have so much on their plate and are struggling to keep up with this technology. Harris has conducted thousands of webinars to help schools navigate this rising crisis, worked on the ground to workshop policies with various schools and contributed to a free educational program co-led by Elliston Berry, a victim of deepfake abuse, and Adaptive Security, a security awareness training platform. "A big part of my job is making all of this feel manageable, because if people are in that fight-or-flight mode, they can't really get into anything concrete or actionable," Harris explains. He starts by establishing three buckets: policies, crisis readiness plans, and preventative education for faculty, parents and students. It's a "shared responsibility" between the school, parents and the students themselves, the Pennsylvania student's father says. "It takes a village." Her mother also wants students to complete educational trainings that emphasize the "human repercussions of doing this to an individual." "(Our daughter) loved school, she loved her town," her father says. "And in the end, she ended up having to leave the school and she's afraid to go into town." "It's a fake picture, but it's a real life," her mother says.
Don't wait to talk to your kids about deepfakes
The traditional parental advice and health class warnings ("don't send nude photos") are no longer sufficient, experts say.
Anyone can have an explicit photo of them made and shared, even if they never take it themselves. The Pennsylvania student's parents say that keeping an open dialogue with your children is imperative. "Kids that are going through this need to be heard. They need to be believed," her father says. "Being able to know what was going on, we were able to help her," he continued. This required tough conversations, but they say this helped prevent their daughter from sinking further into a "dark place." Harris says that schools and parents need to establish an "anti-judgment culture" by modeling positive responses while educating stakeholders about deepfake abuse - before a crisis arises. "(Students) have to hear that message that they go to a school where if they come forward, they will be supported, they will be believed and they won't be judged," he says. "Shame is the number one enemy here. That's what we're trying to counter."
Parents and experts call for stronger mandatory reporting laws
The Pennsylvania student's parents say the school took too long to act on multiple complaints about what was happening to their daughter and did not file a ChildLine report with the state, meant to identify potential child abuse, until May 2025. They want schools to implement stronger protections and reporting practices to prevent deepfake abuse from happening in the first place - and to better support students when it does. "It's about a lack of understanding," Faranda-Diedrich says. "If someone had walked in and said there's child pornography being spread in the school, there's no way the school would have handled it the way they did." State Sen. Tracy Pennycuick cosponsored the bill updating Pennsylvania's AI child pornography laws and is hoping to see her state's bipartisan action extend nationally. In Pennsylvania, she is sponsoring another bill that would tighten up reporting requirements for mandatory reporters, like teachers and school administrators.
Pennycuick wants to leave "no ambiguity" in what mandatory reporters are required to act on: "If you suspect that there is any kind of child sexual abuse material, you report." The Pennsylvania student's parents want to protect other families from living their nightmare, too. "Schools need to trust the students that are coming to them and realize it's a new world we're living in," her mother says. "If you hear something, say something. I'm around my children; I hear their friends talking. Start the conversation with them about it. It's uncomfortable, but it's something that's real nowadays." This story was supported by a grant from the Tarbell Center for AI Journalism. Funders do not provide editorial input.
A global analysis reveals that deepfake sexual abuse incidents have hit around 90 schools worldwide, impacting more than 600 pupils across 28 countries since 2023. Teenage boys are using AI-powered nudify applications to create fake nude images of classmates, leaving victims traumatized while schools and law enforcement struggle to respond adequately to what experts call child sexual abuse material.
A disturbing pattern has emerged across the globe: teenage boys are downloading photos of female classmates from Instagram and Snapchat, then using AI-powered nudify applications to generate explicit imagery. The deepfake crisis in schools has escalated dramatically, with incidents now documented at around 90 schools worldwide, affecting more than 600 pupils across at least 28 countries since 2023, according to a comprehensive review by WIRED and Indicator [1]. These AI-generated images constitute child sexual abuse material (CSAM), yet many schools and law enforcement agencies remain unprepared to handle these serious cases of deepfake sexual abuse.
The geographic spread is extensive. Nearly 30 cases have been reported across North America, including one incident with more than 60 alleged victims. More than 10 cases surfaced in South America, over 20 across Europe, and another dozen in Australia and East Asia combined [1]. However, these figures likely represent only a fraction of the actual scale. A Unicef survey estimates that 1.2 million children had sexual deepfakes created of them last year, while one in five young people in Spain told Save the Children researchers that deepfake nudes had been created of them [1].

The legal landscape is shifting as victims fight back. A class-action lawsuit filed in March 2026 against xAI, Elon Musk's AI company, highlights how accessible technology enables this abuse. Three Jane Does in Tennessee allege that a perpetrator used xAI's assistant, Grok, to generate sexually explicit images based on their real, clothed photos, including a yearbook picture. One victim's doctored video "showed her entire body, including her genitals, without any clothes," according to the lawsuit [2]. The perpetrator allegedly circulated altered pictures of at least 18 underage girls to Discord, attaching their first names and school name to make them identifiable.
The psychological impact on victims is severe and long-lasting. A 16-year-old student in Pennsylvania's New Hope-Solebury School District learned in her school lunchroom that deepfake pornographic images of her were circulating. "Your nudes got leaked," a friend told her, despite the fact she had never sent such photos [3]. The scandal began in December 2024 after a falling out with a friend, and by February 2025, explicit photos were being shown at parties and spread among students.
Once social and bubbly, the now 18-year-old has become reclusive, afraid to visit the grocery store or pharmacy out of fear that others have seen the images [3]. Jane Doe 1 from the Tennessee lawsuit "feels acute anxiety about who has viewed these files online and feels a complete lack of control over the ongoing dissemination," according to court documents. Two of the Jane Does fear engaging in normal activities like attending class, and all three report reputational damage when people believe the images are real [2]. While most reported cases involve female victims, teenage boys have also experienced harassment and extortion using deepfake technology.
Many families report that schools fail to provide adequate responses. The Pennsylvania family is pursuing legal action against the school district after what they describe as an inadequate investigation and a lack of support. A Title IX investigation conducted by the school found no proof that accused perpetrators "actually circulated deepfake nudes and/or rumors of the same," according to a letter from the school district's solicitor reviewed by USA TODAY [3]. The parents pulled their daughter from public school after persistent bullying and victimization.

A 2024 survey by the Center for Democracy and Technology of 3,170 K-12 students, teachers, and parents revealed alarming gaps in preparedness. Six in 10 teachers were unaware of school policies for addressing deepfake sexual images, only 16 percent said teacher training covered protecting student privacy in deepfake cases, and just 13 percent of students reported that their school explained the harm of sharing AI-generated images [3]. "I think you'd be hard-pressed to find a school that has not been affected by this," says Lloyd Richardson, director of technology at the Canadian Centre for Child Protection [1].

Parental awareness remains critically low. The Pennsylvania mother emphasizes that "the school is supposed to be the one that is up to date on these types of things" and should provide outreach about deepfake risks just as schools do for other forms of bullying [3]. Evan Harris, a national expert on emerging AI risks in schools at Pathos Consulting Group, has conducted thousands of webinars to help schools navigate this crisis and contributed to free educational programs. The accessibility of nudify applications, which can earn creators millions of dollars annually, means this threat will likely intensify before adequate protections are in place [1]. As legal frameworks evolve and awareness grows, the focus must shift to supporting victims and holding both perpetrators and enabling platforms accountable.