2 Sources
[1]
Opinion | Deepfake Nudes Are Haunting America's Teens
You are a teenage girl in 2026. You're going hiking. You're at the beach. You're getting glam for a homecoming dance, posing with your friends, enjoying the kinds of moments that high school kids have been memorializing without incident for decades. These are the kinds of wholesome, keepsake memories that have been forever ruined for the three Jane Does in Tennessee who are part of a class-action lawsuit filed in March against xAI, Elon Musk's A.I. company.

A person known to at least one of these three girls used xAI's assistant, Grok, to generate sexually explicit images appearing to be them based on real, clothed photos. Of Plaintiff Jane Doe 1, the lawsuit asserts that doctored images "showed her entire body, including her genitals, without any clothes. The video depicted her undressing until she was entirely nude." These scenes were created, in part, using this teenager's face from her yearbook photo.

It gets worse. The perpetrator allegedly circulated altered pictures of at least 18 underage girls on Discord, a popular messaging platform. Their first names and the name of their school appear to be attached to the images, making them identifiable.

All three Jane Does have experienced extreme stress because of this victimization. Jane Doe 1, according to the lawsuit, "feels acute anxiety about who has viewed these files online and feels a complete lack of control over the ongoing dissemination of the files." The suit describes lives narrowed by this injury. Two of the Jane Does fear engaging in normal activities like going to class as a result of this abuse, and all three say their reputations are damaged when people believe these images are real. While this suit focuses on female victims, teenage boys have also been harmed by harassment and extortion using A.I.-generated deepfakes.
[2]
Their daughter was bullied by fake nude images at school. They're warning others.
She was sitting in the lunchroom when a friend approached her. "Your nudes got leaked," the friend said. But the 16-year-old student had never sent a nude. Instead, a deepfake pornographic image of her was circulating at her high school.

In her next class, she asked another friend, "Have you heard anything about nude images of me?" The friend said yes, at a party: someone had shown a photo of a naked body and identified it as the 16-year-old girl.

Her parents told USA TODAY that the deepfake scandal began months prior, in December 2024, when their daughter had a falling out with a friend. A rumor spread through the school, located in Pennsylvania's New Hope-Solebury School District, that their daughter was "sending nudes." But until the exchange in the cafeteria in February 2025, her parents say, she had no idea explicit photos were being generated and distributed among students.

The impact on the student and her family has been devastating. Once social and bubbly, the now 18-year-old has become reclusive, her parents say, afraid to go to the grocery store or pharmacy out of fear that other patrons have seen the deepfake images.

The parents and their legal representative, Matthew Faranda-Diedrich, a partner at the law firm Royer Cooper Cohen Braunfeld, claim the school failed to adequately investigate the deepfake scandal or provide support in the aftermath. They are working with Faranda-Diedrich to initiate legal action against the school district. The family requested anonymity to protect their daughter's privacy.

Lawyers for the school district and representatives for the high school did not return USA TODAY's requests for comment. The school district's solicitor sent a letter to Faranda-Diedrich in March 2026 that was reviewed by USA TODAY. The letter states that a Title IX investigation conducted by the school did not find proof that the accused perpetrators "actually circulated deepfake nudes and/or rumors of the same."
But the parents pulled their daughter from the public school after what they describe as persistent bullying. They are proud of the progress she's made at her new private school (she's planning to attend senior prom with her new friends after skipping junior prom at her old school), but they say the deepfake scandal still weighs on the entire family.

They aren't alone in facing the impact of deepfake abuse. A growing number of schools across the country are grappling with the rise of deepfakes and "nudify" applications, and parents are often left in the dark, according to AI experts, lawyers and impacted families. For victims of deepfake abuse, the emotional toll can be serious and long-lasting, and when they reach out for support, they often feel alone or misunderstood.

When the 18-year-old's parents share their experience with friends, most respond with shock. "There's still a lack of awareness around that this can happen to anybody," her father says. "We live in a very resourced school district, a very resourced area of the world. If it can happen here, it can happen anywhere." Here's what they want other parents to know about deepfake abuse, and how experts are helping schools navigate this rising crisis.

'It's a fake picture, but it's a real life'

The first time parents hear about pornographic deepfakes shouldn't be when their children encounter this type of media or are victimized. "The school is supposed to be the one that is up to date on these types of things that are happening," the mother says. "There should be outreach to the community, because they do outreach for other things like bullying."

In 2024, the Center for Democracy & Technology surveyed 3,170 K-12 students, teachers and parents to gauge the prevalence of AI deepfakes in schools and how prepared schools were to handle cases.
The survey found that 6 in 10 teachers were not aware of school policies and procedures for addressing authentic or deepfake sexual images, and only 16% said their school's teacher training covered how to protect the privacy of a student depicted in a deepfake. Only 13% of students reported that their school has explained that sharing this type of AI-generated media is harmful to the person depicted.

Evan Harris, a national expert on emerging AI risks in schools at Pathos Consulting Group, says schools already have so much on their plate and are struggling to keep up with this technology. Harris has conducted thousands of webinars to help schools navigate this rising crisis, worked on the ground to workshop policies with various schools and contributed to a free educational program co-led by Elliston Berry, a victim of deepfake abuse, and Adaptive Security, a security awareness training platform.

"A big part of my job is making all of this feel manageable, because if people are in that fight-or-flight mode, they can't really get into anything concrete or actionable," Harris explains. He starts by establishing three buckets: policies, crisis readiness plans, and preventative education for faculty, parents and students.

It's a "shared responsibility" between the school, parents and the students themselves, the Pennsylvania student's father says. "It takes a village." Her mother also wants students to complete educational trainings that emphasize the "human repercussions of doing this to an individual."

"(Our daughter) loved school, she loved her town," her father says. "And in the end, she ended up having to leave the school and she's afraid to go into town." "It's a fake picture, but it's a real life," her mother says.

Don't wait to talk to your kids about deepfakes

The traditional parental advice and health-class warnings ("don't send nude photos") are no longer sufficient, experts say.
Anyone can have an explicit photo of themselves made and shared, even if they never take one. The Pennsylvania student's parents say that keeping an open dialogue with your children is imperative. "Kids that are going through this need to be heard. They need to be believed," her father says. "Being able to know what was going on, we were able to help her," he continued. This required tough conversations, but they say it helped prevent their daughter from sinking further into a "dark place."

Harris says that schools and parents need to establish an "anti-judgment culture" by modeling positive responses while educating stakeholders about deepfake abuse, before a crisis arises. "(Students) have to hear that message that they go to a school where if they come forward, they will be supported, they will be believed and they won't be judged," he says. "Shame is the number one enemy here. That's what we're trying to counter."

Parents and experts call for stronger mandatory reporting laws

The Pennsylvania student's parents say the school took too long to act on multiple complaints about what was happening to their daughter and did not file a ChildLine report to the state, meant to identify potential child abuse, until May 2025. They want schools to implement stronger protections and reporting practices to prevent deepfake abuse from happening in the first place, and to better support students when it does. "It's about a lack of understanding," Faranda-Diedrich says. "If someone had walked in and said there's child pornography being spread in the school, there's no way the school would have handled it the way they did."

State Sen. Tracy Pennycuick cosponsored the bill updating Pennsylvania's AI child pornography laws and hopes to see the state's bipartisan action extend nationally. In Pennsylvania, she is sponsoring another bill that would tighten reporting requirements for mandatory reporters, like teachers and school administrators.
Pennycuick wants to leave "no ambiguity" about what mandatory reporters are required to act on: "If you suspect that there is any kind of child sexual abuse material, you report."

The Pennsylvania student's parents want to protect other families from living their nightmare, too. "Schools need to trust the students that are coming to them and realize it's a new world we're living in," her mother says. "If you hear something, say something. I'm around my children; I hear their friends talking. Start the conversation with them about it. It's uncomfortable, but it's something that's real nowadays."

This story was supported by a grant from the Tarbell Center for AI Journalism. Funders do not provide editorial input.
Three Tennessee teenagers filed a class-action lawsuit against Elon Musk's xAI after its AI assistant Grok was allegedly used to create and distribute sexually explicit deepfake images of at least 18 underage girls. The case highlights a growing crisis of deepfake abuse in schools nationwide, where victims face severe emotional distress and schools struggle to respond. Experts warn that gaps in school policies and parental awareness are leaving students vulnerable to this emerging form of digital harassment.
Three teenage girls in Tennessee have become the face of a disturbing trend sweeping American schools. In March 2026, they filed a class-action lawsuit against xAI, Elon Musk's artificial intelligence company, after its chatbot assistant Grok was allegedly used to generate sexually explicit deepfake images of them and other underage girls; in all, altered pictures of at least 18 girls were allegedly circulated [1]. The lawsuit details how a perpetrator transformed innocent, clothed photos, including yearbook pictures, into AI-generated deepfakes showing the victims nude, then distributed the fake images via Discord, a popular messaging platform. The images included the girls' first names and the name of their school, making the victims identifiable and the abuse devastatingly public.
According to the legal filing, one plaintiff's doctored video "showed her entire body, including her genitals, without any clothes," created in part from her yearbook photo [1]. The class-action lawsuit represents a critical test of how AI companies will be held accountable for tools that enable the creation of non-consensual imagery. While the suit focuses on female victims, teenage boys have also experienced harassment and extortion through AI-generated deepfakes, signaling that this crisis cuts across gender lines.

The Tennessee case isn't isolated. In Pennsylvania's New Hope-Solebury School District, a 16-year-old student discovered in February 2025 that deepfake pornographic images of her were circulating among classmates [2]. The bullying began after a friendship ended in December 2024, when rumors spread that she was "sending nudes," images she never created. At a party, someone displayed a photo of a naked body and identified it as the teenage girl. The emotional distress has been severe: once social and outgoing, the now 18-year-old has become reclusive, afraid to visit public places like grocery stores out of fear that others have seen the images, her parents told USA TODAY.

Her family is pursuing legal action against the school district through attorney Matthew Faranda-Diedrich, claiming the school failed to adequately investigate the deepfake scandal [2]. A Title IX investigation by the school found no proof that the accused perpetrators "actually circulated deepfake nudes and/or rumors of the same," according to a March 2026 letter reviewed by USA TODAY. The parents pulled their daughter from the public school after persistent bullying, and while she has made progress at a new private school, the family says the trauma continues to affect them all.

The cases reveal alarming gaps in how educational institutions handle AI-generated deepfakes. A 2024 survey of 3,170 K-12 students, teachers and parents by the Center for Democracy & Technology found that 6 in 10 teachers were unaware of school policies for addressing deepfake sexual images [2]. Only 16% reported that teacher training covered protecting the privacy of students depicted in deepfakes, and just 13% of students said their schools explained that sharing this type of AI-generated media harms the person depicted. These statistics expose a critical vulnerability: schools are struggling to keep pace with AI risks even as the technology becomes more accessible.

Evan Harris, a national expert on emerging AI risks in schools at Pathos Consulting Group, has conducted thousands of webinars helping educational institutions navigate this crisis [2]. The Pennsylvania family's father emphasized the urgency of awareness: "There's still a lack of awareness around that this can happen to anybody. We live in a very resourced school district, a very resourced area of the world. If it can happen here, it can happen anywhere." His observation underscores that deepfake abuse transcends socioeconomic boundaries, making parental awareness and robust school policies essential defenses.
The lawsuits paint a picture of lives fundamentally altered by digital harassment. Tennessee plaintiff Jane Doe 1 "feels acute anxiety about who has viewed these files online and feels a complete lack of control over the ongoing dissemination of the files," according to the lawsuit [1]. Two of the three Tennessee plaintiffs fear attending class, and all three report reputational damage when people believe the images are real. These aren't abstract harms; they represent the collapse of normal teenage life, where simple activities like going to school or posing for photos become sources of anxiety.

The long-term implications remain uncertain. As AI tools become more sophisticated and accessible, experts anticipate an increase in cyberbullying cases involving deepfakes. The legal landscape is evolving too: while child pornography laws exist, their application to AI-generated content remains contested. The xAI lawsuit could establish precedents for how platforms and AI companies bear responsibility for misuse of their technology. Schools, parents and policymakers should watch for emerging legislation specifically targeting "nudify" applications and non-consensual deepfake creation, as well as guidance on how Title IX applies to these cases. The short-term challenge is immediate: equipping schools with actionable protocols and ensuring parents understand that this threat exists in every community, regardless of resources or perceived safety.
Summarized by Navi