4 Sources
[1]
The rise of deepfake cyberbullying poses a growing problem for schools
Schools are facing a growing problem of students using artificial intelligence to transform innocent images of classmates into sexually explicit deepfakes. The fallout from the spread of the manipulated photos and videos can create a nightmare for the victims.

The challenge for schools was highlighted this fall when AI-generated nude images swept through a Louisiana middle school. Two boys ultimately were charged, but not before one of the victims was expelled for starting a fight with a boy she accused of creating the images of her and her friends.

"While the ability to alter images has been available for decades, the rise of A.I. has made it easier for anyone to alter or create such images with little to no training or experience," Lafourche Parish Sheriff Craig Webre said in a news release. "This incident highlights a serious concern that all parents should address with their children."

Here are key takeaways from AP's story on the rise of AI-generated nude images and how schools are responding.

The prosecution stemming from the Louisiana middle school deepfakes is believed to be the first under the state's new law, said Republican state Sen. Patrick Connick, who authored the legislation. The law is one of many across the country taking aim at deepfakes. In 2025, at least half the states enacted legislation addressing the use of generative AI to create seemingly realistic, but fabricated, images and sounds, according to the National Conference of State Legislatures. Some of the laws address simulated child sexual abuse material.

Students also have been prosecuted in Florida and Pennsylvania and expelled in places like California. One fifth grade teacher in Texas also was charged with using AI to create child pornography of his students.

Deepfakes started as a way to humiliate political opponents and young starlets. Until the past few years, people needed some technical skills to make them realistic, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue. "Now, you can do it on an app, you can download it on social media, and you don't have to have any technical expertise whatsoever," he said.

He described the scope of the problem as staggering. The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its cyber tipline soared from 4,700 in 2023 to 440,000 in just the first six months of 2025.

Sameer Hinduja, the co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and get better at explaining them. That way, he said, "students don't think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity."

He said many parents assume that schools are addressing the issue when they aren't. "So many of them are just so unaware and so ignorant," said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University. "We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn't happening amongst their youth."

AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said. Many victims become depressed and anxious, he said.

"They literally shut down because it makes it feel like, you know, there's no way they can even prove that this is not real -- because it does look 100% real," he said.

Parents can start the conversation by casually asking their kids if they've seen any funny fake videos online, Alexander said. Take a moment to laugh at some of them, like Bigfoot chasing after hikers, he said. From there, parents can ask their kids, "Have you thought about what it would be like if you were in this video, even the funny one?" And then parents can ask if a classmate has made a fake video, even an innocuous one. "Based on the numbers, I guarantee they'll say that they know someone," he said.

If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, who is the founder and CEO of The Social Institute, which educates people on responsible social media use and has helped schools develop policies. She said many kids fear their parents will overreact or take their phones away.

She uses the acronym SHIELD as a roadmap for how to respond. The "S" stands for "stop" and don't forward. "H" is for "huddle" with a trusted adult. The "I" is for "inform" any social media platforms on which the image is posted. "E" is a cue to collect "evidence," like who is spreading the image, but not to download anything. The "L" is for "limit" social media access. The "D" is a reminder to "direct" victims to help. "The fact that that acronym is six steps I think shows that this issue is really complicated," she said.

___

The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
[2]
Boys at her school shared AI-generated, nude images of her. After a fight, she was the one expelled
THIBODAUX, La. (AP) -- The teasing was relentless. Nude images of a 13-year-old girl and her friends, generated by artificial intelligence, were circulating on social media and had become the talk of a Louisiana middle school.

The girls begged for help, first from a school guidance counselor and then from a sheriff's deputy assigned to their school. But the images were shared on Snapchat, an app that deletes messages seconds after they're viewed, and the adults couldn't find them. The principal had doubts they even existed.

Among the kids, the pictures were still spreading. When the 13-year-old girl stepped onto the Lafourche Parish school bus at the end of the day, a classmate was showing one of them to a friend. "That's when I got angry," the eighth grader recalled at her discipline hearing.

Fed up, she attacked a boy on the bus, inviting others to join her. She was kicked out of Sixth Ward Middle School for more than 10 weeks and sent to an alternative school. She said the boy whom she and her friends suspected of creating the images wasn't sent to that alternative school with her. The 13-year-old girl's attorneys allege he avoided school discipline altogether.

When the sheriff's department looked into the case, they took the opposite actions. They charged two of the boys who'd been accused of sharing explicit images -- and not the girl.

The Louisiana episode highlights the nightmarish potential of AI deepfakes. They can, and do, upend children's lives -- at school, and at home. And while schools are working to address artificial intelligence in classroom instruction, they often have done little to prepare for what the new tech means for cyberbullying and harassment.

Once again, as kids increasingly use new tech to hurt one another, adults are behind the curve, said Sergio Alexander, a research associate at Texas Christian University focused on emerging technology. "When we ignore the digital harm, the only moment that becomes visible is when the victim finally breaks," Alexander said.

In Lafourche Parish, the school district followed all its protocols for reporting misconduct, Superintendent Jarod Martin said in a statement. He said a "one-sided story" had been presented of the case that fails to illustrate its "totality and complex nature."

After hearing rumors about the nude images, the 13-year-old said she marched with two friends -- one nearly in tears -- to the guidance counselor around 7 a.m. on Aug. 26. The Associated Press isn't naming her because she is a minor and because AP doesn't normally name victims of sexual crimes. She was there for moral support, not initially realizing there were images of her, too, according to testimony at her school disciplinary hearing.

Ultimately, the weeks-long investigation at the school in Thibodaux, about 45 miles (72 kilometers) southwest of New Orleans, uncovered AI-generated nude images of eight female middle school students and two adults, the district and sheriff's office said in a joint statement. "Full nudes with her face put on them" is how the girl's father, Joseph Daniels, described them.

Until recently, it took some technical skill to make realistic deepfakes. Technology now makes it easy to pluck a photo off social media, "nudify" it and create a viral nightmare for an unsuspecting classmate. Most schools are "just kind of burying their heads in the sand, hoping that this isn't happening," said Sameer Hinduja, co-director of the Cyberbullying Research Center and professor of criminology at Florida Atlantic University.

Lafourche Parish School District was just starting to develop policies on artificial intelligence. The school-level AI guidance mainly addressed academics, according to documents provided through a records request. The district also hadn't updated its training on cyberbullying to reflect the threat of AI-generated, sexually explicit images. The curriculum its schools used was from 2018.

Although the girls at Sixth Ward Middle School hadn't seen the images firsthand, they heard about them from boys at school. Based on those conversations, the girls accused a classmate and two students from other schools of creating and spreading the nudes on Snapchat and possibly TikTok.

The principal, Danielle Coriell, said an investigation came up cold that day as no student took responsibility. The deputy assigned to the school searched social media for the images unsuccessfully, according to a recording of the disciplinary hearing. "I was led to believe that this was just hearsay and rumors," the girl's father said, recounting a conversation he had that morning with the school counselor.

But the girl was miserable, and a police incident report showed more girls were reporting that they were victims, too. The 13-year-old returned to the counselor in the afternoon, asking to call her father. She said she was refused. Her father says she sent a text message that said, "Dad," and nothing else. They didn't talk. With the mocking unrelenting, the girl texted her sister, "It's not getting handled."

As the school day wound down, the principal was skeptical. At the disciplinary hearing, the girl's attorney asked why the sheriff's deputy didn't check the phone of the boy the girls were accusing and why he was allowed on the same bus as the girl. "Kids lie a lot," responded Coriell, the principal. "They lie about all kinds of things. They blow lots of things out of proportion on a daily basis. In 17 years, they do it all the time. So to my knowledge, at 2 o'clock when I checked again, there were no pictures."

When the girl stepped onto the bus 15 minutes later, the boy was showing the AI-generated images to a friend. Fake nude images of her friends were visible on the boy's phone, the girl said, a claim backed up by a photo taken on the bus. A video from the school bus showed at least a half-dozen students circulating the images, said Martin, the superintendent, at a school board meeting.

"I went the whole day with getting bullied and getting made fun of about my body," the girl said at her hearing. When she boarded the bus, she said, anger was building up.

After seeing the boy and his phone, she slapped him, said Coriell, the principal. The boy shrugged off the slap, a video shows. She hit him a second time. Then, the principal said, the girl asked aloud: "Why am I the only one doing this?" Two classmates hit the boy, the principal said, before the 13-year-old climbed over a seat and punched and stomped on him. Video of the fight was posted on Facebook.

"Overwhelming social media sentiment was one of outrage and a demand that the students involved in the fight be held accountable," the district and sheriff's office said in their joint statement released in November.

The girl had no past disciplinary problems, but she was assigned to an alternative school as the district moved to expel her for a full semester -- 89 school days.

It was on the day of the girl's disciplinary hearing, three weeks after the fight, that the first of the boys was charged. The student was charged with 10 counts of unlawful dissemination of images created by artificial intelligence under a new Louisiana state law, part of a wave of such legislation around the country. A second boy was charged in December with identical charges, the sheriff's department said. Neither was identified by authorities because of their ages. The girl would face no charges because of what the sheriff's office described as the "totality of the circumstances."

At the disciplinary hearing, the principal refused to answer questions from the girl's attorneys about what kind of school discipline the boy would face. The district said in a statement that federal student privacy laws prohibit it from discussing individual students' disciplinary records. Gregory Miller, an attorney for the girl, said he has no knowledge of any school discipline for the classmate accused of sharing the images.

Ultimately, the panel expelled the 13-year-old. She wept, her father said. "She just felt like she was victimized multiple times -- by the pictures and by the school not believing her and by them putting her on a bus and then expelling her for her actions," he said in an interview.

After she was sent to the alternative school, the girl started skipping meals, her father said. Unable to concentrate, she completed none of the school's online work for several days before her father got her into therapy for depression and anxiety. Nobody initially noticed when she stopped doing her assignments, her father said. "She kind of got left behind," he said.

Her attorneys appealed to the school board, and another hearing was scheduled for seven weeks later. By then, so much time had passed that she could have returned to her old school on probation. But because she'd missed assignments before getting treated for depression, the district wanted her to remain at the alternative site another 12 weeks.

For students who are suspended or expelled, the impact can last years. They're more likely to be suspended again. They become disconnected from their classmates, and they're more likely to become disengaged from school. They're more likely to have lower grades and lower graduation rates.

"She's already been out of school enough," one of the girl's attorneys, Matt Ory, told the board on Nov. 5. "She is a victim." "She," he repeated, "is a victim." Martin, the superintendent, countered: "Sometimes in life we can be both victims and perpetrators."

But the board was swayed. One member, Henry Lafont, said: "There are a lot of things in that video that I don't like. But I'm also trying to put into perspective what she went through all day." They allowed her to return to campus immediately.

Her first day back at school was Nov. 7, although she will remain on probation until Jan. 29. That means no dances, no sports and no extracurricular activities. She already missed out on basketball tryouts, meaning she won't be able to play this season, her father said. He finds the situation "heartbreaking."

"I was hoping she would make great friends, they would go to the high school together and, you know, it'd keep everybody out of trouble on the right tracks," her father said. "I think they ruined that."

___

The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
[3]
Boys at her school shared AI-generated, nude images of her. After a fight, she was the one expelled
THIBODAUX, La. (AP) -- The teasing was relentless. Nude images of a 13-year-old girl and her friends, generated by artificial intelligence, were circulating on social media and had become the talk of a Louisiana middle school.

The girls begged for help, first from a school guidance counselor and then from a sheriff's deputy assigned to their school. But the images were shared on Snapchat, an app that deletes messages seconds after they're viewed, and the adults couldn't find them. The principal had doubts they even existed.

Among the kids, the pictures were still spreading. When the 13-year-old girl stepped onto the Lafourche Parish school bus at the end of the day, a classmate was showing one of them to a friend. "That's when I got angry," the eighth grader recalled at her discipline hearing.

Fed up, she attacked a boy on the bus, inviting others to join her. She was kicked out of Sixth Ward Middle School for more than 10 weeks and sent to an alternative school. She said the boy whom she and her friends suspected of creating the images wasn't sent to that alternative school with her. The 13-year-old girl's attorneys allege he avoided school discipline altogether.

When the sheriff's department looked into the case, they took the opposite actions. They charged two of the boys who'd been accused of sharing explicit images -- and not the girl.

The Louisiana episode highlights the nightmarish potential of AI deepfakes. They can, and do, upend children's lives -- at school, and at home. And while schools are working to address artificial intelligence in classroom instruction, they often have done little to prepare for what the new tech means for cyberbullying and harassment.

Once again, as kids increasingly use new tech to hurt one another, adults are behind the curve, said Sergio Alexander, a research associate at Texas Christian University focused on emerging technology. "When we ignore the digital harm, the only moment that becomes visible is when the victim finally breaks," Alexander said.

In Lafourche Parish, the school district followed all its protocols for reporting misconduct, Superintendent Jarod Martin said in a statement. He said a "one-sided story" had been presented of the case that fails to illustrate its "totality and complex nature."

A girl's nightmare begins with rumors

After hearing rumors about the nude images, the 13-year-old said she marched with two friends -- one nearly in tears -- to the guidance counselor around 7 a.m. on Aug. 26. The Associated Press isn't naming her because she is a minor and because AP doesn't normally name victims of sexual crimes. She was there for moral support, not initially realizing there were images of her, too, according to testimony at her school disciplinary hearing.

Ultimately, the weeks-long investigation at the school in Thibodaux, about 45 miles (72 kilometers) southwest of New Orleans, uncovered AI-generated nude images of eight female middle school students and two adults, the district and sheriff's office said in a joint statement. "Full nudes with her face put on them" is how the girl's father, Joseph Daniels, described them.

Until recently, it took some technical skill to make realistic deepfakes. Technology now makes it easy to pluck a photo off social media, "nudify" it and create a viral nightmare for an unsuspecting classmate. Most schools are "just kind of burying their heads in the sand, hoping that this isn't happening," said Sameer Hinduja, co-director of the Cyberbullying Research Center and professor of criminology at Florida Atlantic University.

Lafourche Parish School District was just starting to develop policies on artificial intelligence. The school-level AI guidance mainly addressed academics, according to documents provided through a records request. The district also hadn't updated its training on cyberbullying to reflect the threat of AI-generated, sexually explicit images. The curriculum its schools used was from 2018.

A school investigation hits obstacles

Although the girls at Sixth Ward Middle School hadn't seen the images firsthand, they heard about them from boys at school. Based on those conversations, the girls accused a classmate and two students from other schools of creating and spreading the nudes on Snapchat and possibly TikTok.

The principal, Danielle Coriell, said an investigation came up cold that day as no student took responsibility. The deputy assigned to the school searched social media for the images unsuccessfully, according to a recording of the disciplinary hearing. "I was led to believe that this was just hearsay and rumors," the girl's father said, recounting a conversation he had that morning with the school counselor.

But the girl was miserable, and a police incident report showed more girls were reporting that they were victims, too. The 13-year-old returned to the counselor in the afternoon, asking to call her father. She said she was refused. Her father says she sent a text message that said, "Dad," and nothing else. They didn't talk. With the mocking unrelenting, the girl texted her sister, "It's not getting handled."

As the school day wound down, the principal was skeptical. At the disciplinary hearing, the girl's attorney asked why the sheriff's deputy didn't check the phone of the boy the girls were accusing and why he was allowed on the same bus as the girl. "Kids lie a lot," responded Coriell, the principal. "They lie about all kinds of things. They blow lots of things out of proportion on a daily basis. In 17 years, they do it all the time. So to my knowledge, at 2 o'clock when I checked again, there were no pictures."

A fight breaks out on the school bus

When the girl stepped onto the bus 15 minutes later, the boy was showing the AI-generated images to a friend. Fake nude images of her friends were visible on the boy's phone, the girl said, a claim backed up by a photo taken on the bus. A video from the school bus showed at least a half-dozen students circulating the images, said Martin, the superintendent, at a school board meeting.

"I went the whole day with getting bullied and getting made fun of about my body," the girl said at her hearing. When she boarded the bus, she said, anger was building up.

After seeing the boy and his phone, she slapped him, said Coriell, the principal. The boy shrugged off the slap, a video shows. She hit him a second time. Then, the principal said, the girl asked aloud: "Why am I the only one doing this?" Two classmates hit the boy, the principal said, before the 13-year-old climbed over a seat and punched and stomped on him. Video of the fight was posted on Facebook.

"Overwhelming social media sentiment was one of outrage and a demand that the students involved in the fight be held accountable," the district and sheriff's office said in their joint statement released in November.

The girl had no past disciplinary problems, but she was assigned to an alternative school as the district moved to expel her for a full semester -- 89 school days.

Weeks later, a boy is charged

It was on the day of the girl's disciplinary hearing, three weeks after the fight, that the first of the boys was charged. The student was charged with 10 counts of unlawful dissemination of images created by artificial intelligence under a new Louisiana state law, part of a wave of such legislation around the country. A second boy was charged in December with identical charges, the sheriff's department said. Neither was identified by authorities because of their ages. The girl would face no charges because of what the sheriff's office described as the "totality of the circumstances."

At the disciplinary hearing, the principal refused to answer questions from the girl's attorneys about what kind of school discipline the boy would face. The district said in a statement that federal student privacy laws prohibit it from discussing individual students' disciplinary records. Gregory Miller, an attorney for the girl, said he has no knowledge of any school discipline for the classmate accused of sharing the images.

Ultimately, the panel expelled the 13-year-old. She wept, her father said. "She just felt like she was victimized multiple times -- by the pictures and by the school not believing her and by them putting her on a bus and then expelling her for her actions," he said in an interview.

The fallout sends a student off course

After she was sent to the alternative school, the girl started skipping meals, her father said. Unable to concentrate, she completed none of the school's online work for several days before her father got her into therapy for depression and anxiety. Nobody initially noticed when she stopped doing her assignments, her father said. "She kind of got left behind," he said.

Her attorneys appealed to the school board, and another hearing was scheduled for seven weeks later. By then, so much time had passed that she could have returned to her old school on probation. But because she'd missed assignments before getting treated for depression, the district wanted her to remain at the alternative site another 12 weeks.

For students who are suspended or expelled, the impact can last years. They're more likely to be suspended again. They become disconnected from their classmates, and they're more likely to become disengaged from school. They're more likely to have lower grades and lower graduation rates.

"She's already been out of school enough," one of the girl's attorneys, Matt Ory, told the board on Nov. 5. "She is a victim." "She," he repeated, "is a victim." Martin, the superintendent, countered: "Sometimes in life we can be both victims and perpetrators."

But the board was swayed. One member, Henry Lafont, said: "There are a lot of things in that video that I don't like. But I'm also trying to put into perspective what she went through all day." They allowed her to return to campus immediately.

Her first day back at school was Nov. 7, although she will remain on probation until Jan. 29. That means no dances, no sports and no extracurricular activities. She already missed out on basketball tryouts, meaning she won't be able to play this season, her father said. He finds the situation "heartbreaking."

"I was hoping she would make great friends, they would go to the high school together and, you know, it'd keep everybody out of trouble on the right tracks," her father said. "I think they ruined that."

___

The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

___

Hollingsworth reported from Mission, Kansas.
[4]
The Rise of Deepfake Cyberbullying Poses a Growing Problem for Schools
Schools are facing a growing problem of students using artificial intelligence to transform innocent images of classmates into sexually explicit deepfakes. The fallout from the spread of the manipulated photos and videos can create a nightmare for the victims.

The challenge for schools was highlighted this fall when AI-generated nude images swept through a Louisiana middle school. Two boys ultimately were charged, but not before one of the victims was expelled for starting a fight with a boy she accused of creating the images of her and her friends.

"While the ability to alter images has been available for decades, the rise of A.I. has made it easier for anyone to alter or create such images with little to no training or experience," Lafourche Parish Sheriff Craig Webre said in a news release. "This incident highlights a serious concern that all parents should address with their children."

Here are key takeaways from AP's story on the rise of AI-generated nude images and how schools are responding.

More states pass laws to address deepfakes

The prosecution stemming from the Louisiana middle school deepfakes is believed to be the first under the state's new law, said Republican state Sen. Patrick Connick, who authored the legislation. The law is one of many across the country taking aim at deepfakes. In 2025, at least half the states enacted legislation addressing the use of generative AI to create seemingly realistic, but fabricated, images and sounds, according to the National Conference of State Legislatures. Some of the laws address simulated child sexual abuse material.

Students also have been prosecuted in Florida and Pennsylvania and expelled in places like California. One fifth grade teacher in Texas also was charged with using AI to create child pornography of his students.

Deepfakes become easier to create as technology evolves

Deepfakes started as a way to humiliate political opponents and young starlets. Until the past few years, people needed some technical skills to make them realistic, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue. "Now, you can do it on an app, you can download it on social media, and you don't have to have any technical expertise whatsoever," he said.

He described the scope of the problem as staggering. The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its cyber tipline soared from 4,700 in 2023 to 440,000 in just the first six months of 2025.

Experts fear schools aren't doing enough

Sameer Hinduja, the co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and get better at explaining them. That way, he said, "students don't think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity."

He said many parents assume that schools are addressing the issue when they aren't. "So many of them are just so unaware and so ignorant," said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University. "We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn't happening amongst their youth."

Trauma from AI deepfakes can be particularly harmful

AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said. Many victims become depressed and anxious, he said. "They literally shut down because it makes it feel like, you know, there's no way they can even prove that this is not real -- because it does look 100% real," he said.

Parents are encouraged to talk to students

Parents can start the conversation by casually asking their kids if they've seen any funny fake videos online, Alexander said. Take a moment to laugh at some of them, like Bigfoot chasing after hikers, he said. From there, parents can ask their kids, "Have you thought about what it would be like if you were in this video, even the funny one?" And then parents can ask if a classmate has made a fake video, even an innocuous one. "Based on the numbers, I guarantee they'll say that they know someone," he said.

If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, who is the founder and CEO of The Social Institute, which educates people on responsible social media use and has helped schools develop policies. She said many kids fear their parents will overreact or take their phones away.

She uses the acronym SHIELD as a roadmap for how to respond. The "S" stands for "stop" and don't forward. "H" is for "huddle" with a trusted adult. The "I" is for "inform" any social media platforms on which the image is posted. "E" is a cue to collect "evidence," like who is spreading the image, but not to download anything. The "L" is for "limit" social media access. The "D" is a reminder to "direct" victims to help. "The fact that that acronym is six steps I think shows that this issue is really complicated," she said.

___

The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
A Louisiana middle school incident revealed how AI-generated nude images can devastate students while schools struggle to respond. Reports of AI-generated child sexual abuse images skyrocketed from 4,700 in 2023 to 440,000 in just the first half of 2025. The case highlights urgent questions about school preparedness, legal accountability, and the need for updated policies to address this growing technological threat.
A disturbing incident at Sixth Ward Middle School in Lafourche Parish, Louisiana, has exposed the growing problem schools face with deepfake cyberbullying. In August, AI-generated nude images of eight female students and two adults circulated among students, primarily through Snapchat [1]. The sexually explicit images, created by transforming innocent photos into fabricated nudes, spread rapidly before adults could intervene. One 13-year-old victim, fed up with relentless teasing and an inadequate school response, attacked a boy on the school bus whom she suspected of creating the images [2]. She was expelled for more than 10 weeks and sent to an alternative school, while her attorneys allege the boy avoided school discipline altogether. Two boys were ultimately charged under Louisiana's new law on the unlawful dissemination of AI-generated images, marking what is believed to be the first prosecution under the state legislation [4].
The ease of creating deepfakes has transformed dramatically. Until recently, producing realistic manipulated images required technical expertise. "Now, you can do it on an app, you can download it on social media, and you don't have to have any technical expertise whatsoever," said Sergio Alexander, a research associate at Texas Christian University [1]. The scope of the problem is staggering: the National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its cyber tipline soared from 4,700 in 2023 to 440,000 in just the first six months of 2025 [4]. This exponential increase reflects how artificial intelligence tools have democratized the creation of harmful content, allowing students to pluck photos from social media platforms, "nudify" them, and create viral nightmares for unsuspecting classmates [3].

The Lafourche Parish incident revealed critical gaps in how schools prepare for deepfake cyberbullying. The district was just starting to develop policies on artificial intelligence, with school-level guidance mainly addressing academics rather than harassment [2]. The district hadn't updated its cyberbullying training to reflect AI-generated threats, relying on a curriculum from 2018. When the girls sought help from a guidance counselor and a sheriff's deputy, the adults couldn't locate the images because they were circulated among students on Snapchat, which deletes messages seconds after viewing [3]. The principal initially doubted the images even existed. Sameer Hinduja, co-director of the Cyberbullying Research Center and professor at Florida Atlantic University, said most schools are "just kind of burying their heads in the sand, hoping that this isn't happening" [3]. He recommends schools update their policies on AI-generated deepfakes and communicate them clearly so "students don't think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity" [1].

In 2025, at least half the states enacted legislation addressing the use of generative AI to create fabricated images and sounds, according to the National Conference of State Legislatures [1]. Some laws specifically address simulated child sexual abuse material. Students have been prosecuted in Florida and Pennsylvania and expelled in places like California. One fifth-grade teacher in Texas was charged with using AI to create child pornography of his students [4]. Republican state Sen. Patrick Connick, who authored Louisiana's legislation, said the Lafourche Parish prosecution is believed to be the first under the state's new law [4]. These charges signal a shift in how authorities treat AI-generated sexually explicit images, recognizing them as serious criminal offenses rather than mere pranks.

AI deepfakes inflict unique psychological damage compared to traditional bullying. Instead of a nasty text or rumor, victims face videos or images that often go viral and continue to resurface, creating a cycle of trauma [1]. Many victims become depressed and anxious. "They literally shut down because it makes it feel like, you know, there's no way they can even prove that this is not real -- because it does look 100% real," Alexander explained [4]. The 13-year-old Louisiana victim described relentless teasing, with the AI-generated nude images becoming the talk of the school [2]. Her father, Joseph Daniels, described them as "full nudes with her face put on them" [3]. Alexander noted that "when we ignore the digital harm, the only moment that becomes visible is when the victim finally breaks" [2].

Experts emphasize the critical role of parental guidance in addressing deepfake threats. Laura Tierney, founder and CEO of The Social Institute, which helps schools develop policies, stresses that children need to know they can discuss encounters with deepfakes without fear of punishment [4]. Many kids fear parents will overreact or confiscate their phones. Alexander recommends parents start conversations casually by asking if their children have seen funny fake videos online, then gradually steering toward more serious scenarios [1]. Tierney developed the SHIELD acronym as a response framework: Stop and don't forward; Huddle with a trusted adult; Inform social media platforms; collect Evidence; Limit social media access; and Direct victims to help [4]. Hinduja noted that many parents incorrectly assume schools are addressing the issue when they aren't, creating dangerous gaps in protection for students.