3 Sources
[1]
Teen Arrested After Asking ChatGPT How to Kill His Friend, Police Say
Over the past decade, as mass shootings have become depressingly common, school districts have increasingly invested in surveillance systems designed to monitor students' online activity. Recently, one of those systems pinged after a teen in Florida asked ChatGPT for advice about how to kill his friend, local police said. The episode occurred in Deland, Florida, where an unnamed 13-year-old student attending the city's Southwestern Middle School is alleged to have asked OpenAI's chatbot about "how to kill my friend in the middle of class." The question immediately set off an alert within a system that was monitoring school-issued computers. That system was run by a company called Gaggle, which provides safety services to school districts throughout the country. Soon, police were interviewing the teen, reports local NBC affiliate WFLA. The student told cops that he was "just trolling" a friend who had "annoyed him," the local outlet reports. Cops, of course, were less than enthused with the little troll. "Another 'joke' that created an emergency on campus," the Volusia County Sheriff's Office said. "Parents, please talk to your kids so they don't make the same mistake." The student was ultimately arrested and booked at the county jail, the outlet says. It's unclear what he has been charged with. Gizmodo reached out to the sheriff's office for more information. Gaggle's website describes itself as a safety solution for K-12 students, and it offers a variety of services. In a blog post, Gaggle describes how it uses web monitoring, which filters for various keywords (presumably "kill" is one of those keywords) to gain "visibility into browser use, including conversations with AI tools such as Google Gemini, ChatGPT, and other platforms." The company says that its system is designed to flag "concerning behavior tied to self-harm, violence, bullying, and more, and provides context with screen captures."
Gaggle clearly prioritizes student safety over all other considerations. On its website, the company dispenses with the subject of student privacy thusly: "Most educators and attorneys will tell you that when your child is using school-provided technology, there should be no expectation of privacy. In fact, your child's school is legally required by federal law (Children's Internet Protection Act) to protect children from accessing obscene or harmful content over the internet." Naturally, Gaggle has been criticized by privacy rights activists. "It has routinized law enforcement access and presence in students' lives, including in their home," Elizabeth Laird, a director at the Center for Democracy and Technology, recently told the Associated Press. The outlet also says that many of the safety alerts issued by Gaggle end up being false alarms. Increasingly, chatbots like ChatGPT are showing up in criminal cases involving mental health incidents. Episodes of so-called "AI psychosis," in which people with mental health problems engage with chatbots and seem to have their delusions exacerbated, have been on the rise. Several recent suicides have also been blamed on the chatbot. Gizmodo reached out to OpenAI for comment.
[2]
Student arrested for ChatGPT threat at Southwestern Middle
A Florida middle school student was arrested after entering a violent query into ChatGPT on a school device, triggering AI monitoring alerts and sparking renewed debate over surveillance, student privacy, and responsible technology use in schools. A 13-year-old student in Deland, Florida, was arrested after using a school-issued device to ask ChatGPT about harming a classmate. The query was immediately detected by a school monitoring system, which alerted security and local law enforcement. The incident took place at Southwestern Middle School. When the student entered the concerning message into OpenAI's ChatGPT, an AI-powered monitoring system called Gaggle flagged the content and notified school police officers. Officers from the Volusia County Sheriff's Office arrested the student, who claimed he was "just trolling" his friend. Authorities, however, treated the query as a serious threat. Social media footage showed the teenager being transported in restraints and booked into the county jail. "Another 'joke' that created an emergency on campus," the sheriff's office stated, warning parents to discuss responsible technology use and appropriate online behavior with their children. The Gaggle system used by the school is designed to detect and block inappropriate content on school devices. It identifies potentially harmful behavior, whether directed at oneself or others, to allow for rapid intervention from school authorities. However, such monitoring technologies are controversial. Gaggle has faced criticism for generating a high number of false alarms and fostering a surveillance-like environment in schools. Critics argue that these systems can infringe on student privacy in the name of enhancing safety. The incident highlights the complex challenges at the intersection of artificial intelligence, technology use, and student safety in modern educational settings.
[3]
13-year-old boy asks ChatGPT a chilling question during class; minutes later, AI alert gets him arrested
A 13-year-old Florida student was arrested after typing "how to kill my friend in the middle of class" into ChatGPT on a school device, reports Futurism. The AI-powered monitoring tool Gaggle immediately flagged the message, alerting authorities who detained the boy at Southwestern Middle School in Deland. Though the teen claimed he was "just trolling," officials stressed the seriousness of the act, sparking debate over AI surveillance in schools. In what began as a seemingly ordinary day at a Florida middle school, a 13-year-old student's online curiosity took a shocking turn. According to a report by Futurism, the boy logged onto a school device and typed a disturbing query into OpenAI's ChatGPT: "How to kill my friend in the middle of class." Within moments, an AI-powered school safety program called Gaggle flagged the message and alerted authorities. A school resource officer immediately confronted the student at Southwestern Middle School in Deland, a city located about an hour north of Orlando, as reported by WFLA. The teenager allegedly told police he was "just trolling" his friend. However, school officials and local law enforcement didn't see humor in the statement -- particularly given the backdrop of America's recurring tragedies of school violence, including the 2018 Parkland shooting in Florida that claimed 17 lives. The Volusia County Sheriff's Office confirmed the student was arrested and booked into a juvenile detention facility. Video clips circulating on social media showed the boy in restraints as he was escorted from a police vehicle. The incident reignited debate around the use of surveillance technology in schools. Gaggle, the monitoring system responsible for detecting the student's ChatGPT query, is designed to track alarming behavior on school-issued devices and alert authorities in real time. While such tools are credited with preventing potential threats, critics argue they create a "digital surveillance state" within learning spaces. 
Gaggle has faced controversy for issuing false alarms and being accused of policing students' private thoughts rather than addressing root causes of behavior. Responding to the arrest, the Volusia County Sheriff's Office urged parents to discuss responsible online behavior with their children. "Another 'joke' that created an emergency on campus," the department said in a statement quoted by WFLA. "Parents, please talk to your kids so they don't make the same mistake."
A 13-year-old student in Florida was arrested after asking ChatGPT how to kill his friend during class, triggering an AI-powered school monitoring system. The incident has reignited debates on student surveillance and online safety in schools.
A 13-year-old student at Southwestern Middle School in Deland, Florida, found himself in hot water after asking ChatGPT a disturbing question: "How to kill my friend in the middle of class" [1][2]. This seemingly innocent act of online curiosity quickly escalated into a serious incident, triggering an AI-powered monitoring system and leading to the student's arrest.
The student's query, entered into OpenAI's ChatGPT on a school-issued device, was instantly flagged by Gaggle, an AI-powered safety solution used by the school district [1]. Gaggle's system, designed to monitor students' online activity, immediately alerted school authorities and local law enforcement [2].
When confronted, the student claimed he was "just trolling" a friend who had annoyed him [1]. However, authorities took the matter seriously, treating it as a potential threat. The Volusia County Sheriff's Office arrested the teenager, who was subsequently booked into the county jail [3].
Gaggle, the monitoring system responsible for detecting the student's query, is part of a growing trend of surveillance technologies in schools [1]. These systems are designed to flag concerning behavior related to self-harm, violence, bullying, and other potential threats [1].
The company's website states that when students use school-provided technology, there should be no expectation of privacy [1]. This stance is supported by federal law, specifically the Children's Internet Protection Act, which requires schools to protect children from accessing harmful content online [1].
While these monitoring systems are credited with preventing potential threats, they have also sparked controversy [2][3]. Critics argue that such technologies create a "digital surveillance state" within learning spaces, potentially infringing on student privacy [3].
Elizabeth Laird, a director at the Center for Democracy and Technology, expressed concern that these systems have "routinized law enforcement access and presence in students' lives, including in their home" [1]. Additionally, reports suggest that many of the safety alerts issued by such systems end up being false alarms [1].
This incident highlights the complex challenges at the intersection of artificial intelligence, technology use, and student safety in modern educational settings [2]. It underscores the need for balanced approaches that ensure safety without compromising privacy or creating an overly restrictive learning environment. As AI continues to play a larger role in education and daily life, incidents like these prompt important discussions about responsible technology use, digital literacy, and the ethical implications of AI-powered surveillance in schools.
Summarized by Navi