4 Sources
[1]
Teen Arrested After Asking ChatGPT How to Kill His Friend, Police Say
Over the past decade, as mass shootings have become depressingly common, school districts have increasingly invested in surveillance systems designed to monitor students' online activity. Recently, one of those systems pinged after a teen in Florida asked ChatGPT for advice about how to kill his friend, local police said. The episode occurred in Deland, Florida, where an unnamed 13-year-old student attending the city's Southwestern Middle School is alleged to have asked OpenAI's chatbot about "how to kill my friend in the middle of class." The question immediately set off an alert within a system that was monitoring school-issued computers. That system was run by a company called Gaggle, which provides safety services to school districts throughout the country. Soon, police were interviewing the teen, reports local NBC affiliate WFLA. The student told cops that he was "just trolling" a friend who had "annoyed him," the local outlet reports. Cops, of course, were less than enthused with the little troll. "Another 'joke' that created an emergency on campus," the Volusia County Sheriff's Office said. "Parents, please talk to your kids so they don't make the same mistake." The student was ultimately arrested and booked at the county jail, the outlet says. It's unclear what he has been charged with. Gizmodo reached out to the sheriff's office for more information. Gaggle's website describes itself as a safety solution for K-12 students, and it offers a variety of services. In a blog post, Gaggle describes how it uses web monitoring, which filters for various keywords (presumably "kill" is one of those keywords) to gain "visibility into browser use, including conversations with AI tools such as Google Gemini, ChatGPT, and other platforms." The company says that its system is designed to flag "concerning behavior tied to self-harm, violence, bullying, and more, and provides context with screen captures."
Gaggle clearly prioritizes student safety over all other considerations. On its website, the company dispenses with the subject of student privacy thusly: "Most educators and attorneys will tell you that when your child is using school-provided technology, there should be no expectation of privacy. In fact, your child's school is legally required by federal law (Children's Internet Protection Act) to protect children from accessing obscene or harmful content over the internet." Naturally, Gaggle has been criticized by privacy rights activists. "It has routinized law enforcement access and presence in students' lives, including in their home," Elizabeth Laird, a director at the Center for Democracy and Technology, recently told the Associated Press. The outlet also says that many of the safety alerts issued by Gaggle end up being false alarms. Increasingly, chatbots like ChatGPT are showing up in criminal cases involving mental health incidents. Episodes of so-called "AI psychosis," in which people with mental health problems engage with chatbots and seem to have their delusions exacerbated, have been on the rise. Several recent suicides have also been blamed on the chatbot. Gizmodo reached out to OpenAI for comment.
[2]
Student arrested for ChatGPT threat at Southwestern Middle
A Florida middle school student was arrested after entering a violent query into ChatGPT on a school device, triggering AI monitoring alerts and sparking renewed debate over surveillance, student privacy, and responsible technology use in schools. A 13-year-old student in Deland, Florida, was arrested after using a school-issued device to ask ChatGPT about harming a classmate. The query was immediately detected by a school monitoring system, which alerted security and local law enforcement. The incident took place at Southwestern Middle School. When the student entered the concerning message into OpenAI's ChatGPT, an AI-powered monitoring system called Gaggle flagged the content and notified school police officers. Officers from the Volusia County Sheriff's Office arrested the student, who claimed he was "just trolling" his friend. Authorities, however, treated the query as a serious threat. Social media footage showed the teenager being transported in restraints and booked into the county jail. "Another 'joke' that created an emergency on campus," the sheriff's office stated, warning parents to discuss responsible technology use and appropriate online behavior with their children. The Gaggle system used by the school is designed to detect and block inappropriate content on school devices. It identifies potentially harmful behavior, whether directed at oneself or others, to allow for rapid intervention from school authorities. However, such monitoring technologies are controversial. Gaggle has faced criticism for generating a high number of false alarms and fostering a surveillance-like environment in schools. Critics argue that these systems can infringe on student privacy in the name of enhancing safety. The incident highlights the complex challenges at the intersection of artificial intelligence, technology use, and student safety in modern educational settings.
[3]
Florida Teen's Arrest Exposes ChatGPT Privacy Concerns Most Users Might Ignore - Phandroid
A 13-year-old in DeLand, Florida learned a harsh lesson about ChatGPT privacy concerns. Police arrested the teen for asking the AI how to kill his friend. The teen used a school-issued laptop, which ran monitoring software called Gaggle that instantly flagged his query. Within hours, officers showed up and booked him. The friendly, conversational tone of AI chatbots tricks users into treating them like confidants. ChatGPT responds warmly, seems understanding, and feels personal. However, it's just software. Not a therapist bound by confidentiality. Not a friend who keeps secrets. Just a machine processing your prompts. And in many cases, those prompts get logged, monitored, or flagged. School monitoring software caught this incident, but the broader ChatGPT privacy concerns extend beyond educational settings. For instance, companies can monitor employee AI usage. Additionally, parents can check their kids' chat history. Meanwhile, law enforcement can request data. As a result, the conversational interface creates a false sense of privacy that simply doesn't exist. The teen claimed he was "just trolling" after his friend annoyed him. That excuse doesn't hold up in 2025, especially when monitoring systems exist specifically to catch these kinds of statements. The Volusia County Sheriff's Office issued warnings to parents, making it clear that what kids think are jokes can quickly become felonies. This case highlights a critical gap in how people understand AI chatbot interactions. The warm, helpful responses from ChatGPT create an illusion of a private conversation. ChatGPT privacy concerns aren't limited to school computers. Any workplace device, shared computer, or monitored network can expose your AI conversations. The technology feels intimate but operates with zero confidentiality protections. Users need to remember that every prompt, every question, every conversation with an AI exists as data that can be accessed, reviewed, or used against them.
The takeaway is simple: don't tell AI chatbots anything you wouldn't want read aloud in court or shown to your boss. Despite how friendly these AI tools feel, they're machines processing information, not confidential advisors protecting your secrets.
[4]
13-year-old boy asks ChatGPT a chilling question during class; minutes later, AI alert gets him arrested
A 13-year-old Florida student was arrested after typing "how to kill my friend in the middle of class" into ChatGPT on a school device, reports Futurism. The AI-powered monitoring tool Gaggle immediately flagged the message, alerting authorities who detained the boy at Southwestern Middle School in Deland. Though the teen claimed he was "just trolling," officials stressed the seriousness of the act, sparking debate over AI surveillance in schools. In what began as a seemingly ordinary day at a Florida middle school, a 13-year-old student's online curiosity took a shocking turn. According to a report by Futurism, the boy logged onto a school device and typed a disturbing query into OpenAI's ChatGPT: "How to kill my friend in the middle of class." Within moments, an AI-powered school safety program called Gaggle flagged the message and alerted authorities. A school resource officer immediately confronted the student at Southwestern Middle School in Deland, a city located about an hour north of Orlando, as reported by WFLA. The teenager allegedly told police he was "just trolling" his friend. However, school officials and local law enforcement didn't see humor in the statement -- particularly given the backdrop of America's recurring tragedies of school violence, including the 2018 Parkland shooting in Florida that claimed 17 lives. The Volusia County Sheriff's Office confirmed the student was arrested and booked into a juvenile detention facility. Video clips circulating on social media showed the boy in restraints as he was escorted from a police vehicle. The incident reignited debate around the use of surveillance technology in schools. Gaggle, the monitoring system responsible for detecting the student's ChatGPT query, is designed to track alarming behavior on school-issued devices and alert authorities in real time. While such tools are credited with preventing potential threats, critics argue they create a "digital surveillance state" within learning spaces. 
Gaggle has faced controversy for issuing false alarms and being accused of policing students' private thoughts rather than addressing root causes of behavior. Responding to the arrest, the Volusia County Sheriff's Office urged parents to discuss responsible online behavior with their children. "Another 'joke' that created an emergency on campus," the department said in a statement quoted by WFLA. "Parents, please talk to your kids so they don't make the same mistake."
A 13-year-old student in Florida was arrested after asking ChatGPT how to kill his friend, sparking debates on AI surveillance in schools and privacy concerns in AI interactions.

A 13-year-old student at Southwestern Middle School in Deland, Florida, found himself in serious trouble after asking ChatGPT a disturbing question: "How to kill my friend in the middle of class." The query, made on a school-issued device, triggered an immediate alert from an AI-powered monitoring system called Gaggle, leading to the student's arrest and booking into a juvenile detention facility [1][2].

The incident has brought attention to the increasing use of AI surveillance systems in educational settings. Gaggle, the company behind the monitoring software, describes itself as a safety solution for K-12 students. It uses web monitoring to filter for specific keywords and gain visibility into browser use, including conversations with AI tools like ChatGPT [1].

While such systems are designed to enhance student safety, they have faced criticism from privacy advocates. Elizabeth Laird, a director at the Center for Democracy and Technology, expressed concerns about the routinization of law enforcement access in students' lives, including in their homes [1].

When confronted by police, the student claimed he was "just trolling" a friend who had annoyed him. However, authorities took the matter seriously, with the Volusia County Sheriff's Office stating, "Another 'joke' that created an emergency on campus" [1][4].

This incident highlights a critical gap in how people understand AI chatbot interactions. The conversational nature of AI tools like ChatGPT can create a false sense of privacy and confidentiality [3].

Experts warn that the friendly tone of AI chatbots can trick users into treating them like confidants. However, these are just machines processing information without any confidentiality protections. Users need to be aware that their interactions with AI can be logged, monitored, or flagged, and potentially used against them [3].

The incident has prompted calls for better education on responsible technology use. The Volusia County Sheriff's Office urged parents to discuss appropriate online behavior with their children to prevent similar incidents in the future [4].
.As AI becomes increasingly integrated into daily life, understanding the limitations of AI privacy and the potential consequences of careless interactions with these tools becomes crucial for users of all ages.
Summarized by Navi