3 Sources
[1]
Tumbler Ridge suspect's ChatGPT account banned before shooting
OpenAI banned a ChatGPT account owned by the suspect of a mass shooting in British Columbia more than half a year before the attack took place. The AI company said it had identified an account owned by Jesse Van Rootselaar in June 2025 through its abuse and enforcement detection, which includes identifying accounts being used to further violence.

OpenAI said it did not alert authorities to the account because its usage did not meet its threshold of a credible or imminent plan for serious physical harm to others. It said its thoughts were with everyone affected by the tragedy and that following the attack it had "proactively" contacted Canadian police with information on the suspect.

Van Rootselaar is suspected of having shot and killed eight people in rural Tumbler Ridge on 12 February in one of the deadliest attacks in Canada's history.

According to the Wall Street Journal, which first reported the story, "about a dozen staffers debated whether to take action on Van Rootselaar's posts." Some had identified the suspect's usage of the AI tool as an indication of real-world violence and encouraged leaders to alert authorities, the US outlet reported. But, it said, leaders of the company decided not to do so.

In a statement, a spokesperson for OpenAI said: "In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities." They said the company would continue to support the police's investigation. The BBC has contacted the Royal Canadian Mounted Police for comment.

OpenAI has said it will uphold its policy of alerting authorities only in cases of imminent risk because alerting them too broadly could cause unintended harm. It has also said that it trains ChatGPT to discourage imminent real-world harm when it identifies a dangerous situation and to refuse to help people who are attempting to use the service for illegal activities. The company added that it is constantly reviewing its referral criteria with experts and that it is reviewing this case for improvements.

The deadly attack on Tumbler Ridge Secondary School saw a further 27 people injured. Van Rootselaar was found dead from a self-inflicted gunshot wound at the school. Police said the suspect was born a biological male but identified as a woman.

Van Rootselaar's mother and step-brother were among the victims of the shooting. Both were found dead at a local residence, police said. The motive for the attack is not yet known.
[2]
OpenAI Flagged a Mass Shooter's Troubling Conversations With ChatGPT Before the Incident, Decided Not to Warn Police
Employees at OpenAI urged leaders to alert the police, but they opted not to.

A grim scoop from the Wall Street Journal: an automated review system at OpenAI flagged disturbing conversations that a future mass shooter was having with the company's flagship AI, ChatGPT -- but, despite being urged by employees at the company to warn law enforcement, OpenAI leadership opted not to.

The 18-year-old Jesse Van Rootselaar ultimately killed eight people including herself and injured 25 more in British Columbia earlier this month, in a tragedy that shook Canada and the world. What we didn't know until today is that employees at OpenAI had already been aware of Van Rootselaar for months, and had debated alerting authorities because of the alarming nature of her conversations with ChatGPT.

In the conversations with OpenAI's chatbot, according to sources at the company who spoke to the WSJ, Van Rootselaar "described scenarios involving gun violence." The sources say they recommended that the company warn local authorities, but that leadership at the company decided against it.

An OpenAI spokesperson didn't dispute those claims, telling the newspaper that it banned Van Rootselaar's account, but decided that her interactions with ChatGPT didn't meet its internal criteria for escalating a concern about a user to police. "Our thoughts are with everyone affected by the Tumbler Ridge tragedy," the company said in a statement to the paper. The spokesperson also said that the company had reached out to assist Canadian police after the shooting took place.

We've known since last year that OpenAI is scanning users' conversations for signs that they're planning a violent crime, though it's not clear whether it has yet successfully headed off an incident before it happened. Its decision to engage in that monitoring in the first place reflects an increasingly long list of incidents in which ChatGPT users have fallen into severe mental health crises after becoming obsessed with the bot, sometimes resulting in involuntary commitment or jail -- as well as a growing number of suicides and murders, leading to numerous lawsuits.

In a sense, how to deal with threatening online conduct is a longstanding question that every social platform has grappled with. But AI brings difficult new questions to the topic, since chatbots can engage with users directly -- sometimes even encouraging bad behavior or otherwise behaving inappropriately.

Like many mass shooters, Van Rootselaar left behind a complicated digital legacy -- including on Roblox -- that investigators are still wading through.
[3]
Canadian trans shooter's disturbing ChatGPT messages alarmed...
ChatGPT-maker OpenAI banned Canadian transgender school shooter Jesse Van Rootselaar's account over his violent behavior on the platform last year -- but never alerted authorities before he gunned down eight people in one of the country's deadliest mass killings.

The 18-year-old high school dropout carried out the second-worst school shooting in Canadian history last week, first slaughtering his mother and stepbrother at home and then storming into Tumbler Ridge Secondary School, where he killed six people and injured 25 more before turning the gun on himself.

The San Francisco tech company said it spotted the deranged shooter's alarming profile in June through abuse-monitoring systems and considered referring the account to the Royal Canadian Mounted Police. However, OpenAI concluded the disturbing activity didn't meet the bar for notifying law enforcement and banned the account for policy violations. Following the mass killing, the company reached out to law enforcement.

"Our thoughts are with everyone affected by the Tumbler Ridge tragedy," an OpenAI spokesperson said. "We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we'll continue to support their investigation."

Canadian authorities confirmed they were contacted by the AI firm after the shooting, adding that an investigation into Van Rootselaar's social media and online activity is underway.

Van Rootselaar launched his horrifying attack at a private residence in the sleepy rural community of Tumbler Ridge on Feb. 10 before continuing the carnage at the school. Six people were found dead inside the school, and the bodies of Van Rootselaar's mother, 39, and stepbrother, 11, were discovered in a local residence, cops said. The victims included one female teacher, three female students and two male students.
OpenAI identified and banned Jesse Van Rootselaar's ChatGPT account in June 2025 after detecting violent content, but decided not to alert Canadian authorities. The suspect later killed eight people in the Tumbler Ridge shooting on February 12, 2026. Internal debates at OpenAI revealed employees urged leaders to contact police, but the company determined the activity didn't meet its threshold for imminent threat.

OpenAI banned a ChatGPT account belonging to Jesse Van Rootselaar more than half a year before the suspect carried out the deadly Tumbler Ridge shooting that killed eight people and injured 27 others on February 12, 2026. The AI company identified the account in June 2025 through its abuse detection systems, which use automated tools and human investigations to flag violent misuse of its platforms [1]. Despite the troubling conversations with ChatGPT that described scenarios involving gun violence, OpenAI did not alert authorities because the activity did not meet its internal threshold for a credible or imminent plan for serious physical harm [2].

According to the Wall Street Journal, about a dozen OpenAI staffers debated whether to take action on Van Rootselaar's posts, with some employees identifying the suspect's usage of the AI tool as an indication of real-world violence and encouraging leaders to contact law enforcement [1]. However, company leadership decided against alerting the Royal Canadian Mounted Police at the time, opting instead to ban the account for policy violations [3].

The decision highlights the ethical challenges tech companies face when monitoring user interactions for signs of potential violence. OpenAI maintains a policy of alerting authorities only in cases of imminent threats, arguing that alerting law enforcement too broadly could cause unintended harm [1]. The company has stated it trains ChatGPT to discourage imminent real-world harm and refuse assistance with illegal activities, while constantly reviewing referral criteria with experts [1].

This incident adds to a growing list of cases in which ChatGPT users have fallen into severe mental health crises after becoming obsessed with the bot, sometimes resulting in suicides and murders that have led to numerous lawsuits [2]. The shooter's conversations with the platform raised red flags internally, yet the content moderation system's threshold for escalation proved insufficient to prevent the tragedy.

The 18-year-old Van Rootselaar, who was born a biological male but identified as a woman, first killed her mother, 39, and stepbrother, 11, at a local residence in British Columbia before attacking Tumbler Ridge Secondary School [1][3]. Six people were found dead inside the school, including one female teacher, three female students, and two male students, with 25 more injured in one of the deadliest attacks in Canadian history [3]. Van Rootselaar was found dead from a self-inflicted gunshot wound at the school, and the motive for the mass killing remains unknown [1].

Following the attack, OpenAI proactively contacted Canadian police with information about the suspect and pledged to support their investigation [1]. Canadian authorities confirmed they were contacted by the AI firm after the shooting and are investigating Van Rootselaar's social media and online activity [3].

The case raises critical questions about when and how AI companies should intervene when monitoring user interactions reveals potential threats. While OpenAI has been scanning users' conversations for signs of planned violent crimes since last year, it remains unclear whether the company has successfully headed off any incidents before they occurred [2]. The situation differs from traditional social media content moderation because chatbots engage directly with users and can sometimes encourage bad behavior or respond inappropriately [2].

OpenAI stated it is reviewing the case for improvements and continues to evaluate its referral criteria with experts [1]. The incident will likely intensify scrutiny of tech companies' responsibility in preventing real-world violence and may prompt calls for clearer guidelines on when AI companies should alert law enforcement about concerning user behavior. As AI systems become more sophisticated and widely used, the balance between user privacy, freedom of expression, and public safety will remain a contentious issue that affects both the technology industry and society at large.

Summarized by Navi