OpenAI Flagged Jesse Van Rootselaar's ChatGPT Account Months Before Tumbler Ridge Shooting

Reviewed by Nidhi Govil

OpenAI identified and banned Jesse Van Rootselaar's ChatGPT account in June 2025 after detecting violent content, but decided not to alert Canadian authorities. The suspect later killed eight people in the Tumbler Ridge shooting on February 12, 2026. Internal debate at OpenAI revealed that some employees urged leaders to contact police, but the company determined the activity did not meet its threshold for an imminent threat.

OpenAI Detected Violent Content But Did Not Alert Authorities

OpenAI banned a ChatGPT account belonging to Jesse Van Rootselaar about eight months before the suspect carried out the Tumbler Ridge shooting, which killed eight people and injured 27 others on February 12, 2026. The AI company identified the account in June 2025 through its abuse detection systems, which combine automated tools and human investigations to flag violent misuse of its platforms [1]. Despite troubling conversations with ChatGPT that described scenarios involving gun violence, OpenAI did not alert authorities because the activity did not meet its internal threshold for a credible or imminent plan for serious physical harm [2].

According to the Wall Street Journal, about a dozen OpenAI staffers debated whether to act on Van Rootselaar's conversations, with some employees reading the suspect's use of the AI tool as a warning sign of real-world violence and urging leaders to contact law enforcement [1]. Company leadership, however, decided against alerting the Royal Canadian Mounted Police at the time, opting instead to ban the account for policy violations [3].

Internal Debate Over AI Policy and Tech Company Responsibility

The decision highlights the ethical challenges tech companies face when monitoring user interactions for signs of potential violence. OpenAI alerts authorities only in cases of imminent threat, arguing that referring users to law enforcement too broadly could cause unintended harm [1]. The company has said it trains ChatGPT to discourage real-world harm and to refuse assistance with illegal activities, and that it continually reviews its referral criteria with experts [1].

This incident adds to a growing list of cases in which ChatGPT users have fallen into severe mental health crises after becoming fixated on the bot, sometimes culminating in suicides and killings that have prompted numerous lawsuits [2]. The shooter's conversations with the platform raised red flags internally, yet the moderation system's threshold for escalation proved insufficient to prevent the tragedy.

The Tumbler Ridge Attack and Its Aftermath

The 18-year-old Van Rootselaar, who was born male and identified as a woman, first killed her mother, 39, and stepbrother, 11, at a residence in Tumbler Ridge, British Columbia, before attacking Tumbler Ridge Secondary School [1][3]. Six people were found dead inside the school: one female teacher, three female students, and two male students. Twenty-five more were injured in one of the deadliest attacks in Canadian history [3]. Van Rootselaar was found dead at the school from a self-inflicted gunshot wound, and the motive for the mass killing remains unknown [1].

Following the attack, OpenAI proactively contacted Canadian police with information about the suspect and pledged to support their investigation [1]. Canadian authorities confirmed they were contacted by the AI firm after the shooting and are investigating Van Rootselaar's social media and user activity [3].

Questions About AI Ethics and Future Implications

The case raises critical questions about when and how AI companies should intervene when monitoring of user interactions reveals potential threats. While OpenAI has been scanning users' conversations for signs of planned violent crimes since 2025, it remains unclear whether the company has headed off any incidents before they occurred [2]. The situation differs from traditional social media content moderation because chatbots engage directly with users and can sometimes encourage bad behavior or respond inappropriately [2].

OpenAI stated that it is reviewing the case for improvements and continues to evaluate its referral criteria with experts [1]. The incident will likely intensify scrutiny of tech companies' responsibility for preventing real-world violence and may prompt calls for clearer guidelines on when AI firms should alert law enforcement to concerning user behavior. As AI systems grow more capable and widely used, the balance among user privacy, freedom of expression, and public safety will remain a contentious issue for both the technology industry and society at large.
