OpenAI updates policies after flagging Canada shooter's ChatGPT account but not reporting to police

Reviewed by Nidhi Govil


Eight months before the Tumbler Ridge mass shooting that left nine people dead, OpenAI flagged Jesse Van Rootselaar's ChatGPT account for gun violence scenarios but chose not to contact police. The company has since updated its law enforcement referral protocols, and CEO Sam Altman will meet with Canadian officials, including AI Minister Evan Solomon, as Canada considers government-imposed regulations to fill its AI governance vacuum.

OpenAI Flagged Shooter's Account But Didn't Meet Reporting Threshold

Eight months before Jesse Van Rootselaar killed eight people and herself in the Tumbler Ridge mass shooting on Feb. 10, OpenAI had already seen warning signs. The company's automated review system flagged Van Rootselaar's ChatGPT account for interactions involving gun violence scenarios [1]. Roughly a dozen employees were aware of the flagged content, and some advocated contacting police [1]. Instead, OpenAI banned the account in June 2025 but did not refer it to law enforcement because it didn't meet the "threshold required" at the time [1]. The 18-year-old suspect killed her mother, her 11-year-old half-brother, and six others at Tumbler Ridge Secondary School before dying of a self-inflicted wound [1].

Source: New York Post


Second Account Reveals Detection System Failures

The situation became more troubling when OpenAI revealed that Van Rootselaar had evaded the ban by creating a second ChatGPT account, which went undetected until after police released her name [2][4]. The ban evasion exposed gaps in the detection systems designed to prevent banned users from creating new accounts [4]. OpenAI's vice president for global policy, Ann O'Leary, acknowledged the company discovered the second account only after the Royal Canadian Mounted Police announced Van Rootselaar's identity [4]. The tragedy is Canada's deadliest rampage since 2020 [4].

Source: Market Screener


OpenAI Updates Law Enforcement Referral Protocols

Following intense pressure from Canadian officials, OpenAI announced immediate changes to its AI safety protocols. "With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today," O'Leary wrote in a letter to Canada's AI Minister Evan Solomon [2][4]. The company committed to strengthening its protocols for reporting potential threats when chatbot interactions cross the line into imminent and credible risk [4]. OpenAI will also establish direct communication channels with law enforcement so that Canadian authorities receive information quickly when the company identifies potential for real-world violence. Additionally, the company pledged to strengthen detection systems that catch attempts to evade safeguards and to prioritize identifying the highest-risk offenders [4].

Canada Demands Concrete Action and Threatens Legislation

Canadian officials summoned OpenAI representatives to Ottawa and made clear their expectations for rapid changes [5]. "The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser said [5]. Evan Solomon stated that while the company showed willingness to strengthen protocols, "we have not yet seen a detailed plan for how these commitments will be implemented in practice" [3]. Solomon will meet with Sam Altman next week to seek further clarity and to ensure the commitments translate into concrete action [3]. British Columbia Premier David Eby also secured a meeting with Altman, though he called OpenAI's assurances "cold comfort" for the families of Tumbler Ridge [4].

Source: NYT


Canada's AI Governance Framework Vacuum Exposed

The tragedy reveals a critical gap in Canada's AI governance framework. Federal AI Minister Evan Solomon said he was "deeply disturbed" by the revelations, adding that the government is reviewing "a suite of measures" and that "all options are on the table" [3]. But those options remain undefined because critical legislative tools no longer exist. The Artificial Intelligence and Data Act, embedded in Bill C-27, was supposed to be Canada's answer to AI regulation, while the Online Harms Act would have addressed harmful digital content [1]. Both died when Parliament was prorogued in January 2025 [1]. What remains is a voluntary code of conduct with no legal force and no consequences for non-compliance [1].

The Challenge of Assessing Violent Ideation Through AI

The case highlights fundamental questions about corporate responsibility when AI companies detect violent ideation. Chatbot interactions differ fundamentally from social media: they're private, intimate, and designed to be accommodating, with users routinely disclosing fears, fantasies, and violent thoughts to systems engineered to respond with conversational warmth [1]. OpenAI's threat assessment was conducted by software engineers and content moderators, not forensic psychologists trained to distinguish between ideation and intent [1]. The company cited risks of "over-enforcement" and the distress unannounced police visits can cause young people [1]. Canada's privacy legislation compounds the challenge: the Personal Information Protection and Electronic Documents Act permits disclosure without consent in emergencies, but that provision was drafted for clear-cut crises, not the probabilistic threat indicators that chatbot interactions generate [1]. OpenAI has faced multiple wrongful death lawsuits, including a case in which ChatGPT allegedly encouraged paranoid beliefs before a man killed his mother and himself, and suits involving teenagers who planned suicides [5]. Experts argue Canada needs binding legislation with clear escalation thresholds developed with mental health professionals and law enforcement, an independent digital safety commission for threat assessment, and modernized privacy legislation providing explicit legal clarity for AI-specific disclosure.
