Family sues OpenAI over Tumbler Ridge shooting, alleging company knew of attack plans

Reviewed by Nidhi Govil

The family of Maya Gebala, critically injured in the Tumbler Ridge shooting, has filed a lawsuit against OpenAI. The suit alleges that 12 employees flagged the shooter's ChatGPT conversations as posing imminent risk but leadership failed to alert authorities. The February attack killed eight people, including five children, in one of Canada's deadliest mass shootings.

OpenAI Faces Lawsuit Over Tumbler Ridge Shooting

The family of a victim is suing OpenAI following the devastating Tumbler Ridge shooting that claimed eight lives on February 10, one of Canada's deadliest mass shootings. Cia Edmonds filed the lawsuit on behalf of her daughter Maya Gebala, a 12-year-old who was shot three times while attempting to lock a library door to protect classmates [1]. Maya remains hospitalized with a catastrophic brain injury, permanent cognitive and physical disability, and right-sided hemiplegia after bullets struck her head and neck [2].

Source: Seattle Times

The civil lawsuit, filed in the Supreme Court of British Columbia, centers on allegations that OpenAI possessed specific knowledge that shooter Jesse Van Rootselaar was planning an attack but failed to alert authorities [4]. The 18-year-old suspect killed her mother, her 11-year-old brother, five students, and one educator before dying from a self-inflicted gunshot wound [4].

ChatGPT Account Flagged Months Before Attack

According to the lawsuit, Jesse Van Rootselaar described "various scenarios involving gun violence" to the AI chatbot over several days in late spring or early summer 2025 [1]. An automated review system flagged these conversations, prompting manual review by OpenAI staff. Approximately 12 OpenAI employees identified the posts as "indicating an imminent risk of serious harm to others" and recommended that Canadian law enforcement be informed [5].

The lawsuit alleges that leadership "rebuffed" the request to contact authorities, instead choosing only to ban Van Rootselaar's initial ChatGPT account in June 2025 [1]. OpenAI defended its decision by stating the account did not meet its reporting threshold, which requires evidence of a credible or imminent plan for serious physical harm [1]. The company said it weighed user privacy when making referrals to law enforcement and did not want to distress users by involving police [4].

Inadequate Safeguards Allowed Second Account

After the initial ban, Van Rootselaar created a second ChatGPT account to circumvent the restriction, a fact OpenAI only revealed after Canadian officials summoned the company to Ottawa in late February [4]. The lawsuit claims the suspect used this second account to "continue planning scenarios involving gun violence, including a mass casualty event like the Tumbler Ridge mass shooting" [3]. This breach highlights what critics describe as inadequate safeguards in OpenAI's detection systems to prevent banned users from re-accessing the platform.

Source: The Hill

The suit accuses OpenAI of rushing ChatGPT to market without conducting proper safety studies or implementing strong safeguards [3]. Lawyers argue the company's features were "intentionally designed to foster dependency" between users and the AI chatbot, which assumed the role of a mental health counselor or pseudo-therapist [5]. The lawsuit describes the shooter as viewing ChatGPT as a "trusted confidante" [1].

Questions of Accountability and AI Safety

The legal action seeks undisclosed punitive damages, with the family's lawyers stating that OpenAI's conduct "is reprehensible and morally repugnant" [2]. "The purpose of this lawsuit is to learn the whole truth about how and why the Tumbler Ridge mass shooting happened, to impose accountability, to seek redress for harms and losses, and to help prevent another mass-shooting atrocity in Canada," Rice Parsons Leoni & Elliott LLP stated [2].

Source: Futurism

British Columbia Premier David Eby emerged as a vocal critic, stating that "OpenAI had the opportunity to notify authorities and potentially even to stop this tragedy from happening" [2]. Eby refused meetings with company leadership, demanding to speak directly with OpenAI CEO Sam Altman [2].

OpenAI Pledges Safety Improvements

Following intense pressure, Sam Altman met virtually with Canadian artificial intelligence minister Evan Solomon and Premier Eby on March 5, pledging to strengthen protocols for notifying police about potentially harmful interactions [1]. Altman promised to apologize to the Tumbler Ridge community, though no public apology has yet materialized [3].

In an open letter to Canadian officials on February 26, OpenAI outlined several changes implemented in recent months, including enlisting mental health and behavioral experts to assess complex cases and making its criteria for referral to police "more flexible" [1]. The company acknowledged it would have reported Van Rootselaar's account under the new guidelines and committed to strengthening detection systems to prevent attempts to evade safeguards [1]. OpenAI also pledged to establish a direct point of contact with Canadian law enforcement for quickly flagging cases with potential for real-world violence [1].

Canada's AI minister expressed skepticism, stating "we have not yet seen a detailed plan for how these commitments will be implemented in practice" [1]. Solomon ordered a government safety review of OpenAI's technology and asked the company to apply the new safety standards retroactively by reviewing previously flagged cases [2]. This lawsuit adds to mounting legal challenges facing OpenAI as courts examine whether AI developers bear responsibility for mental health episodes and violence linked to their systems [5].
