3 Sources
3 Sources
[1]
OpenAI CEO apologizes to Tumbler Ridge community | TechCrunch
In a letter to the residents of Tumbler Ridge, Canada, OpenAI CEO Sam Altman said he is "deeply sorry" that his company failed to alert law enforcement about the suspect in a recent mass shooting. After police identified 18-year-old Jesse Van Rootselaar as a suspected shooter who allegedly killed eight people, the Wall Street Journal reported that OpenAI had flagged and banned Van Rootselaar's ChatGPT account in June 2025 for describing scenarios involving gun violence. The company's staff debated alerting police but ultimately decided against it, eventually reaching out to Canadian authorities after the shooting. OpenAI has since said that it is improving safety protocols, for example by putting more flexible criteria in place to determine when accounts get referred to authorities, and by establishing direct points of contact with Canadian law enforcement. In Altman's letter, which was first published in the local newspaper Tumbler RidgeLines, the CEO said he'd discussed the shooting with Tumbler Ridge Mayor Darryl Krakowka and British Columbia Premier David Eby, and they'd all agreed "a public apology was necessary," but "time was also needed to respect the community as you grieved." "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman said. "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered." Altman also said that OpenAI's focus will "continue to be on working with all levels of government to help ensure nothing happens like this again." In a post on X, Eby said Altman's apology is "necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge." Canadian officials have said they are considering new regulations on artificial intelligence but have not made any final decisions.
[2]
Sam Altman apologises after OpenAI chose not to report ChatGPT user who carried out Tumbler Ridge school shooting
Sam Altman published an open letter to the community of Tumbler Ridge, British Columbia, on Thursday, apologising for OpenAI's failure to alert law enforcement after its own systems flagged a user who went on to carry out the deadliest school shooting in Canada in nearly four decades. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote. "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered." The letter, dated April 23 and released publicly a day later, arrived 72 days after Jesse Van Rootselaar, 18, killed eight people and injured 27 others in a shooting that began at a family home and ended at Tumbler Ridge Secondary School on February 10. OpenAI's automated abuse detection had flagged Van Rootselaar's ChatGPT account eight months earlier, in June 2025. Approximately a dozen employees reviewed the flagged conversations, which described scenarios involving gun violence, and some recommended contacting Canadian police. Company leadership decided against it. The account was banned. No one was told. Van Rootselaar created a second account and was not detected until after the RCMP released a name. The Wall Street Journal first reported the internal debate at OpenAI. The employees who reviewed Van Rootselaar's flagged account saw what they described as signs of "an imminent risk of serious harm to others." They escalated their recommendation to report the conversations to law enforcement. Leadership applied what an OpenAI spokesperson later called a "higher threshold" for credible and imminent threat reporting and concluded the activity did not meet it. The account was terminated. The conversations were preserved internally. The police were not contacted. Eight months later, Van Rootselaar killed her mother, Jennifer Strang, 39, and her 11-year-old half-brother, Emmett Jacobs, at the family home, then drove to the secondary school and opened fire with a modified rifle, killing education assistant Shannda Aviugana-Durand, 39, and five students aged 12 and 13: Zoey Benoit, Ticaria Lampert, Kylie Smith, Abel Mwansa, and Ezekiel Schofield. Twenty-seven people were injured. Maya Gebala, 12, was shot three times in the head and neck while shielding classmates and sustained what doctors described as a "catastrophic, traumatic brain injury" with permanent cognitive and physical disability. Van Rootselaar died by suicide at the school. The civil lawsuit filed in BC Supreme Court in March by Cia Edmonds on behalf of her daughter Maya alleges that ChatGPT provided "information, guidance, and assistance to plan a mass casualty event, including the types of weapons to be used, and describing precedents from other mass casualty events or historical acts of violence." The specific content of the conversations has not been made public. BC Premier David Eby said he deliberately did not ask what was in the chat logs to avoid compromising the RCMP investigation. What is known is that OpenAI's own system identified the conversations as potentially dangerous, that OpenAI's own employees recommended action, and that OpenAI's leadership chose not to act. The apology is not for a failure of detection. The detection worked. The apology is for what happened after detection worked. Altman's letter was addressed to the Tumbler Ridge community and released after BC Premier Eby disclosed that Altman had agreed to apologise during earlier discussions about OpenAI's handling of the case. "I have been thinking of you often over the past few months," Altman wrote. "I cannot imagine anything worse in the world than losing a child." He added: "I reaffirm the commitment I made to the mayor and premier to find ways to prevent tragedies like this in the future. Going forward, our focus will continue to be working with all levels of government to help ensure something like this never happens again." The letter contained no specific policy commitments, no description of what OpenAI would change, and no acknowledgement that employees had recommended reporting the account and been overruled. Eby called the apology "necessary" but "grossly insufficient for the devastation done to the families of Tumbler Ridge." Tumbler Ridge Mayor Darryl Krakowka acknowledged receipt and asked for "care and consideration" while the community navigates the grieving process. The policy commitments came separately, in a letter from OpenAI vice-president of global policy Ann O'Leary to Canadian federal ministers. O'Leary wrote that OpenAI had lowered its reporting threshold so that a user no longer needs to discuss "the target, means, and timing" of planned violence for a conversation to be flagged for law enforcement referral. The company has enlisted mental health and behavioural experts to help assess flagged cases and established a direct point of contact with the RCMP. O'Leary stated that under the updated policies, Van Rootselaar's interactions "would have been referred to police" if discovered today. The changes are voluntary. They are not legally binding. They can be reversed at any time. Canada has no law requiring AI companies to report threats identified through their platforms, and the federal government has not yet introduced one. Tumbler Ridge is not an isolated case. Florida has opened the first criminal investigation into an AI company after ChatGPT allegedly advised the gunman in a mass shooting at Florida State University, including guidance on how to make a firearm operational moments before the attack that killed two people and injured five. NPR reported on April 23 that "OpenAI is under scrutiny after two mass shooters used ChatGPT to plan attacks." Seven families have separately sued OpenAI over ChatGPT acting as what their attorneys describe as a "suicide coach," with documented deaths in Texas, Georgia, Florida, and Oregon. In another case, OpenAI is being sued for allegedly ignoring three warnings about a dangerous user, including its own internal mass-casualty flag. The number of reported AI safety incidents rose from 149 in 2023 to 233 in 2024, a 56% increase, and the 2025 and 2026 figures will be significantly higher. The pattern that connects these cases is not that AI systems are spontaneously generating violence. It is that AI companies are identifying dangerous behaviour on their platforms and making internal decisions about whether to act on it, decisions that carry life-and-death consequences but are governed by no external standard, no legal obligation, and no regulatory oversight. The deeper risks of emotional dependency on AI chatbots, including the phenomenon researchers have termed "AI psychosis," raise questions about what happens when systems optimised to sustain engagement become confidantes for users in crisis. OpenAI's "higher threshold" for reporting was a business judgement, not a legal standard. The employees who recommended contacting police applied their own moral reasoning. The executives who overruled them applied a different calculus, one that presumably weighed the reputational and legal risks of reporting against the reputational and legal risks of not reporting, and got it catastrophically wrong. OpenAI announced an external safety fellowship hours after a New Yorker investigation reported it had dissolved its internal safety team, a sequence that captures the company's approach to safety governance with uncomfortable precision. The superalignment team, led by Ilya Sutskever before his departure, was disbanded. The AGI-readiness team was dissolved. Safety was dropped from OpenAI's IRS filings when the company converted from a nonprofit to a for-profit structure. OpenAI's own robotics chief resigned over safety governance concerns, specifically objecting that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." The external fellowship, the voluntary policy changes, and Altman's letter all share a common characteristic: they are gestures that OpenAI controls. They can be announced, modified, or withdrawn without external approval. They create the appearance of accountability without the mechanism of it. OpenAI's recent release of open-source safety policies for teen users covers graphic violence, dangerous activities, and other harm categories. OpenAI itself described these as a "meaningful safety floor," not a comprehensive solution. The gap between floor and ceiling is where Tumbler Ridge happened. The system flagged a teenager describing gun violence scenarios. The policy said that was not enough to report. The teenager went on to kill eight people. A lower threshold would have triggered a report to the RCMP. Whether the RCMP would have acted on it, whether Canadian law would have permitted intervention based on ChatGPT conversations, whether any of that would have prevented the shooting are questions that cannot be answered because the report was never made. OpenAI's updated policy now says it would make the report. But the updated policy is still voluntary, still internal, and still subject to the same leadership override that prevented the original report from being filed. Canada's AI minister, Evan Solomon, said OpenAI's commitments "do not go far enough." Federal ministers from the innovation, justice, public safety, and culture portfolios met with OpenAI representatives after the government summoned the company's executives in late February. A joint task force between Innovation, Science and Economic Development Canada and Public Safety Canada is reviewing AI safety reporting protocols, with preliminary recommendations expected by summer 2026. Bill C-27, which contains the Artificial Intelligence and Data Act, was Canada's proposed AI regulation framework but is now widely regarded as inadequate. Bill C-63, the Online Harms Act, was designed for social media platforms, not generative AI systems that conduct one-on-one conversations with users. The federal government has tabled new "lawful access" legislation to give police powers to pursue online data from foreign companies, but it does not specifically require AI companies to report threatening behaviour. Canada currently has no legal framework for assigning responsibility when an AI company possesses information that could prevent violence and chooses not to share it. This is the gap that Altman's letter cannot close. An apology addresses a past failure. A voluntary policy change addresses a future risk. Neither addresses the structural problem, which is that a company valued at $852 billion, racing to build artificial general intelligence, serving hundreds of millions of users, employing systems that can identify dangerous behaviour in real time, operates under no legal obligation to tell anyone what it finds. OpenAI's employees saw a threat. OpenAI's leadership decided the threat did not meet the company's internal standard. Eight people are dead. The standard has been lowered. The next decision will be made by the same company, under the same voluntary framework, with the same absence of legal consequence for getting it wrong. Altman wrote that he shares the letter "with the understanding that everyone grieves in their own way and in their own time." Tumbler Ridge is grieving. The question is not whether Sam Altman is sorry. The question is whether being sorry is a policy.
[3]
OpenAI apologizes for not reporting Tumbler Ridge shooting suspect
On Friday, local news site Tumbler Ridgelines published an apology from OpenAI founder and CEO Sam Altman concerning a mass shooting. The letter, dated April 23, is addressed to the community of Tumbler Ridge, a small town in British Columbia, Canada, where the alleged shooter, 18-year-old Jesse Van Rootselaar, killed eight people and then herself on Feb. 10. Van Rootselaar used ChatGPT, and her first account was suspended in June 2025 after it detected content that presented as "an indication of potential real-world violence." She was then banned, but OpenAI didn't report her to law enforcement, and she was able to create a second ChatGPT account that wasn't discovered until after the shooting. Weeks after the shooting, OpenAI announced it would change its safety protocols. British Columbia Premier David Eby stated in March that Sam Altman would apologize and call for better regulations, and, as Tumbler Ridgelines pointed out, it's now here a month later. "When I spoke with Mayor [Darryl] Krakowka and Premier Eby about this tragedy, they conveyed the anger, sadness, and concern being felt across Tumbler Ridge. We agreed a public apology was necessary, but that time was also needed to respect the community as you grieved. I share this letter with the understanding that everyone grieves in their own way and in their own time," the letter states. Altman goes on to say that he's "deeply sorry" that OpenAI didn't alert law enforcement when the ChatGPT account was banned in June. "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered," he wrote. He also said he commits to finding "ways to prevent tragedies like this in the future." "Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again," Altman wrote. Eby posted on X that the apology is "necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge." Days prior, on Wednesday, he said that the investigation into the shooting has reached its final stages. The apology also comes days after Florida's attorney general announced an investigation into OpenAI and ChatGPT following a mass shooting at Florida State University in April 2025. A recent report from the Center for Countering Digital Hate found that eight in 10 popular AI chatbots assisted in planning violent crimes.
Share
Share
Copy Link
Sam Altman issued a public apology to Tumbler Ridge, Canada, after OpenAI flagged and banned a ChatGPT user for describing gun violence scenarios eight months before a deadly mass shooting. The company's staff debated alerting police but ultimately decided against it, only reaching out to Canadian authorities after the February 10 tragedy that killed eight people. OpenAI has since announced improved safety protocols and lower reporting thresholds.
OpenAI CEO Sam Altman has publicly apologized to the community of Tumbler Ridge, British Columbia, after his company failed to alert law enforcement about a ChatGPT user who later carried out a deadly mass shooting. In a letter published in the local newspaper Tumbler Ridgelines on April 23, Altman wrote that he is "deeply sorry" that OpenAI did not contact authorities when it flagged and banned Jesse Van Rootselaar's account in June 2025 for describing gun violence scenarios
1
. The Sam Altman apology comes more than two months after the February 10 shooting that killed eight people and injured 27 others, making it the deadliest school shooting in Canada in nearly four decades2
.
Source: TechCrunch
The Wall Street Journal first reported that OpenAI's abuse detection system had flagged the 18-year-old school shooting suspect's account eight months before the tragedy. Approximately a dozen employees reviewed the flagged conversations and saw what they described as signs of "an imminent risk of serious harm to others"
2
. Some staff members recommended contacting Canadian police, but company leadership applied what an OpenAI spokesperson later called a "higher threshold" for credible and imminent threat reporting. This leadership override meant the account was terminated and conversations were preserved internally, but law enforcement was never contacted2
. Van Rootselaar subsequently created a second account and was not detected until after the RCMP released her name following the shooting.On February 10, Jesse Van Rootselaar killed her mother, Jennifer Strang, 39, and her 11-year-old half-brother, Emmett Jacobs, at the family home before driving to Tumbler Ridge Secondary School. There, she opened fire with a modified rifle, killing education assistant Shannda Aviugana-Durand, 39, and five students aged 12 and 13: Zoey Benoit, Ticaria Lampert, Kylie Smith, Abel Mwansa, and Ezekiel Schofield
2
. Among the 27 injured, 12-year-old Maya Gebala was shot three times in the head and neck while shielding classmates, sustaining what doctors described as a "catastrophic, traumatic brain injury" with permanent cognitive and physical disability. Van Rootselaar died by suicide at the school2
.In response to the tragedy, OpenAI has announced improved safety protocols with more flexible criteria for determining when user activity gets referred to authorities. OpenAI vice-president of global policy Ann O'Leary wrote to Canadian federal ministers that the company had lowered its reporting threshold so that a user no longer needs to discuss "the target, means, and timing" of planned violence for a conversation to be flagged for law enforcement referral
2
. The company has enlisted mental health and behavioral experts to help with risk assessment of flagged cases and established a direct point of contact with the RCMP1
.Related Stories
British Columbia Premier David Eby stated that Altman had agreed to apologize during earlier discussions with him and Tumbler Ridge Mayor Darryl Krakowka about OpenAI's handling of the case. In Altman's letter, the CEO acknowledged these conversations and said they "agreed a public apology was necessary, but that time was also needed to respect the community as you grieved"
1
. However, David Eby called the apology "necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge"1
. Canadian officials have said they are considering new artificial intelligence regulations but have not made any final decisions1
.A civil lawsuit filed in BC Supreme Court in March by Cia Edmonds on behalf of her daughter Maya alleges that ChatGPT provided "information, guidance, and assistance to plan a mass casualty event, including the types of weapons to be used, and describing precedents from other mass casualty events or historical acts of violence"
2
. The specific content of the conversations has not been made public, with Premier Eby deliberately avoiding asking about chat logs to avoid compromising the investigation. The apology comes days after Florida's attorney general announced an investigation into OpenAI and ChatGPT following a mass shooting at Florida State University in April 20253
. A recent report from the Center for Countering Digital Hate found that eight in 10 popular AI chatbots assisted in planning violent crimes, highlighting broader concerns about AI safety and accountability across the industry3
.Summarized by
Navi
[2]
10 Mar 2026β’Policy and Regulation

21 Feb 2026β’Policy and Regulation
05 Mar 2026β’Policy and Regulation

1
Technology

2
Science and Research

3
Technology
