35 Sources
[1]
Danger was flagged, but not reported: What the Tumbler Ridge tragedy reveals about Canada's AI governance vacuum
Eight months before the Tumbler Ridge mass shooting, OpenAI knew something was wrong. The company's automated review system had flagged Jesse Van Rootselaar's ChatGPT account for interactions involving scenarios of gun violence. Roughly a dozen employees were aware. Some advocated contacting police. Instead, OpenAI banned the account, but didn't refer it to law enforcement because it didn't meet the "threshold required" at the time. On Feb. 10, Van Rootselaar killed eight people (her mother, her 11-year-old half-brother and six others at Tumbler Ridge Secondary School) before dying of a self-inflicted wound. This case is not simply about one company's misjudgment. It exposes the absence of any Canadian legal framework for assigning responsibility when an AI company possesses information that could prevent violence. As a researcher in health ethics and AI governance at Simon Fraser University, I study how algorithmic systems reshape decision-making in high-stakes settings. The Tumbler Ridge tragedy sits squarely at this intersection: a private corporation made a clinical-style risk assessment it was never equipped to make, in a legal environment that gave it no guidance. The digital confessional problem Generative AI chatbots are not social media. Social media functions as a public square where posts can be monitored and flagged by other users. Chatbot interactions are private, intimate and designed to be accommodating. Users routinely disclose fears, fantasies and violent ideations to systems engineered to respond with conversational warmth. In clinical practice, this kind of disclosure triggers a well-established duty. The Tarasoff principle, adopted across Canadian provinces through mental health legislation, imposes upon therapists a duty to warn if they determine that a patient poses a credible threat to an identifiable person, even if it means breaching confidentiality. 
But that duty rests on the clinical judgment of trained professionals who understand the difference between ideation and intent. Arguably, OpenAI tried to mirror this clinical standard. But the people making these assessments are software engineers and content moderators, not forensic psychologists. The company itself acknowledged the tension, citing the risks of "over-enforcement" and the distress of unannounced police visits for young people.

The real question is not whether OpenAI's reasoning was defensible in isolation. It's whether a private corporation should be making this determination at all.

A vacuum where legislation should be

Federal AI Minister Evan Solomon, who intends to meet with OpenAI representatives on Feb. 24 about this issue, said on Feb. 21 that he was "deeply disturbed" by the revelations, adding the federal government is reviewing "a suite of measures" and that "all options are on the table." But those options remain undefined because the legislative tools that would have enabled them no longer exist.

The Artificial Intelligence and Data Act, embedded in Bill C-27, was supposed to be Canada's answer to AI regulation. The Online Harms Act (Bill C-63) would have addressed harmful digital content. Both died on the order paper when Parliament was prorogued in January 2025. What remains is a voluntary code of conduct with no legal force and no consequences for non-compliance.

When OpenAI flagged Van Rootselaar's account, its only obligation was to its own internal policy. Banning the account resolved the company's liability while leaving a person expressing violent ideations disconnected from any intervention pathway.

Canada's privacy law compounds the problem. The Personal Information Protection and Electronic Documents Act does contain an emergency exception: section 7(3)(e) permits disclosure without consent "to a person who needs the information because of an emergency that threatens the life, health or security of an individual."
But this provision was drafted for clear-cut crises, not for the probabilistic threat indicators that AI chatbot interactions generate. For a foreign corporation navigating this ambiguity, uncertainty favours inaction.

What Canada needs now

Canada's next attempt at digital governance must recognize that human-to-AI interactions are fundamentally different from social media posts. Three elements are essential:

Binding legislation with clear legal thresholds for when AI companies must refer flagged interactions to authorities. These thresholds must be developed with mental health professionals, law enforcement and privacy experts, not left to individual corporations.

An independent digital safety commission as a third-party triage body. When an AI company identifies severely concerning interactions, it should refer the case to trained threat-assessment professionals rather than making the call internally or triggering an immediate armed police response.

Modernized privacy legislation that provides explicit legal clarity for AI-specific disclosure, resolving the ambiguity that currently rewards doing nothing.

At the AI summit that took place in New Delhi from Feb. 16 to 20, 86 countries, including Canada, pledged to promote "safe, trustworthy and robust" AI. No concrete commitments followed. OpenAI's Sam Altman stressed the urgency of international AI regulation and proposed an international body for AI safety norms modelled on the International Atomic Energy Agency, an irony not lost on anyone following the Tumbler Ridge revelations.

Minister Solomon says all options are on the table. Families of shooting victims, survivors and a devastated community in Tumbler Ridge are living with the cost of leaving regulation options open for too long.
[2]
OpenAI Would've Flagged Canada Mass Shooting Suspect Under New Rules
OpenAI will improve its detection systems and develop direct points of contact with Canadian law enforcement to ensure police receive information quickly when the company identifies the potential for real-world violence.

OpenAI Inc. told Canadian lawmakers it would have referred a banned ChatGPT user who later became the chief suspect in one of the country's worst-ever mass shootings to police under newly updated policies. The artificial intelligence company also revealed Thursday that the suspected killer in the Tumbler Ridge, British Columbia, tragedy had a second ChatGPT account, which it failed to detect until after police released her name.

OpenAI said last week that it had flagged and banned an account held by the crime's sole suspect eight months before February's killings. The massacre killed nine, including the alleged perpetrator, 18-year-old Jesse Van Rootselaar, who appeared to die by suicide. Van Rootselaar's ChatGPT account was flagged in June 2025 by systems that scan for misuse, including potential violent activity. The company considered referring the account to law enforcement at the time, but found no credible or imminent threat and determined it didn't meet the threshold. That's sparked anger and questions from senior Canadian politicians, who summoned the company to Ottawa this week to discuss its policies.

"With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today," Ann O'Leary, the company's vice president of global policy, wrote in a letter to Canada's AI Minister Evan Solomon, following his meeting with company executives. OpenAI said it would improve its detection systems to catch attempts to evade its safeguards, and added that it would also develop direct points of contact with Canadian law enforcement to ensure police receive information quickly when the company identifies the potential for real-world violence.
A spokesperson for Solomon didn't immediately respond to a request for comment.
[3]
Canadian minister to meet with OpenAI's Altman to discuss safety measures after shooting
OTTAWA, Feb 27 (Reuters) - Canada's minister in charge of artificial intelligence said on Friday he will meet with OpenAI CEO Sam Altman next week to discuss how the ChatGPT maker plans to boost safety protocols after a recent school shooting in British Columbia. The Canadian government has urged OpenAI to boost its safety protocols quickly and warned Ottawa could effect change through legislation after the company said it had not contacted police about an account belonging to the alleged shooter, Jesse Van Rootselaar, that it had banned. "While we note their willingness to strengthen law enforcement referral protocols, establish direct points of contact with Canadian authorities, and enhance safeguards, we have not yet seen a detailed plan for how these commitments will be implemented in practice," Minister Evan Solomon said in a statement. Solomon was responding to a letter he received from OpenAI's vice president of global policy on Thursday in which the firm said it will set up a direct point of contact with Canadian law enforcement and improve detection of repeat violators of its "violent activities" policy to boost safety protocols. Solomon said he will meet with Altman "to seek further clarity and to ensure that the commitments made are translated into concrete action." Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life in Tumbler Ridge. OpenAI said it banned her ChatGPT account last year for policy violations. Solomon said he will also meet with other major platforms in Canada in the coming weeks. "All options remain on the table as we assess what further steps may be necessary," he added. Reporting by Ryan Patrick Jones and Ismail Shakil; Editing by Rod Nickel
[4]
OpenAI says Canada mass shooter evaded ban with second ChatGPT account
OTTAWA, Ontario (AP) -- ChatGPT-maker OpenAI said Thursday the shooter in one of the worst school shootings in Canada's history got around a ban on her problematic use of the service by having a second account. The revelation came as the San Francisco tech company outlined in a letter to Canada's government some "immediate steps" it was taking in response to the killings, and that if these had been in place at the time, police would have been informed of the activity on the account. OpenAI's vice president for global policy, Ann O'Leary, said the company only discovered the second account after Jesse Van Rootselaar's name was announced by the Royal Canadian Mounted Police, who said Van Rootselaar killed eight people and then herself in Tumbler Ridge, British Columbia, on Feb. 10. She said the shooter somehow evaded systems to prevent banned users from creating new accounts, and Van Rootselaar's second account was shared with law enforcement upon its discovery. The letter said OpenAI is committed to strengthening its detection systems to better prevent attempts to evade its safeguards and "prioritize identifying the highest risk offenders." The shooter's first ChatGPT account was shut down in June 2025, the letter said, after a violation of its usage policy. The letter said OpenAI's automated system detected the account, and it was then sent to human review to determine whether its policies were violated and whether the account warranted referral to law enforcement. "Based on what we could see at that time the account was banned in June 2025, we did not identify credible and imminent planning that met our threshold to refer the matter to law enforcement," O'Leary said. Speaking to reporters on Thursday, British Columbia Premier David Eby said Sam Altman, the CEO of OpenAI, has agreed to meet with him. 
Eby said his government was told by OpenAI that changes to the thresholds to its protocols would have resulted in police being informed about Van Rootselaar's ChatGPT activity, had they been in place before the killings. But this was "cold comfort" for the families of Tumbler Ridge, he said. In her letter, O'Leary also said the firm will strengthen protocols about contacting police "when conversations cross the line into an imminent and credible risk." "With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today," she said. She said OpenAI will develop a direct point of contact with Canadian law enforcement. "The events in Tumbler Ridge are an unspeakable tragedy, and our hearts remain with the victims, their families, and the entire community," O'Leary said in the letter. O'Leary thanked Canada's Artificial Intelligence Minister Evan Solomon for convening a meeting Tuesday to discuss how to help prevent similar tragedies in the future. "In our meeting, you and the other Ministers stressed that no community should have to face this tragedy," O'Leary said. "We agree." Solomon called OpenAI representatives to Ottawa to explain its safety procedures and decision-making processes. Solomon said "all options are on the table" as the government develops a "suite of measures" to address online harms and other digital policy issues. The RCMP said Van Rootselaar first killed her mother and stepbrother at the family home before attacking the nearby school. Van Rootselaar had a history of mental health contacts with police. The motive for the shooting remains unclear. The attack was Canada's deadliest rampage since 2020, when a gunman in Nova Scotia killed 13 people and set fires that left another nine dead.
[5]
Canadian government demands safety changes from OpenAI
Canadian officials summoned leaders from OpenAI to Ottawa this week to address safety concerns about ChatGPT. The crux of the government concerns was that OpenAI did not notify authorities when it banned the account of a user who allegedly committed a mass shooting in British Columbia earlier this month. "The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser said of the company and its AI chatbot. It's unclear what those government-led changes or rules might be. There have been two previous, unsuccessful attempts to pass an online harms act in Canada. A recent report by The Wall Street Journal claimed that in 2025, some OpenAI employees flagged the account of the alleged shooter, Jesse Van Rootselaar, as containing potential warnings of committing real-world violence and called for leadership to notify law enforcement. Although Van Rootselaar's account was banned for policy violations, a company rep said that the account activity did not meet OpenAI's criteria for engaging the local police. "Those reports were deeply disturbing, reports saying that OpenAI did not contact law enforcement in a timely manner," said Canadian Artificial Intelligence Minister Evan Solomon ahead of the discussion with company leaders. "We will have a sit-down meeting to have an explanation of their safety protocols and when they escalate and their thresholds of escalation to police, so we have a better understanding of what's happening and what they do." OpenAI has been implicated in multiple wrongful death suits. In a December 2025 lawsuit, the company's ChatGPT was accused of encouraging "paranoid beliefs" before a man killed his mother and himself.
It is also at the center of one of several wrongful death lawsuits against the makers of AI chatbots for helping teenagers plan and commit suicides.
[6]
OpenAI vows safety policy changes after Tumbler Ridge shooting
OpenAI says it will strengthen its safety measures after the company failed to alert police about the Tumbler Ridge shooting suspect's ChatGPT account despite it being flagged internally months before the attack. In an open letter to Canadian officials, the company said the suspect was able to create a second account after the first was banned, slipping past its internal detection systems. It said it has also since changed how it reports accounts to police, and that the suspect's activity would be referred to law enforcement if it were flagged today. An account linked to the suspect, 18‑year‑old Jesse Van Rootselaar, was banned by OpenAI in June 2025 -- seven months before the shooting. Eight people were killed in the 10 February attack, which took place at a residence and the local secondary school in Tumbler Ridge, a small town in British Columbia, Canada. The victims included the suspect's mother and 11‑year‑old stepbrother, as well as five young school children and an educator. Van Rootselaar died of a self-inflicted gunshot wound, police said. The shooting was one of the deadliest in Canadian history. Canadian officials met OpenAI senior staff earlier this week in Ottawa, after the company revealed it had shut down a ChatGPT account used by the suspect in June 2025 for violating usage terms. That account was not reported to police, however, because it did not at the time meet its threshold for "credible and imminent planning" of serious violence, the company said. In its letter to Canadian officials on Thursday, penned by OpenAI's vice-president of global policy and shared with media outlets, the company said it had implemented a series of changes in recent months, including enlisting the help of "mental health and behavioural experts" to assess cases and making the criteria for referral to police "more flexible". Because of the changes, OpenAI said it would have reported the suspect's ChatGPT account under the new guidelines. 
The letter does not specify when those new protocols took effect. The company also revealed that the suspect was able to create a second account, despite being flagged by OpenAI systems in the past. That second account was shared with police after the shooting, it said. "We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest risk offenders," the company wrote. OpenAI said it will also establish a direct point of contact with Canadian law enforcement so it can quickly flag any possible future cases with "potential for real world violence". That direct line of communication is one of the requests made by Canadian officials following their meeting with OpenAI staff on Tuesday. Canada's AI minister Evan Solomon has described what occurred as a "failure". He told reporters that he was left "disappointed" after the meeting, saying that he did not hear "any substantial new safety protocols" from OpenAI. Solomon also opened the door for future legislation on the matter if OpenAI fails to implement changes quickly. "All options for us are on the table, because at the end of the day, Canadians want to feel safe," Solomon said after Tuesday's meeting. British Columbia Premier David Eby has said he believes the shooting would have been prevented if the company had alerted police to Van Rootselaar's account months ago. "They tragically missed the mark in [not] bringing this information forward. The consequences of that will be borne by the families of Tumbler Ridge for the rest of their lives," Eby told reporters on Thursday. Eby added that OpenAI CEO Sam Altman has agreed to meet to discuss the company's safety policies. "I think it's important that Mr Altman hear about how his team's decision not to bring this information forward has resulted in devastation," he said.
[7]
Canada to Probe What OpenAI Knew About Tumbler Ridge Shooter
Canadian officials have summoned leaders from OpenAI for a meeting following revelations that the company did not inform the authorities about a user whose account had been suspended months before she committed a mass murder in British Columbia. The country's minister of artificial intelligence, Evan Solomon, said on Monday that he would meet in Ottawa with senior safety officials from OpenAI on Tuesday seeking explanations about safety protocols and thresholds for when information is passed on to the police. Mr. Solomon said he was "deeply disturbed" by what he had learned of the company's actions involving Jesse Van Rootselaar, the 18-year-old who the authorities say killed eight people in the rural community of Tumbler Ridge, British Columbia. Ms. Van Rootselaar shot and killed her mother and half brother at the family home this month before driving to a school and killing five children and one educator. Two other students were injured, one of whom remains in serious condition in a Vancouver children's hospital. The suspect killed herself at the school as police officers responded to the shooting, the authorities said. Ms. Van Rootselaar displayed a fascination with weapons and extreme violence, according to a review of her social media accounts by The New York Times, and documented her experiences with mental health issues. Messages sent by Ms. Van Rootselaar to her ChatGPT chatbot raised flags internally at OpenAI last June, according to the company. After the company's abuse detection system, which uses automated tools and investigations by staff members, picked up on concerning messages from her account, Ms. Van Rootselaar was banned from the platform, the company said. It did not provide details about the messages. Ms. Van Rootselaar's use of ChatGPT before the mass shooting was first reported by The Wall Street Journal.
OpenAI said it had considered informing law enforcement about the shooter's account but ultimately decided not to do so because the company determined that there was no credible or imminent planning on the part of the user. OpenAI says it tries to balance public safety against protecting the privacy of users. It says it also wants to avoid being overly aggressive about issuing warnings that could cause distress by leading to law enforcement officials showing up unannounced at a user's home. But the company's decision not to reach out to the authorities in this case, according to The Journal, raised concerns among some employees. OpenAI said that it did contact the Royal Canadian Mounted Police with information about Ms. Van Rootselaar's account activity after the company learned of the mass shooting. The Royal Canadian Mounted Police, the federal agency leading the shooting investigation, is seeking an order to force relevant digital platforms and artificial intelligence companies to preserve potential evidence in the Tumbler Ridge case. OpenAI does not have an office in Canada, but has been courting officials as part of the company's efforts to expand. Weeks before the Feb. 10 shooting, representatives from OpenAI had scheduled a Feb. 11 meeting with officials in British Columbia to discuss a potential office in the province. The day after that meeting, the company asked for contact information for the police, according to a statement from the premier's office, but did not alert the government that it might have potential evidence about the shootings. "Reports that allege OpenAI had related intelligence before the shootings in Tumbler Ridge took place are profoundly disturbing for the victims' families and all British Columbians," David Eby, the premier of British Columbia, said in a statement. "We will use all powers of government to ensure that police have the tools they need to investigate every aspect of this horrific tragedy," Mr. Eby said.
(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)
[8]
Canada Summons OpenAI Execs on Shooting Suspect's ChatGPT Use
Canada summoned OpenAI executives after the company debated referring a ChatGPT user to police but ultimately didn't -- months before the teenager became the sole suspect in a mass shooting. Jesse Van Rootselaar, 18, was named by police as the suspected killer of six children and two adults in the remote town of Tumbler Ridge, British Columbia, in one of Canada's worst-ever mass shootings. Van Rootselaar is also believed to have died by suicide following the attack earlier this month. OpenAI said Friday that Van Rootselaar's ChatGPT account was flagged in June 2025 by systems that scan for misuse, including potential violent activity. The company considered referring the account to law enforcement at the time, but found no credible or imminent threat and determined it didn't meet the threshold. The account was subsequently banned. The artificial intelligence giant's senior safety executives will travel from the US to meet AI Minister Evan Solomon in Ottawa on Tuesday, he said at a news conference Monday, after his team met with company representatives a day earlier. Solomon said media reports on OpenAI's internal deliberations were "deeply disturbing," adding the reports suggested the company "did not contact law enforcement in a timely manner." The Wall Street Journal first reported OpenAI's identification of Van Rootselaar, citing anonymous sources who said the alleged killer "described scenarios involving gun violence" over several days. That triggered an internal debate among roughly a dozen staffers, some of whom urged contacting police, the report said. Solomon pointed to legislation in development including on privacy and so-called online harms, and said he's working closely with officials in the justice, public safety and culture departments, as well as the province of BC. 
"We are making sure that all options are on the table to make sure that Canadians are kept safe," he said, adding "we will see" what OpenAI says about its protocols and escalation methodology. "Our job and our duty is to make sure Canadians are protected."
[9]
OpenAI outlines steps to boost safety measures in response to Canada school shooting
TORONTO, Feb 26 (Reuters) - OpenAI said on Thursday that it will set up a direct point of contact with Canadian law enforcement and improve detection of repeat policy violators, among steps to boost its safety protocols in the wake of a recent school shooting in Canada. The ChatGPT maker detailed the steps in a letter to Canada's minister in charge of artificial intelligence. Reporting by Ryan Patrick Jones, Bhargav Acharya and Ismail Shakil; Editing by Caroline Stauffer
[10]
ChatGPT-maker OpenAI safety representatives summoned to Canada after school shooting
TORONTO (AP) -- Representatives of ChatGPT-maker OpenAI have been summoned to Ottawa after the company said last week that it considered but didn't alert Canadian police about the activities of a person who months later committed one of the worst school shootings in the country's history. Artificial Intelligence Minister Evan Solomon said Monday that he expects the company's top safety representatives to explain its protocols and how it decides to forward cases to law enforcement when he meets with them on Tuesday. OpenAI said the company identified the account of Jesse Van Rootselaar last June via abuse detection efforts for "furtherance of violent activities." The San Francisco technology company said that it considered whether to refer the account to the Royal Canadian Mounted Police, or RCMP, but determined at the time that the account activity didn't meet a threshold for referral to law enforcement. OpenAI banned the account in June for violating its usage policy. The 18-year-old killed eight people in a remote part of British Columbia this month and died from a self-inflicted gunshot wound. OpenAI said that the threshold for referring a user to law enforcement is whether the case involves an imminent and credible risk of serious physical harm to others. The company said that it didn't identify credible or imminent planning. The Wall Street Journal first reported OpenAI's revelation, reporting that about a dozen employees debated informing Canadian police. OpenAI said that it wasn't until after learning of the school shooting that employees reached out to the RCMP with information on the individual and their use of ChatGPT. Solomon said that he contacted OpenAI immediately when he read the reports that OpenAI didn't contact law enforcement in a timely manner. "I have summoned the senior safety team from OpenAI to come here to Ottawa from the United States," Solomon said.
"Canadians expect, first of all, that their children particularly are kept safe and these organizations act in a responsible manner." Solomon said that some of his representatives already met with some OpenAI officials on Sunday. He wouldn't say whether the Canadian government intends to regulate AI chatbots like ChatGPT, but insisted that all options are on the table. Police said Van Rootselaar first killed her mother and stepbrother at the family home before attacking the nearby school. Van Rootselaar had a history of mental health contacts with police. The motive for the shooting remains unclear. The town of Tumbler Ridge in the Canadian Rockies is more than 1,000 kilometers (600 miles) northeast of Vancouver, near the provincial border with Alberta. Police said the victims included a 39-year-old teaching assistant and five students, ages 12 to 13. The attack was Canada's deadliest rampage since 2020, when a gunman in Nova Scotia killed 13 people and set fires that left another nine dead.
[11]
Canada summons OpenAI senior staff over Tumbler Ridge shooting
Canada's minister for artificial intelligence has summoned senior staff from OpenAI to a meeting on Tuesday over the mass shooting in Tumbler Ridge, British Columbia, in which eight people were killed including six young children. The company said last week it banned a ChatGPT account owned by the shooting suspect more than half a year before the attack but did not alert authorities at the time as it did not meet a serious harm threshold. AI Minister Evan Solomon said the OpenAI staff will be asked to discuss "safety protocols" and when harmful posts are relayed to law enforcement. The suspect in the 10 February attack was identified by police as 18-year-old Jesse Van Rootselaar. The Royal Canadian Mounted Police said it is still investigating the incident, including "a thorough review of the content and electronic devices, as well as social media and online activities" related to the suspect. In a statement to the BBC, the RCMP confirmed that OpenAI had reached out after the incident regarding the suspect's activity on its platforms. The Wall Street Journal first reported on Friday that Van Rootselaar's account was banned for troubling posts, including ones that featured scenarios of gun violence. Solomon, Canada's AI minister, told reporters on Monday that he was very disturbed by the revelation, and that his team reached out to OpenAI over the weekend for "an explanation about the situation". He added that he will be meeting with OpenAI's senior safety team, who are flying from the US to Ottawa for a Tuesday evening meeting. "We will have a sit-down meeting to have an explanation of their safety protocols and their thresholds of escalation to police so we have a better understanding of what's happening and what they do," he said. The BBC has reached out to OpenAI for comment on the meeting.
OpenAI has said that it did not alert authorities to the suspect's account because its usage did not meet its threshold of a credible or imminent plan for serious physical harm to others. It said its thoughts were with everyone affected by the tragedy and that following the attack it had "proactively" contacted Canadian police with information on the suspect. According to the Wall Street Journal, "about a dozen staffers debated whether to take action on Van Rootselaar's posts". Some had identified the suspect's usage of the AI tool as an indication of real world violence and encouraged leaders to alert authorities, the US outlet reported. But, it said, leaders of the company decided not to do so. The attack, which occurred at the suspect's residence and a secondary school in Tumbler Ridge, is one of the deadliest mass shootings in Canadian history. Police said Van Rootselaar was a local resident of the town of about 2,300 people, and was known to law enforcement due to a history of mental health-related visits to the suspect's home over the years.
[12]
Canada seeks answers from OpenAI for failing to alert police after suspending school shooter's account
Company had suspended account of Tumbler Ridge shooter in June 2025 over 'furtherance of violent activities' Canada's artificial intelligence minister says he has summoned representatives from the technology company OpenAI over its decision not to alert police after suspending the account of a user who became the perpetrator of one of the country's worst-ever school shootings. Evan Solomon says he is "deeply disturbed" by reports the company, which operates the popular ChatGPT chatbot, suspended the account of Jesse Van Rootselaar over the "furtherance of violent activities" in June 2025 but did not reach out to Canadian law enforcement. On 10 February, the 18-year-old killed eight people in the town of Tumbler Ridge. Among the victims were five students, aged 12 to 13, and a 39-year-old teaching assistant. Before attacking the school, Van Rootselaar killed her mother and half-brother at their nearby home. The shooter had described violent scenarios involving guns to ChatGPT over several days in June, which an automated review system flagged, according to the Wall Street Journal. But the San Francisco tech company said it felt the account activity did not show "credible or imminent planning", so it banned her account but did not notify authorities in Canada. Solomon told reporters he contacted OpenAI over the weekend to arrange a meeting in Ottawa and expects the company's top safety representatives to explain how it decides to forward cases to law enforcement. "They will come here [Tuesday], and we will have a sit-down meeting to have an explanation of their safety protocols and when they escalate and their thresholds of escalation to police, so we have a better understanding of what's happening and what they do," he said. Canada's federal government is currently weighing how it might - if at all - regulate the use of immensely popular artificial intelligence chatbots, including the extent to which minors can freely use the products.
Last week, the Wall Street Journal reported that staff at the tech company had considered alerting Canadian police last year about the activities of Van Rootselaar. OpenAI said in a statement that after learning of the school shooting, employees reached out to the RCMP with information on the individual and their use of ChatGPT. Van Rootselaar also used the game Roblox to create a virtual mall full of weapons that allowed players to shoot one another in advance of the Tumbler Ridge attack. While the company framed its decision to reach out to the RCMP as proactive, its handling of the issue has nonetheless come under fire. British Columbia's provincial government confirmed to the Guardian that while a representative of OpenAI met with officials one day after the shooting in a pre-planned meeting, the company did not reveal that it had suspended the shooter's ChatGPT account months earlier due to its violent nature. The meeting was first reported by the Globe and Mail. It was only two days after the mass shooting that representatives with OpenAI reached out to the province for help in contacting the RCMP. David Eby, the British Columbia premier, said in a statement the pain the families are enduring is "unimaginable" and that revelations OpenAI had "related intelligence" before the shooting are "profoundly disturbing for the victims' families and all British Columbians".
[13]
Canada tells OpenAI to boost safety measures or be forced to by government
OTTAWA, Feb 25 (Reuters) - Canadian ministers told OpenAI that if it did not quickly boost its safety protocols in the wake of a recent school shooting, Ottawa would effect the change through legislation, a top official said on Wednesday. Ottawa summoned OpenAI's safety team for talks on Tuesday after the ChatGPT maker said it had not contacted police about an account that it banned belonging to an alleged mass shooter. Jesse Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life in a small town in British Columbia. OpenAI said it banned her account last year on ChatGPT for policy violations, which it said did not meet internal criteria for reporting to law enforcement. "The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser told reporters. OpenAI was not immediately available for comment. ONLINE HATE CRACKDOWN In 2024, Canada's Liberal government introduced draft legislation to crack down on online hate, but the effort stalled amid criticism it was too broad in scope. Ministers say they will try again this year with more focused measures. "Anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law," Prime Minister Mark Carney told reporters. Van Rootselaar, who police say was born male but identified as a woman and began transitioning six years ago, had a history of mental health problems. The killings took place in Tumbler Ridge, British Columbia, a town with about 2,400 people. "We were really disturbed by the reports that there might have been an opportunity to escalate this to law enforcement ... 
and we want to make sure if any company has that opportunity, they would escalate further," said Evan Solomon, the federal minister in charge of artificial intelligence. On Tuesday, OpenAI said it would shortly update Ottawa on what additional steps it was taking. OpenAI says it banned Van Rootselaar's account in 2025 after it was flagged by systems that identify "misuses of our models in furtherance of violent activities." The company considered contacting police, but determined the account did not meet the threshold of posing an imminent and credible risk of serious physical harm to others. Crime experts noted that while greater scrutiny of AI platforms and social media is necessary, police or other authorities may have missed chances to avert the tragedy in British Columbia. Police had previously removed guns from Van Rootselaar's home, though they were later returned. Reporting by David Ljunggren; Editing by Paul Simao
[14]
OpenAI Flagged a Mass Shooter's Troubling Conversations With ChatGPT Before the Incident, Decided Not to Warn Police
Employees at OpenAI urged leaders to alert the police, but they opted not to. A grim scoop from the Wall Street Journal: an automated review system at OpenAI flagged disturbing conversations that a future mass shooter was having with the company's flagship AI ChatGPT -- but, despite being urged by employees at the company to warn law enforcement, OpenAI leadership opted not to. The 18-year-old Jesse Van Rootselaar ultimately killed eight people before taking her own life, and injured 25 more, in British Columbia earlier this month, in a tragedy that shook Canada and the world. What we didn't know until today is that employees at OpenAI had already been aware of Van Rootselaar for months, and had debated alerting authorities because of the alarming nature of her conversations with ChatGPT. In the conversations with OpenAI's chatbot, according to sources at the company who spoke to the WSJ, Van Rootselaar "described scenarios involving gun violence." The sources say they recommended that the company warn local authorities, but that leadership at the company decided against it. An OpenAI spokesperson didn't dispute those claims, telling the newspaper that it banned Van Rootselaar's account, but decided that her interactions with ChatGPT didn't meet its internal criteria for escalating a concern with a user to police. "Our thoughts are with everyone affected by the Tumbler Ridge tragedy," the company said in a statement to the paper. The spokesperson also said that the company had reached out to assist Canadian police after the shooting took place. We've known since last year that OpenAI is scanning users' conversations for signs that they're planning a violent crime, though it's not clear whether it has yet successfully headed off an incident before it happened.
Its decision to engage in that monitoring in the first place reflects an increasingly long list of incidents in which ChatGPT users have fallen into severe mental health crises after becoming obsessed with the bot, sometimes resulting in involuntary commitment or jail -- as well as a growing number of suicides and murders, leading to numerous lawsuits. In a sense, how to deal with threatening online conduct is a longstanding question that every social platform has grappled with. But AI brings difficult new questions to the topic, since chatbots can engage with users directly -- sometimes even encouraging bad behavior or otherwise behaving inappropriately. Like many mass shooters, Van Rootselaar left behind a complicated digital legacy -- including on Roblox -- that investigators are still wading through.
[15]
Tumbler Ridge suspect's ChatGPT account banned before shooting
OpenAI banned a ChatGPT account owned by the suspect of a mass shooting in British Columbia more than half a year before the attack took place. The AI company said it had identified an account owned by Jesse Van Rootselaar in June 2025 under abuse and enforcement detection, which includes identifying accounts being used to further violence. OpenAI said it did not alert authorities to the account because its usage did not meet its threshold of a credible or imminent plan for serious physical harm to others. It said its thoughts were with everyone affected by the tragedy and that following the attack it had "proactively" contacted Canadian police with information on the suspect. Van Rootselaar is suspected of having shot and killed eight people in rural Tumbler Ridge on 10 February in one of the deadliest attacks in Canada's history. According to the Wall Street Journal, which first reported the story, "about a dozen staffers debated whether to take action on Van Rootselaar's posts." Some had identified the suspect's usage of the AI tool as an indication of real-world violence and encouraged leaders to alert authorities, the US outlet reported. But, it said, leaders of the company decided not to do so. In a statement, a spokesperson for OpenAI said: "In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities." They said the company would continue to support the police's investigation. The BBC has contacted the Royal Canadian Mounted Police for comment. OpenAI has said it will uphold its policy of alerting authorities only in cases of imminent risk because alerting them too broadly could cause unintended harm.
It has also said that it trains ChatGPT to discourage imminent real-world harm when it identifies a dangerous situation and to refuse to help people who are attempting to use the service for illegal activities. The company added that it is constantly reviewing its referral criteria with experts and that it is reviewing the case for improvements. The deadly attack on Tumbler Ridge Secondary School saw a further 27 people injured. Van Rootselaar was found dead from a self-inflicted gunshot wound at the school. Police said the suspect was born a biological male but identified as a woman. Van Rootselaar's mother and step-brother were among the victims of the shooting. Both were found dead at a local residence, police said. The motive for the attack is not yet known.
[16]
Canada summons OpenAI over failure to report mass shooter
Toronto (Canada) (AFP) - Canada has summoned senior leadership from OpenAI to Ottawa to explain the company's decision not to report suspicious online activity by an individual who later killed eight people this month. OpenAI has confirmed that in June 2025 its abuse-detection efforts identified a ChatGPT account linked to Jesse Van Rootselaar, an 18-year-old transgender woman who murdered her mother, brother, and six people at a school in Tumbler Ridge, British Columbia, on February 10. The company told AFP that the account was identified through an investigative process that looks for usage related to violent activity. The account was banned that month, but the company did not inform Canadian police at the time. That decision was "very disturbing," Canada's Artificial Intelligence Minister Evan Solomon told reporters Monday in Ottawa. "I have summoned the senior safety team from OpenAI in the United States to come here to Ottawa," Solomon said. "They will come here tomorrow (Tuesday), and we will have a sit-down meeting to get an explanation of their safety protocols," he added. OpenAI has said it uses a very high bar when deciding whether to involve law enforcement after identifying a suspicious account. Concerning Van Rootselaar, it decided not to inform Canadian police because her ChatGPT usage did not point toward credible or imminent planning of an attack. "Our thoughts are with everyone affected by the Tumbler Ridge tragedy," the company said in a statement last week. "We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we'll continue to support their investigation," it added. Solomon said he "immediately" contacted OpenAI when he first read media reports that the company "did not contact law enforcement in a timely manner."
He did not specify what actions or new legislation Ottawa might consider to regulate the use of artificial intelligence moving forward, but said "all options are on the table." Canada was shocked by the shootings in Tumbler Ridge, a small picturesque mining town built four decades ago, 1,180 kilometers (733 miles) north of Vancouver. Van Rootselaar's victims at the school included five children and a teacher. The shooter died there of a self-inflicted gunshot wound, according to police. She had a history of mental-health challenges, and the RCMP had previously visited her home. Unlike the United States, Canada has strict gun laws and mass shootings are extremely rare. The killings in Tumbler Ridge were among the worst outbursts of violence in Canadian history.
[17]
OpenAI's ban of Canada school shooter's account raises scrutiny of other online activity
OTTAWA, Feb 25 (Reuters) - OpenAI's admission it banned the ChatGPT account of mass shooter Jesse Van Rootselaar months before the 18-year-old killed eight people and herself is drawing more scrutiny to her past online activity and raising questions about whether opportunities were missed to prevent one of Canada's worst-ever mass killings. OpenAI's decision not to report Van Rootselaar to police prompted Canada's Artificial Intelligence Minister Evan Solomon to summon company officials to Ottawa this week to explain their safety protocols. The shooting in the small British Columbia town of Tumbler Ridge is the latest tragedy in which critics have argued interactions with chatbots may have forewarned of or even encouraged violence. Crime experts noted that while greater scrutiny of AI platforms and social media is necessary, police or other authorities may have missed chances to avert the tragedy. Police had previously removed guns from Van Rootselaar's home, though they were later returned. Police also said they were aware of her history of mental health issues. Van Rootselaar began the attack by killing her mother and sibling at home, before shooting dead an educator and five students, while two others were hospitalized with serious injuries. The Royal Canadian Mounted Police said the investigation is still active and some questions are subject to relevant legislation or court processes. "This was clearly a household where there were many problems," said Patrick Watson, a criminology professor at the University of Toronto unconnected to the case. "But we also need far more scrutiny of the companies who are creating these new platforms, which are essentially becoming a new public sphere with very little accountability." In a since-deleted Reddit post, Van Rootselaar said she had been diagnosed with numerous mental health issues, including attention deficit hyperactivity disorder, depression, obsessive compulsive disorder and was on the autism spectrum.
"I went crazy and burnt my house down my second time trying shrooms but still have a desire to try alternatives," Van Rootselaar wrote. Van Rootselaar also previously created a game using the Roblox Studio app, involving shooting other characters at a mall. Roblox told Reuters that Van Rootselaar's account and its content were removed from the Roblox Studio app the day after the Tumbler Ridge massacre, and that the game had only seven visits. Open AI said in a statement it had banned Van Rootselaar's ChatGPT account last June after identifying "misuses of our models in furtherance of violent activities" and considered whether to refer her to law enforcement. The company ultimately decided "the account activity did not meet the higher threshold required for referral," mainly because OpenAI was not able to identify credible or imminent planning. The company said intervening in these situations can be distressing for young people and their families and may also raise privacy concerns. MISSED OPPORTUNITY Tracy Vaillancourt, a professor at the University of Ottawa who specializes in youth mental health and violence prevention, said OpenAI's failure to refer Van Rootselaar to police was "a missed opportunity," but acknowledged there were challenges in protecting users' privacy. "People using ChatGPT may worry that it's going to spy on them, but AI is so powerful there should be a way to improve how technology and we as a society, are able to reduce credible threats," Vaillancourt said. Cynthia Khoo, a technology and human rights lawyer, warned "it would be a mistake to start down a path where AI companies might become deputized as a private surveillance wing of law enforcement," saying that invasions of privacy would disproportionately hit already marginalized groups. Van Rootselaar was born male but identified as a female and began transitioning six years ago, police said. A 2023 report from the U.S. 
government showed that more than 95% of mass shooters are male and that transgender people account for about 2%. British Columbia Premier David Eby said the Tumbler Ridge shooting could have been avoided if OpenAI had warned authorities about Van Rootselaar's violent online activity and called for more transparency from the tech giant. "It looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia," he said Monday. OpenAI said in its statement the shooting was "a devastating tragedy" and that it was doing all it could to support the ongoing investigation. "We reached out to law enforcement immediately after the identity of the shooter was made public and we are engaged with the (police) to support their ongoing work," the company said. Reporting by Maria Cheng, Ryan Patrick Jones; Editing by Caroline Stauffer and Lincoln Feast.
[18]
ChatGPT-maker OpenAI safety representatives summoned to Canada after school shooting
TORONTO (AP) -- Representatives of ChatGPT-maker OpenAI have been summoned to Ottawa after the company said last week that it considered but didn't alert Canadian police about the activities of a person who months later committed one of the worst school shootings in the country's history. Artificial Intelligence Minister Evan Solomon said Monday that he expects the company's top safety representatives to explain its protocols and how it decides to forward cases to law enforcement when he meets with them on Tuesday. OpenAI said that last June the company identified the account of Jesse Van Rootselaar via abuse detection efforts for "furtherance of violent activities." The San Francisco technology company said that it considered whether to refer the account to the Royal Canadian Mounted Police, or RCMP, but determined at the time that the account activity didn't meet a threshold for referral to law enforcement. OpenAI banned the account in June for violating its usage policy. The 18-year-old killed eight people in a remote part of British Columbia this month and died from a self-inflicted gunshot wound. OpenAI said that the threshold for referring a user to law enforcement is whether the case involves an imminent and credible risk of serious physical harm to others. The company said that it didn't identify credible or imminent planning. The Wall Street Journal first reported OpenAI's revelation, reporting that about a dozen employees debated informing Canadian police. OpenAI said that it wasn't until after learning of the school shooting that employees reached out to the RCMP with information on the individual and their use of ChatGPT. Solomon said that he contacted OpenAI immediately when he read the reports that OpenAI didn't contact law enforcement in a timely manner. "I have summoned the senior safety team from OpenAI to come here to Ottawa from the United States," Solomon said.
"Canadians expect, first of all, that their children particularly are kept safe and these organizations act in a responsible manner." Solomon said that some of his representatives already met with some OpenAI officials on Sunday. He wouldn't say whether the Canadian government intends to regulate AI chatbots like ChatGPT, but insists that all options are on the table. Police said Van Rootselaar first killed her mother and stepbrother at the family home before attacking the nearby school. Van Rootselaar had a history of mental health contacts with police. The motive for the shooting remains unclear. The town of Tumbler Ridge in the Canadian Rockies is more than 1,000 kilometers (600 miles) northeast of Vancouver, near the provincial border with Alberta. Police said the victims included a 39-year-old teaching assistant and five students, ages 12 to 13. The attack was Canada's deadliest rampage since 2020, when a gunman in Nova Scotia killed 13 people and set fires that left another nine dead.
[20]
Canada to press OpenAI safety officials in wake of school shooting
OTTAWA, Feb 24 (Reuters) - Canada will press OpenAI officials on Tuesday about their safety protocols after it emerged the ChatGPT maker did not contact police about an account it banned belonging to an alleged mass shooter, a government minister said. Jesse Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life. OpenAI said it banned her account last year on ChatGPT for policy violations, which it said did not meet internal criteria for reporting to law enforcement. Evan Solomon, the federal minister in charge of artificial intelligence, has summoned OpenAI's top safety officials for a meeting in Ottawa. "I'm hoping (they) ... will tell us more details about their safety protocols, their escalation thresholds and how they keep Canadians safe, and if they have a threat that they perceive, what the technology does and what the human process does," he told reporters. "We do want to know exactly what OpenAI does so Canadians have an understanding of what's going on and some transparency." A spokesperson for Solomon said the minister would not speak to the media after the meeting, since it was scheduled to take place late on Tuesday. In 2024, the Liberal government introduced draft legislation to crack down on online hate, but the effort stalled amid criticism it was too broad in scope. Ministers say they will try again this year with a revised bill. "All options are on the table when it comes to understanding what we can do about AI chatbots," said Solomon. Van Rootselaar, who police say was born male but identified as a woman and began transitioning six years ago, had a history of mental health problems. The killings took place in Tumbler Ridge, British Columbia, a town of around 2,400. OpenAI says it banned Van Rootselaar's account in 2025 after it was flagged by systems that identify "misuses of our models in furtherance of violent activities." 
The company considered contacting police, but determined the account did not meet the threshold of posing an imminent and credible risk of serious physical harm to others. Reporting by David Ljunggren; Editing by Rod Nickel
[21]
OpenAI outlines steps to boost safety measures in response to Canada school shooting
The ChatGPT maker detailed the steps in a letter to Canada's minister in charge of artificial intelligence, Evan Solomon. OpenAI said on Thursday it will set up a direct point of contact with Canadian law enforcement and improve detection of repeat violators of its "violent activities" policy to boost safety protocols in the wake of a recent school shooting. Ann O'Leary, OpenAI's vice president of global policy, wrote the letter after Canadian ministers this week urged the ChatGPT maker to boost its safety protocols quickly and warned Ottawa would effect change through legislation if the company did not. "We remain committed to cooperating with law enforcement authorities on the investigation into the Tumbler Ridge tragedy, and we are committed to an ongoing partnership with federal and provincial governments," O'Leary said, referring to the town in British Columbia where the shooting occurred. Ottawa summoned OpenAI's safety team for talks this week after the company said it had not contacted police about an account belonging to the alleged shooter, Jesse Van Rootselaar, that it had banned. Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life in Tumbler Ridge. OpenAI said it banned her ChatGPT account last year for policy violations. The company said the account was flagged by systems that identify "misuses of our models in furtherance of violent activities" but did not provide further details. OpenAI said the issues did not meet its internal criteria for reporting to law enforcement. O'Leary said on Thursday that under the company's "enhanced law enforcement referral protocol," it would have referred the initial account ban in June to police if it were discovered now. She also said the company had discovered that Van Rootselaar had used a second account, which it shared with law enforcement.
"We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest-risk offenders," O'Leary said. The company also committed to periodically assessing the thresholds used by its automated systems for identifying potential violent activities by users. Crime experts have noted that while greater scrutiny of AI platforms and social media is necessary, police or other authorities may have missed additional chances to avert one of Canada's worst mass killings. Police said Van Rootselaar had a history of mental health problems and that they had removed and later returned guns from her home. Minister Solomon's office did not immediately respond to a request for comment.
[22]
OpenAI's Ban of Canada School Shooter's Account Raises Scrutiny of Other Online Activity
By Maria Cheng and Ryan Patrick Jones OTTAWA, Feb 25 (Reuters) - OpenAI's admission it banned the ChatGPT account of mass shooter Jesse Van Rootselaar months before the 18-year-old killed eight people and herself is drawing more scrutiny to her past online activity and raising questions about whether opportunities were missed to prevent one of Canada's worst-ever mass killings. OpenAI's decision not to report Van Rootselaar to police prompted Canada's Artificial Intelligence Minister Evan Solomon to summon company officials to Ottawa this week to explain their safety protocols. The shooting in the small British Columbia town of Tumbler Ridge is the latest tragedy in which critics have argued interactions with chat bots may have forewarned of or even encouraged violence. Crime experts noted that while greater scrutiny of AI platforms and social media is necessary, police or other authorities may have missed chances to avert the tragedy. Police had previously removed guns from Van Rootselaar's home, though they were later returned. Police also said they were aware of her history of mental health issues. Van Rootselaar began the attack by killing her mother and sibling at home, before shooting dead an educator and five students, while two others were hospitalized with serious injuries. The Royal Canadian Mounted Police said the investigation is still active and some questions are subject to relevant legislation or court processes. "This was clearly a household where there were many problems," said Patrick Watson, a criminology professor at the University of Toronto unconnected to the case. "But we also need far more scrutiny of the companies who are creating these new platforms, which are essentially becoming a new public sphere with very little accountability." 
In a since-deleted Reddit post, Van Rootselaar said she had been diagnosed with numerous mental health issues, including attention deficit hyperactivity disorder, depression and obsessive compulsive disorder, and said she was on the autism spectrum. "I went crazy and burnt my house down my second time trying shrooms but still have a desire to try alternatives," Van Rootselaar wrote. Van Rootselaar also previously created a game using the Roblox Studio app, involving shooting other characters at a mall. Roblox told Reuters that Van Rootselaar's account and its content were removed from the Roblox Studio app the day after the Tumbler Ridge massacre, and that the game had only seven visits. OpenAI said in a statement it had banned Van Rootselaar's ChatGPT account last June after identifying "misuses of our models in furtherance of violent activities" and considered whether to refer her to law enforcement. The company ultimately decided "the account activity did not meet the higher threshold required for referral," mainly because OpenAI was not able to identify credible or imminent planning. The company said intervening in these situations can be distressing for young people and their families and may also raise privacy concerns. MISSED OPPORTUNITY Tracy Vaillancourt, a professor at the University of Ottawa who specializes in youth mental health and violence prevention, said OpenAI's failure to refer Van Rootselaar to police was "a missed opportunity," but acknowledged there were challenges in protecting users' privacy. "People using ChatGPT may worry that it's going to spy on them, but AI is so powerful there should be a way to improve how technology and we as a society, are able to reduce credible threats," Vaillancourt said. 
Cynthia Khoo, a technology and human rights lawyer, warned "it would be a mistake to start down a path where AI companies might become deputized as a private surveillance wing of law enforcement," saying that invasions of privacy would disproportionately hit already marginalized groups. Van Rootselaar was born male but identified as a female and began transitioning six years ago, police said. A 2023 report from the U.S. government showed that more than 95% of mass shooters are male and that transgender people account for about 2%. British Columbia Premier David Eby said the Tumbler Ridge shooting could have been avoided if OpenAI had warned authorities about Van Rootselaar's violent online activity and called for more transparency from the tech giant. "It looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia," he said Monday. OpenAI said in its statement the shooting was "a devastating tragedy" and that it was doing all it could to support the ongoing investigation. "We reached out to law enforcement immediately after the identity of the shooter was made public and we are engaged with the (police) to support their ongoing work," the company said. (Reporting by Maria Cheng, Ryan Patrick Jones; Editing by Caroline Stauffer and Lincoln Feast.)
[23]
OpenAI's ban of Canada school shooting suspect's account raises scrutiny of other online activity
OpenAI's admission that it banned the ChatGPT account of mass shooting suspect Jesse Van Rootselaar months before the 18-year-old allegedly killed eight people and herself is drawing scrutiny to her past online activity and raising questions about whether opportunities were missed to prevent one of Canada's worst mass killings. OpenAI's decision not to report Van Rootselaar to police prompted Canada's Minister of Artificial Intelligence Evan Solomon to summon company officials to Ottawa this week and demand new safety measures from the company. The shooting in the British Columbia town of Tumbler Ridge is the latest tragedy in which critics have argued interactions with chatbots may have forewarned of or even encouraged violence.
[24]
Canadian officials to meet with OpenAI safety team after school shooting
Canada summoned top officials from OpenAI for a meeting about the company's safety protocols, a Canadian official said on Monday, after the ChatGPT maker said it did not reach out to police about an account it banned last year belonging to mass shooter Jesse Van Rootselaar. Van Rootselaar, 18, killed eight people in a small British Columbia town on February 10 and then took her own life. OpenAI said it banned her account last year on the chatbot ChatGPT for policy violations which it said did not meet internal criteria for reporting to law enforcement. Senior members of OpenAI's safety team will travel from the United States to Ottawa for a meeting on Tuesday, Artificial Intelligence Minister Evan Solomon told reporters, "to have an explanation of their safety protocols, and when they escalate, and their threshold of escalation to police." OpenAI confirmed the meeting in a statement, saying that senior leaders from the company will discuss with Canadian government officials "our overall approach to safety, safeguards we have in place, and how we continuously work to strengthen them". "This was a devastating tragedy, and we are doing all we can to support the ongoing investigation," the statement said. The case has intensified scrutiny of what obligations tech companies have to report threatening user activity to law enforcement. Shooter's account previously flagged Van Rootselaar, who police say was born male but identified as a woman and began transitioning six years ago, had a series of previous mental-health-related interactions with police. The killings took place in Tumbler Ridge, British Columbia, a town of around 2,400 in the Canadian Rockies. 
OpenAI previously said it banned Van Rootselaar's account in June 2025 after it was flagged by systems that identify "misuses of our models in furtherance of violent activities." The company considered referring the account to police, but determined it didn't meet the threshold of posing an imminent and credible risk of serious physical harm to others, it said. Solomon said "all options are on the table," when asked what Ottawa might do to protect Canadians from online harm, citing a forthcoming bill on online privacy and data. He did not give details. "Canadians expect, first of all, that children, particularly, are kept safe and that these organizations act in a responsible manner," Solomon said. The company said it contacted the Royal Canadian Mounted Police after the shooting to provide information about Van Rootselaar's use of ChatGPT. RCMP Staff Sergeant Kris Clark confirmed OpenAI reached out to the police force after the shooting, but did not provide additional details.
[25]
Canada to Press OpenAI Safety Officials in Wake of School Shooting
OTTAWA, Feb 24 (Reuters) - Canada will press OpenAI officials on Tuesday about their safety protocols after it emerged the ChatGPT maker did not contact police about an account it banned belonging to an alleged mass shooter, a government minister said. Jesse Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life. OpenAI said it banned her account last year on ChatGPT for policy violations, which it said did not meet internal criteria for reporting to law enforcement. Evan Solomon, the federal minister in charge of artificial intelligence, has summoned OpenAI's top safety officials for a meeting in Ottawa. "I'm hoping (they) ... will tell us more details about their safety protocols, their escalation thresholds and how they keep Canadians safe, and if they have a threat that they perceive, what the technology does and what the human process does," he told reporters. "We do want to know exactly what OpenAI does so Canadians have an understanding of what's going on and some transparency." A spokesperson for Solomon said the minister would not speak to the media after the meeting, since it was scheduled to take place late on Tuesday. In 2024, the Liberal government introduced draft legislation to crack down on online hate, but the effort stalled amid criticism it was too broad in scope. Ministers say they will try again this year with a revised bill. "All options are on the table when it comes to understanding what we can do about AI chatbots," said Solomon. Van Rootselaar, who police say was born male but identified as a woman and began transitioning six years ago, had a history of mental health problems. The killings took place in Tumbler Ridge, British Columbia, a town of around 2,400. OpenAI says it banned Van Rootselaar's account in 2025 after it was flagged by systems that identify "misuses of our models in furtherance of violent activities." 
The company considered contacting police, but determined the account did not meet the threshold of posing an imminent and credible risk of serious physical harm to others. (Reporting by David Ljunggren; Editing by Rod Nickel)
[26]
Canadian trans shooter's disturbing ChatGPT messages alarmed...
ChatGPT-maker OpenAI banned Canadian transgender school shooter Jesse Van Rootselaar's account over his violent behavior on the platform last year -- but never alerted authorities before he gunned down eight people in one of the country's deadliest mass killings. The 18-year-old high school dropout carried out the second-worst school shooting in Canadian history last week, first slaughtering his mother and stepbrother at home and then storming into Tumbler Ridge Secondary School, where he killed six people and injured 25 more before turning the gun on himself. The San Francisco tech company said it spotted the deranged shooter's alarming profile in June through abuse-monitoring systems and considered referring the account to the Royal Canadian Mounted Police. However, OpenAI concluded the disturbing activity didn't meet the bar for notifying law enforcement and banned the account for policy violations. Following the mass killing, the company reached out to law enforcement. "Our thoughts are with everyone affected by the Tumbler Ridge tragedy," an OpenAI spokesperson said. "We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we'll continue to support their investigation." Canadian authorities confirmed they were contacted by the AI firm after the shooting, adding an investigation into Van Rootselaar's social media and online activity is underway. Van Rootselaar launched his horrifying attack at a private residence in the sleepy rural community of Tumbler Ridge on Feb. 10 before continuing the carnage at the school. Six people were found dead inside the school, and the bodies of Van Rootselaar's mother, 39, and stepbrother, 11, were discovered in a local residence, cops said. The victims included one female teacher, three female students and two male students.
[27]
Canada Says OpenAI's Safety Pledges Lack Details -- 2nd Update
OTTAWA--OpenAI's pledges to strengthen security protocols are missing key details including how they will be implemented, Canada's minister in charge of artificial intelligence said. Minister for Artificial Intelligence Evan Solomon is also demanding greater clarity on OpenAI's operations, including how troubling interactions with the ChatGPT chatbot are escalated, and how privacy considerations are balanced with public safety. Solomon issued a statement Friday, roughly 24 hours after OpenAI wrote to the minister and pledged to bolster safety protocols. OpenAI said that with the changes, it would have referred the account belonging to Jesse Van Rootselaar to police if it was discovered today. Police have identified Van Rootselaar as the suspect in a deadly school shooting in Tumbler Ridge, British Columbia that left eight dead and dozens injured. The Wall Street Journal has reported that OpenAI considered alerting Canadian law-enforcement authorities about interactions between Van Rootselaar and ChatGPT. OpenAI shut down Van Rootselaar's ChatGPT account after detecting a violation of its policy but didn't notify police. Solomon said he's also scheduled to speak to OpenAI Chief Executive Sam Altman next week. A spokesman for Solomon said the timing and location have yet to be determined. A meeting with Solomon marks the second senior Canadian politician Altman has agreed to speak with. British Columbia Premier David Eby said he's also set to speak to Altman, and Eby added he wants the CEO to be cognizant of the pain that families in Tumbler Ridge are feeling. A spokesperson for OpenAI confirmed that Altman has scheduled meetings with both Messrs. Solomon and Eby. Online platforms have long debated how to balance questions of privacy with public safety in their decisions to alert law enforcement about certain users. 
That debate has now pulled in the AI companies that power the chatbots to which people are confiding the most intimate details of their private thoughts and lives. In its letter to Solomon, OpenAI divulged that Van Rootselaar had a second ChatGPT account. Among OpenAI's pledges is a commitment to strengthen detection systems to prevent efforts to evade safeguards. Taylor Owen, a public-policy professor at Montreal's McGill University with a specialty in media and ethics, said OpenAI has now publicly acknowledged its previous safety protocols were inadequate, and that Van Rootselaar's ability to create a second account discloses a previously unknown system error. "It means the threshold that governed the original decision, the one that resulted in Canadian police not being contacted about violent content flagged by the company's own systems, was one the company itself now considers inadequate," Owen said. OpenAI's pledges, he added, follow a pattern among social-media platforms "where product safety changes come only after an incident forces them." Solomon said he will press Altman about ensuring the pledges are fully implemented and enforced. "We will be seeking further clarity on how human review is conducted and whether Canadian context and best practices are appropriately embedded in those decisions," the minister said. Solomon added that he intends to meet other major digital platforms in the coming weeks to advocate for a consistent approach on protecting youths and the public.
[28]
Canada Says OpenAI's Safety Pledges Lack Details -- Update
OTTAWA--OpenAI's pledges to strengthen security protocols are missing key details including how they will be implemented, Canada's minister in charge of artificial intelligence said. Minister for Artificial Intelligence Evan Solomon is also demanding greater clarity on OpenAI's operations, including how troubling interactions with the ChatGPT chatbot are escalated, and how privacy considerations are balanced with public safety. Solomon issued a statement Friday, roughly 24 hours after OpenAI wrote to the minister and pledged to bolster safety protocols. OpenAI said that with the changes, it would have referred the account belonging to Jesse Van Rootselaar to police if it was discovered today. Police have identified Van Rootselaar as the suspect in a deadly school shooting in Tumbler Ridge, British Columbia that left eight dead and dozens injured. The Wall Street Journal has reported that OpenAI considered alerting Canadian law-enforcement authorities about interactions between Van Rootselaar and ChatGPT. OpenAI shut down Van Rootselaar's ChatGPT account after detecting a violation of its policy but didn't notify police. Solomon said he's also scheduled to speak to OpenAI Chief Executive Sam Altman next week. A spokeswoman for OpenAI did not immediately respond to a request for comment. A spokesman for Solomon said the timing and location have yet to be determined. A meeting with Solomon marks the second senior Canadian politician Altman has agreed to speak with. British Columbia Premier David Eby said he's also set to speak to Altman, and Eby added he wants the CEO to be cognizant of the pain that families in Tumbler Ridge are feeling. Online platforms have long debated how to balance questions of privacy with public safety in their decisions to alert law enforcement about certain users. That debate has now pulled in the AI companies that power the chatbots to which people are confiding the most intimate details of their private thoughts and lives. 
In its letter to Solomon, OpenAI divulged that Van Rootselaar had a second ChatGPT account. Among OpenAI's pledges is a commitment to strengthen detection systems to prevent efforts to evade safeguards. Taylor Owen, a public-policy professor at Montreal's McGill University with a specialty in media and ethics, said OpenAI has now publicly acknowledged its previous safety protocols were inadequate, and that Van Rootselaar's ability to create a second account discloses a previously unknown system error. "It means the threshold that governed the original decision, the one that resulted in Canadian police not being contacted about violent content flagged by the company's own systems, was one the company itself now considers inadequate," Owen said. OpenAI's pledges, he added, follow a pattern among social-media platforms "where product safety changes come only after an incident forces them." Solomon said he will press Altman about ensuring the pledges are fully implemented and enforced. "We will be seeking further clarity on how human review is conducted and whether Canadian context and best practices are appropriately embedded in those decisions," the minister said. Solomon added that he intends to meet other major digital platforms in the coming weeks to advocate for a consistent approach on protecting youths and the public.
[29]
Canadian minister to meet with OpenAI's Altman to discuss safety measures after shooting
OTTAWA, Feb 27 (Reuters) - Canada's minister in charge of artificial intelligence said on Friday he will meet with OpenAI CEO Sam Altman next week to discuss how the ChatGPT maker plans to boost safety protocols after a recent school shooting in British Columbia. The Canadian government has urged OpenAI to boost its safety protocols quickly and warned Ottawa could effect change through legislation after the company said it had not contacted police about an account belonging to the alleged shooter, Jesse Van Rootselaar, that it had banned. "While we note their willingness to strengthen law enforcement referral protocols, establish direct points of contact with Canadian authorities, and enhance safeguards, we have not yet seen a detailed plan for how these commitments will be implemented in practice," Minister Evan Solomon said in a statement. Solomon was responding to a letter he received from OpenAI's vice president of global policy on Thursday in which the firm said it will set up a direct point of contact with Canadian law enforcement and improve detection of repeat violators of its "violent activities" policy to boost safety protocols. Solomon said he will meet with Altman "to seek further clarity and to ensure that the commitments made are translated into concrete action." Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life in Tumbler Ridge. OpenAI said it banned her ChatGPT account last year for policy violations. Solomon said he will also meet with other major platforms in Canada in the coming weeks. "All options remain on the table as we assess what further steps may be necessary," he added. (Reporting by Ryan Patrick Jones and Ismail Shakil; Editing by Rod Nickel)
[30]
Canada tells OpenAI to boost safety measures or be forced to by government
OTTAWA, Feb 25 (Reuters) - Canadian ministers told OpenAI that if it did not quickly boost its safety protocols in the wake of a recent school shooting, Ottawa would effect the change through legislation, a top official said on Wednesday. Ottawa summoned OpenAI's safety team for talks on Tuesday after the ChatGPT maker said it had not contacted police about an account that it banned belonging to an alleged mass shooter. Jesse Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life in a small town in British Columbia. OpenAI said it banned her account last year on ChatGPT for policy violations, which it said did not meet internal criteria for reporting to law enforcement. "The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser told reporters. OpenAI was not immediately available for comment. ONLINE HATE CRACKDOWN In 2024, Canada's Liberal government introduced draft legislation to crack down on online hate, but the effort stalled amid criticism it was too broad in scope. Ministers say they will try again this year with more focused measures. "Anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law," Prime Minister Mark Carney told reporters. Van Rootselaar, who police say was born male but identified as a woman and began transitioning six years ago, had a history of mental health problems. The killings took place in Tumbler Ridge, British Columbia, a town with about 2,400 people. "We were really disturbed by the reports that there might have been an opportunity to escalate this to law enforcement ... 
and we want to make sure if any company has that opportunity, they would escalate further," said Evan Solomon, the federal minister in charge of artificial intelligence. On Tuesday, OpenAI said it would shortly update Ottawa on what additional steps it was taking. OpenAI says it banned Van Rootselaar's account in 2025 after it was flagged by systems that identify "misuses of our models in furtherance of violent activities." The company considered contacting police, but determined the account did not meet the threshold of posing an imminent and credible risk of serious physical harm to others. Crime experts noted that while greater scrutiny of AI platforms and social media is necessary, police or other authorities may have missed chances to avert the tragedy in British Columbia. Police had previously removed guns from Van Rootselaar's home, though they were later returned. (Reporting by David Ljunggren; Editing by Paul Simao)
[31]
OpenAI's ban of Canada school shooter's account raises scrutiny of other online activity
OTTAWA, Feb 25 (Reuters) - OpenAI's admission it banned the ChatGPT account of mass shooter Jesse Van Rootselaar months before the 18-year-old killed eight people and herself is drawing more scrutiny to her past online activity and raising questions about whether opportunities were missed to prevent one of Canada's worst-ever mass killings. OpenAI's decision not to report Van Rootselaar to police prompted Canada's Artificial Intelligence Minister Evan Solomon to summon company officials to Ottawa this week to explain their safety protocols. The shooting in the small British Columbia town of Tumbler Ridge is the latest tragedy in which critics have argued interactions with chatbots may have forewarned of or even encouraged violence. Crime experts noted that while greater scrutiny of AI platforms and social media is necessary, police or other authorities may have missed chances to avert the tragedy. Police had previously removed guns from Van Rootselaar's home, though they were later returned. Police also said they were aware of her history of mental health issues. Van Rootselaar began the attack by killing her mother and sibling at home, before shooting dead an educator and five students, while two others were hospitalized with serious injuries. The Royal Canadian Mounted Police said the investigation is still active and some questions are subject to relevant legislation or court processes. "This was clearly a household where there were many problems," said Patrick Watson, a criminology professor at the University of Toronto unconnected to the case. "But we also need far more scrutiny of the companies who are creating these new platforms, which are essentially becoming a new public sphere with very little accountability." In a since-deleted Reddit post, Van Rootselaar said she had been diagnosed with numerous mental health issues, including attention deficit hyperactivity disorder, depression and obsessive compulsive disorder, and said she was on the autism spectrum. 
"I went crazy and burnt my house down my second time trying shrooms but still have a desire to try alternatives," Van Rootselaar wrote. Van Rootselaar also previously created a game using the Roblox Studio app, involving shooting other characters at a mall. Roblox told Reuters that Van Rootselaar's account and its content were removed from the Roblox Studio app the day after the Tumbler Ridge massacre, and that the game had only seven visits. OpenAI said in a statement it had banned Van Rootselaar's ChatGPT account last June after identifying "misuses of our models in furtherance of violent activities" and considered whether to refer her to law enforcement. The company ultimately decided "the account activity did not meet the higher threshold required for referral," mainly because OpenAI was not able to identify credible or imminent planning. The company said intervening in these situations can be distressing for young people and their families and may also raise privacy concerns. MISSED OPPORTUNITY Tracy Vaillancourt, a professor at the University of Ottawa who specializes in youth mental health and violence prevention, said OpenAI's failure to refer Van Rootselaar to police was "a missed opportunity," but acknowledged there were challenges in protecting users' privacy. "People using ChatGPT may worry that it's going to spy on them, but AI is so powerful there should be a way to improve how technology and we as a society, are able to reduce credible threats," Vaillancourt said. Cynthia Khoo, a technology and human rights lawyer, warned "it would be a mistake to start down a path where AI companies might become deputized as a private surveillance wing of law enforcement," saying that invasions of privacy would disproportionately hit already marginalized groups. Van Rootselaar was born male but identified as a female and began transitioning six years ago, police said. A 2023 report from the U.S. 
government showed that more than 95% of mass shooters are male and that transgender people account for about 2%. British Columbia Premier David Eby said the Tumbler Ridge shooting could have been avoided if OpenAI had warned authorities about Van Rootselaar's violent online activity and called for more transparency from the tech giant. "It looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia," he said Monday. OpenAI said in its statement the shooting was "a devastating tragedy" and that it was doing all it could to support the ongoing investigation. "We reached out to law enforcement immediately after the identity of the shooter was made public and we are engaged with the (police) to support their ongoing work," the company said. (Reporting by Maria Cheng, Ryan Patrick Jones; Editing by Caroline Stauffer and Lincoln Feast.)
[32]
Canadian officials express disappointment to OpenAI representatives in wake of school shooting
OTTAWA, Feb 24 (Reuters) - Canadian officials expressed disappointment that OpenAI representatives did not present new safety measures in a meeting on Tuesday after the ChatGPT maker said it did not contact police about an account it banned belonging to an alleged mass shooter. Jesse Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life in a small town in British Columbia. OpenAI said it banned her account last year on ChatGPT for policy violations, which it said did not meet internal criteria for reporting to law enforcement. Evan Solomon, the federal minister in charge of artificial intelligence, summoned OpenAI's top safety officials for a meeting in Ottawa. "We made it clear that Canadians expect credible warning signs of serious violence to be escalated in a timely and responsible way. Internal review alone is not sufficient when public safety is at stake," Solomon said in a statement after the meeting. "We expressed our disappointment that no substantial new safety measures were presented at this time. OpenAI indicated they will return shortly with more concrete proposals tailored to the Canadian context." Solomon said OpenAI confirmed the company was cooperating with Canadian police, though details of the ongoing investigation were not discussed. Public safety, culture and justice ministers also joined the meeting. OpenAI said it had taken steps in recent months to strengthen safeguards and made changes to law enforcement referral protocol for cases involving violent activities. "The ministers underscored that Canadians expect continued concrete action and we heard that message loud and clear," the company said in a statement. "We've committed to follow up in the coming days with an update on additional steps we're taking, as we continue to support law enforcement and work with the government on strengthening AI safety for all Canadians." 
In 2024, Canada's Liberal government introduced draft legislation to crack down on online hate, but the effort stalled amid criticism it was too broad in scope. Ministers say they will try again this year with a revised bill. Van Rootselaar, who police say was born male but identified as a woman and began transitioning six years ago, had a history of mental health problems. The killings took place in Tumbler Ridge, British Columbia, a town of around 2,400. OpenAI says it banned Van Rootselaar's account in 2025 after it was flagged by systems that identify "misuses of our models in furtherance of violent activities." The company considered contacting police, but determined the account did not meet the threshold of posing an imminent and credible risk of serious physical harm to others. (Reporting by David Ljunggren; Editing by Rod Nickel and Saad Sayeed)
[33]
Canada to Press OpenAI on Safety Protocols After Mass Shooting -- 2nd Update
OTTAWA--Canada's minister in charge of artificial intelligence says senior executives of OpenAI are coming to the Canadian capital on Tuesday regarding interactions between the suspect in a mass shooting in a Canadian town and OpenAI's ChatGPT chatbot. Evan Solomon said he demanded the meeting after The Wall Street Journal reported Friday that OpenAI considered alerting Canadian law enforcement authorities about the interactions. Some OpenAI employees worried the suspect's writings were a warning about potential real-world violence, the Journal reported. "We will have a sit-down meeting to have an explanation of their safety protocols ... and the details about their escalation thresholds," said Solomon, describing the reporting as "disturbing." He added, "I'm not going to prejudge the details of this case." An OpenAI spokesperson on Monday confirmed senior leaders were headed to Ottawa to discuss the company's overall approach to safety, the safeguards OpenAI has in place "and how we continuously work to strengthen them." The spokesperson added the company is working with investigators. The Royal Canadian Mounted Police in British Columbia is handling the probe, and Staff Sgt. Kris Clark said OpenAI reached out after the shooting, and digital and physical evidence are being collected as part of the probe. A spokeswoman for OpenAI told the Journal last week the company banned Jesse Van Rootselaar's account but determined that her activity didn't meet the criteria for reporting to law enforcement, which would have required that it constituted a credible and imminent risk of serious physical harm to others. Earlier this month, Van Rootselaar was found dead from what appeared to be a self-inflicted injury at the school, the scene of a mass shooting that killed eight people and left at least 25 injured in the remote British Columbia town of Tumbler Ridge. The RCMP identified Van Rootselaar, an 18-year-old trans woman, as the suspect. 
Solomon told reporters he's working with cabinet colleagues on possible measures governing privacy and data with the aim of protecting younger Canadians. Canadians "expect their children are kept safe and that these organizations act in a responsible manner," he said. Online platforms have long debated how to balance questions of privacy for their users with public safety in their decisions to alert certain users to law enforcement. That debate is now coming for the AI companies that power the chatbots to which people are confiding the most intimate details of their private thoughts and lives. "OpenAI obviously does monitor for imminent risk and has a standard it must meet for disclosure," said Michael Geist, a law professor at the University of Ottawa with expertise on technology and intellectual-property issues. "I think it's a bit frightening to think that [OpenAI] would be required to inform police regularly without a high standard in place since the risks with false positives raises privacy concerns of their own." News Corp, owner of The Wall Street Journal and Dow Jones Newswires, has a content-licensing partnership with OpenAI.
Eight months before the Tumbler Ridge mass shooting that killed eight people, OpenAI flagged Jesse Van Rootselaar's ChatGPT account for gun violence scenarios but chose not to contact police. The company has now updated its law enforcement referral protocols and will meet with Canadian officials, including AI Minister Evan Solomon and CEO Sam Altman, as Canada considers government-imposed regulations to fill its AI governance vacuum.
Eight months before Jesse Van Rootselaar killed eight people and herself in the Tumbler Ridge mass shooting on Feb. 10, OpenAI knew something was wrong. The company's automated review system had flagged Van Rootselaar's ChatGPT account for interactions involving gun violence scenarios [1]. Roughly a dozen employees were aware of the flagged content, and some advocated contacting police [1]. Instead, OpenAI banned the account in June 2025 but didn't refer it to law enforcement because it didn't meet the "threshold required" at the time [1]. The 18-year-old suspect killed her mother, her 11-year-old half-brother, and six others at Tumbler Ridge Secondary School before dying of a self-inflicted wound [1].
Source: New York Post
The situation became more troubling when OpenAI revealed that Van Rootselaar had evaded the ban by creating a second ChatGPT account that went undetected until after police released her name [2][4]. The ban evasion exposed gaps in the company's detection systems designed to prevent banned users from creating new accounts [4]. OpenAI's vice president for global policy, Ann O'Leary, acknowledged the company only discovered the second account after the Royal Canadian Mounted Police announced Van Rootselaar's identity [4]. The tragedy has become Canada's deadliest rampage since 2020 [4].
Source: Market Screener
Following intense pressure from Canadian officials, OpenAI announced immediate changes to its AI safety protocols. "With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today," O'Leary wrote in a letter to Canada's AI Minister Evan Solomon [2][4]. The company committed to strengthening protocols for reporting potential threats when chatbot interactions cross the line into imminent and credible risk [4]. OpenAI will also develop direct communication channels with law enforcement to ensure Canadian authorities receive information quickly when the company identifies potential for real-world violence. The company additionally pledged to strengthen detection systems that catch attempts to evade safeguards and to prioritize identifying the highest-risk offenders [4].
Canadian officials summoned OpenAI representatives to Ottawa and made clear their expectations for rapid changes [5]. "The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser said [5]. Evan Solomon stated that while the company showed willingness to strengthen protocols, "we have not yet seen a detailed plan for how these commitments will be implemented in practice" [3]. Solomon will meet with Sam Altman next week to seek further clarity and ensure the commitments translate into concrete action [3]. British Columbia Premier David Eby also secured a meeting with Altman, though he called OpenAI's assurances "cold comfort" for the families of Tumbler Ridge [4].
Source: NYT
The tragedy reveals a critical gap in Canada's AI governance framework. Federal AI Minister Evan Solomon said he was "deeply disturbed" by the revelations, adding that the government is reviewing "a suite of measures" and that "all options are on the table" [3]. But those options remain undefined because critical legislative tools no longer exist. The Artificial Intelligence and Data Act, embedded in Bill C-27, was supposed to be Canada's answer to AI regulation, while the Online Harms Act would have addressed harmful digital content [1]. Both died when Parliament was prorogued in January 2025 [1]. What remains is a voluntary code of conduct with no legal force and no consequences for non-compliance [1].
The case raises fundamental questions about corporate responsibility when AI companies detect violent ideation. Chatbot interactions differ fundamentally from social media: they're private, intimate, and designed to be accommodating, with users routinely disclosing fears, fantasies, and violent thoughts to systems engineered to respond with conversational warmth [1]. OpenAI's threat assessment was conducted by software engineers and content moderators, not forensic psychologists trained to distinguish ideation from intent [1]. The company cited risks of "over-enforcement" and the distress that unannounced police visits can cause young people [1]. Canada's privacy legislation compounds the challenge: the Personal Information Protection and Electronic Documents Act permits disclosure without consent in emergencies, but that provision was drafted for clear-cut crises, not the probabilistic threat indicators that chatbot interactions generate [1]. OpenAI has faced multiple wrongful-death lawsuits, including cases where ChatGPT allegedly encouraged paranoid beliefs before a man killed his mother and himself, and suits involving teenagers who planned suicides [5]. Experts argue Canada needs binding legislation with clear escalation thresholds developed with mental health professionals and law enforcement, an independent digital safety commission for threat assessment, and modernized privacy legislation providing explicit legal clarity for AI-specific disclosure.
Summarized by Navi