2 Sources
[1]
Canadian government says OpenAI will take further steps to strengthen safety protocols
The Canadian government says that OpenAI CEO Sam Altman has agreed to take steps to immediately strengthen safety protocols. This follows a fatal shooting at a school in British Columbia; OpenAI had suspended the suspect's account but did not alert authorities. These changes look to primarily involve law enforcement, with commitments to notify police about potentially suspicious use of ChatGPT. We don't have any confirmation from the company at this time, but Canada's Artificial Intelligence Minister Evan Solomon says he "asked OpenAI to take several actions, which Altman has agreed to do." Solomon attended a virtual meeting with Altman to discuss how the company "would include Canadian privacy, mental health and law enforcement experts into the process to identify and review high-risk cases involving Canadian users." He says OpenAI has pledged to provide a report outlining these new protocols. He also asked Altman to make these changes retroactively and to review previous suspicious incidents on the platform, providing law enforcement with data when necessary. We don't know if OpenAI has consented to that part. Engadget has reached out to OpenAI to ask about these changes and whether they'll be exclusive to Canada. We'll update this post if we hear back. This isn't the first step the company has made to make things right with Canada. Ann O'Leary, OpenAI's VP of global policy, recently suggested that the company would take measures to better prevent banned users from returning to the platform. The company banned the alleged shooter's original account due to "potential warnings of committing real-world violence," but he was able to make another one.
[2]
Canada Says OpenAI CEO Altman Pledged to Toughen Safety Protocols
OTTAWA--Canada says OpenAI Chief Executive Sam Altman agreed to take immediate steps to strengthen safety protocols regarding notifying police about potentially suspicious use of the company's ChatGPT chatbot. Artificial Intelligence Minister Evan Solomon added that he also asked Altman to apply these changes retroactively and to review previous incidents that may have been referred to law enforcement for further investigation. Solomon's office issued a statement late Wednesday night summarizing a virtual meeting earlier in the day between the minister and the OpenAI CEO. The government is seeking changes to how OpenAI and other digital platforms operate following Wall Street Journal reporting that indicated company employees raised alarm bells about interactions with ChatGPT in 2025 involving an individual whom police identified last month as a suspect in a fatal school shooting in Tumbler Ridge, British Columbia. In 2025, OpenAI shut down Jesse Van Rootselaar's account, but it didn't notify the police. The company last week pledged to modify its protocols on alerting police, and acknowledged that under the changes the company would have alerted police about Van Rootselaar's interactions. A spokesperson for OpenAI did not respond to a request for comment about Solomon's version of events. "I asked OpenAI to take several actions, which Altman has agreed to do," Solomon said. His statement added that OpenAI committed to assess "how they would include Canadian privacy, mental health and law enforcement experts into the process to identify and review high-risk cases involving Canadian users." Solomon said OpenAI also pledged to provide a report outlining the new protocols it is developing to identify high-risk offenders and repeat policy violators. 
OpenAI has agreed to cooperate with investigators and with authorities in British Columbia who are set to lead a public inquest into the shooting at the remote town of Tumbler Ridge, which left eight people dead and dozens injured. Police found Van Rootselaar dead at the scene.
Canadian Artificial Intelligence Minister Evan Solomon secured commitments from OpenAI CEO Sam Altman to strengthen safety protocols following a fatal school shooting in British Columbia. The company will now involve Canadian privacy, mental health, and law enforcement experts to identify high-risk cases and improve mechanisms for notifying police about suspicious ChatGPT usage.
The Canadian government has secured commitments from OpenAI to implement immediate changes to its safety protocols following a tragic incident that exposed gaps in how the company handles potentially dangerous user behavior. OpenAI CEO Sam Altman agreed to take several actions during a virtual meeting with Artificial Intelligence Minister Evan Solomon, primarily focused on notifying law enforcement about suspicious ChatGPT activity [1].

Source: Engadget

The intervention follows Wall Street Journal reporting that revealed company employees raised concerns in 2025 about interactions involving an individual later identified by police as a suspect in a fatal school shooting at Tumbler Ridge, British Columbia. The incident left eight people dead and dozens injured. While OpenAI suspended the user's account due to "potential warnings of committing real-world violence," the company did not alert authorities at the time [2].

Solomon's office issued a statement late Wednesday night outlining the specific actions requested of Altman during their meeting. "I asked OpenAI to take several actions, which Altman has agreed to do," Solomon said. The commitments center on establishing clearer mechanisms for notifying law enforcement when the platform identifies potentially dangerous user behavior [2].

OpenAI has committed to assess how it would include Canadian privacy, mental health, and law enforcement experts in the process of identifying and reviewing high-risk cases involving Canadian users. This collaborative approach aims to balance safety concerns with privacy considerations while ensuring appropriate authorities receive timely information about potential threats [1].

Beyond forward-looking changes, Solomon also asked Altman to apply these protocols retroactively and review past suspicious incidents on the platform, providing law enforcement with data when necessary from cases that may warrant further investigation. However, it remains unclear whether OpenAI has fully consented to this aspect of the request [1].

The company has pledged to provide a report outlining the new protocols it is developing to identify high-risk offenders and repeat policy violators. This documentation will detail how OpenAI plans to prevent similar incidents in the future [2].

The Tumbler Ridge case highlighted another critical vulnerability in OpenAI's systems. The suspect, Jesse Van Rootselaar, had his original account banned by the company, but he was able to create another account and continue using the platform. Ann O'Leary, OpenAI's VP of global policy, recently suggested the company would implement measures to better prevent banned users from circumventing account suspensions [1].

OpenAI has also agreed to cooperate with investigators and authorities in British Columbia who are set to lead a public inquest into the school shooting. Police found Van Rootselaar dead at the scene [2].

Whether these changes will extend beyond Canada remains uncertain. OpenAI has not yet responded to requests for comment about whether the new safety protocols will apply globally or remain exclusive to Canadian users. The company's response to this incident may set precedents for how AI platforms worldwide handle potentially dangerous user interactions and their obligations to public safety.
Summarized by
Navi