2 Sources
[1]
Canadian officials claim OpenAI violated federal and provincial privacy laws - Engadget
Philippe Dufresne, the Privacy Commissioner of Canada, has found OpenAI was "not compliant with" Canadian federal and provincial privacy laws in the training of its AI models. Following an investigation, Dufresne and his counterparts in Alberta, Quebec and British Columbia say OpenAI's approach to things like data collection and consent stepped on multiple laws, including Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how companies collect and use personal information during the normal course of business.

The commissioners participating in the investigation identified multiple privacy issues with OpenAI's approach, including that the company "gathered vast amounts of personal information without adequate safeguards to prevent use of that information to train its models," and that it failed to acquire consent to collect and use that personal information in the first place. Warnings in ChatGPT note that interactions with the AI could be used in training, but third-party data OpenAI has purchased or scraped also includes personal details people likely aren't even aware of. The fact that ChatGPT users have no way to access, correct or delete that data was another issue that the commissioners identified, according to a summary of the investigation's findings, along with OpenAI's lackluster attempts to acknowledge the inaccuracy of some of ChatGPT's responses.

Canada's Privacy Commissioner contends that OpenAI was open and responsive to the investigation, and has already committed to making multiple changes to ChatGPT to follow Canadian privacy laws. OpenAI has retired earlier models that violated Canadian privacy regulation, and now uses "a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly accessible internet data and licensed datasets used to train its models," the Commissioner says.
The company has also agreed to add, within the next three months, a new notice to the signed-out version of ChatGPT explaining that chats can be used for training and that sensitive information shouldn't be shared, with further measures to follow within the next six months.

While Canada's investigation into OpenAI's privacy policies was opened in 2023, the company has received scrutiny from regulators more recently because of its connection to the mass shooting that occurred in Tumbler Ridge in February 2026. OpenAI had reportedly flagged the alleged shooter's account in 2025 for containing warnings of real-world violence, but failed to escalate those concerns to Canadian law enforcement. Following the shooting, regulators demanded the company change its approach to safety, and OpenAI ultimately agreed to be more collaborative with Canadian law enforcement and health agencies in the future.
[2]
Report on OpenAI expected from federal, provincial privacy watchdogs today
OTTAWA -- Privacy watchdogs plan to release a report today on OpenAI, the company behind the popular artificial intelligence-powered chatbot ChatGPT. Federal privacy commissioner Philippe Dufresne said just over three years ago that his office was investigating a complaint alleging the collection, use and disclosure of personal information without consent. The findings will be delivered by Dufresne and his counterparts from British Columbia, Alberta and Quebec, who collaborated on a joint probe. They plan to discuss their conclusions at a news conference in Ottawa. Upon announcing the investigation in April 2023, Dufresne said AI technology and its effects on privacy are priorities for his office. Dufresne stressed the importance of keeping up with -- and staying ahead of -- fast-moving technological advances. This report by The Canadian Press was first published May 6, 2026.
Canadian Privacy Commissioner Philippe Dufresne found OpenAI violated federal and provincial privacy laws in training its AI models. The joint investigation revealed the company collected vast amounts of personal information without consent or adequate safeguards, prompting commitments to filtering tools, a new user notice within three months, and further compliance measures within six months.
Philippe Dufresne, the Privacy Commissioner of Canada, has determined that OpenAI was "not compliant with" Canadian federal and provincial privacy laws in the training of its AI models [1]. The findings emerged from a joint investigation conducted alongside privacy watchdogs from Alberta, Quebec and British Columbia, examining how the company behind ChatGPT handles personal information [2]. The investigation, launched in April 2023 following a complaint alleging the collection, use and disclosure of personal information without consent, concluded that OpenAI's practices violated Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how companies collect and use personal information during normal business operations [1].
Source: Engadget
Canadian privacy officials identified several critical issues with OpenAI's approach to data collection and safeguards. Investigators found the company "gathered vast amounts of personal information without adequate safeguards to prevent use of that information to train its models" [1]. Beyond the violations related to training data, commissioners found that third-party data the company purchased or scraped contained personal details that individuals likely weren't aware were being collected. While ChatGPT includes warnings that user interactions could be used in training, this transparency didn't extend to the broader data collection practices [1].

A significant concern raised by the joint investigation centered on user data rights. ChatGPT users have no way to access, correct or delete personal information collected by OpenAI, a fundamental gap in data protection [1]. The commissioners also flagged OpenAI's inadequate efforts to acknowledge the inaccuracy of some ChatGPT responses, raising questions about what recourse users have when they cannot verify or correct information about themselves that the system may have learned [1].
Source: BNN
Despite the violations, Canada's Privacy Commissioner noted that OpenAI was open and responsive throughout the investigation and has committed to multiple changes. The company has retired earlier models that violated Canadian privacy regulation and now uses "a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly accessible internet data and licensed datasets used to train its models" [1]. Within the next three months, OpenAI has agreed to add a new notice to the signed-out version of ChatGPT explaining that chats can be used for training and that sensitive information shouldn't be shared, with additional compliance measures expected within six months [1].

The privacy investigation, which began over three years ago, has taken on added significance following a mass shooting in Tumbler Ridge in February 2026 [1]. OpenAI had reportedly flagged the alleged shooter's account in 2025 for containing warnings of real-world violence but failed to escalate those concerns to Canadian law enforcement. The incident prompted regulators to demand changes to the company's approach to safety, and OpenAI ultimately agreed to be more collaborative with Canadian law enforcement and health agencies in the future [1]. When announcing the investigation in April 2023, Dufresne emphasized that AI technology and its effects on privacy are priorities for his office, stressing the importance of keeping up with fast-moving technological advances [2].