3 Sources
[1]
AI image tools must follow privacy rules, watchdogs say
Watchdogs warn models that can generate realistic images of people must comply with data protection laws

A global coalition of privacy watchdogs has fired a warning shot at the generative AI industry, saying companies churning out realistic synthetic images can't pretend that data protection rules don't apply.

The joint statement [PDF] signed by more than 60 regulators, including the UK Information Commissioner's Office (ICO) and Ireland's Data Protection Commission (DPC), boils down to a simple point: if your model can convincingly fake a person, you don't get to pretend data protection law doesn't exist.

"Recent developments - particularly AI image and video generation integrated into widely accessible social media platforms - have enabled the creation of non-consensual intimate imagery, defamatory depictions, and other harmful content featuring real individuals," said the signatories. "We are especially concerned about potential harms to children and other vulnerable groups, such as cyberbullying and/or exploitation."

The warning lands weeks after the ICO and DPC opened formal probes into Elon Musk's xAI following reports that its Grok chatbot produced sexual images of real people without their consent.

The group says organizations dabbling in generative AI need to build safeguards from the start and think carefully about risks such as non-consensual imagery, misuse of someone's likeness, and potential harms to children - all areas where the tech has raced ahead of social norms and, in some cases, common sense. The regulators stress that the law already covers this, and that firms don't get a free pass just because the content came from a machine.

William Malcolm, executive director of Regulatory Risk & Innovation at the ICO, said: "People should be able to benefit from AI without fearing that their identity, dignity or safety are under threat. AI already plays a large role in all our lives, and everybody has a right to expect that AI systems handling their personal data will do so with respect. Responsible innovation means putting people first: anticipating the risks and building in meaningful safeguards to ensure autonomy, transparency, and control.

"Public trust is foundational to the successful adoption and use of AI. Joint regulatory initiatives like this show global commitment to high standards of data protection in AI systems and help provide regulatory certainty. We expect those developing and deploying AI to act responsibly. Where we find that obligations have not been met, we will take action to protect the public."

The joint statement on AI-generated imagery suggests that if companies want to keep pushing ever more realistic AI into everyday products, they should expect regulators to keep asking awkward questions about how it all works. ®
[2]
UK privacy watchdog warns over AI-generated images in joint statement
LONDON, Feb 23 (Reuters) - Britain's privacy watchdog published a joint statement with dozens of international authorities on Monday, setting out concerns over images generated by artificial intelligence which depict identifiable individuals without their consent.

"We call on organisations to engage proactively with regulators, implement robust safeguards from the outset, and ensure that technological advancement does not come at the expense of privacy, dignity, safety," the statement published by the Information Commissioner's Office said.

The signatories are especially concerned about potential harms to children, the ICO added.

Reporting by Sam Tabahriti, writing by Sarah Young, editing by William James
[3]
UK privacy watchdog warns over AI-generated images in joint statement
Britain's privacy watchdog published a joint statement with dozens of international authorities on Monday, setting out concerns over images generated by artificial intelligence which depict identifiable individuals without their consent. "We call on organisations to engage proactively with regulators, implement robust safeguards from the outset, and ensure that technological advancement does not come at the expense of privacy, dignity, safety," the statement published by the Information Commissioner's Office said. The signatories are especially concerned about potential harms to children, the ICO added.
A global coalition of more than 60 regulators has sent a clear message to the generative AI industry: creating realistic synthetic images of people doesn't exempt companies from data protection obligations. The joint statement, signed by privacy watchdogs including the Information Commissioner's Office (ICO) and Ireland's Data Protection Commission (DPC), emphasizes that organizations developing AI image tools must comply with existing data protection laws
1
. The declaration comes as AI-generated images become increasingly sophisticated and accessible, raising urgent questions about consent, dignity, and safety.
Source: The Register
The regulators state that if a model can convincingly depict identifiable individuals without consent, standard privacy protections apply regardless of whether the content originated from a machine
2
. This position directly challenges any notion that AI companies operate in a regulatory gray zone when producing realistic depictions of real people.

The signatories highlight specific risks that have emerged as AI image generation becomes integrated into widely accessible social media platforms. Recent developments have enabled the creation of non-consensual intimate imagery, defamatory content, and other harmful material featuring real individuals
1
. The statement emphasizes particular concern about harm to vulnerable groups, specifically noting risks of cyberbullying and exploitation targeting children.

These warnings arrive weeks after the ICO and DPC opened formal investigations into Elon Musk's xAI following reports that its Grok chatbot produced sexual images of real people without their consent
1
. The timing underscores how quickly theoretical risks have materialized into real-world harms, prompting regulators to act decisively.

The joint statement calls on organizations to engage proactively with regulators and implement robust safeguards from the outset, ensuring that technological advancement does not come at the expense of privacy, dignity, and safety
3
. This emphasis on responsible innovation reflects a growing expectation that companies must anticipate risks and build meaningful protections into AI systems before deployment, rather than addressing problems reactively.
Source: ET
William Malcolm, executive director of Regulatory Risk & Innovation at the ICO, stressed that people should benefit from AI without fearing threats to their identity, dignity, or safety. He noted that public trust is foundational to successful AI adoption and that joint regulatory initiatives demonstrate global commitment to high standards of data protection in AI systems
1
.
The coordinated statement from dozens of international authorities signals that regulators view AI-generated imagery as a priority enforcement area. Companies developing generative AI tools should expect continued scrutiny over how their systems handle personal data, particularly regarding consent mechanisms and safeguards against misuse. The regulators make clear they will take action where obligations have not been met, providing regulatory certainty while demanding accountability.
For users, this intervention offers some assurance that existing legal protections extend to AI-generated content depicting them. The emphasis on protecting children and vulnerable groups suggests future regulatory attention may focus on age verification systems, content moderation capabilities, and transparency around how AI models are trained and deployed. As AI image generation becomes more realistic and ubiquitous, the tension between technological capability and ethical deployment will likely intensify, making proactive compliance and meaningful safeguards essential for any organization in this space.
Summarized by Navi