Privacy watchdogs tell AI image makers: data protection laws apply to synthetic content

Reviewed by Nidhi Govil


A global coalition of more than 60 privacy watchdogs has issued a stark warning to the generative AI industry: companies creating realistic synthetic images of people must comply with existing data protection laws. The joint statement, signed by regulators including the UK's ICO and Ireland's DPC, addresses growing concerns about non-consensual intimate imagery, defamatory content, and exploitation of vulnerable groups, particularly children.

Privacy Watchdogs Issue Global Warning on AI-Generated Images

A global coalition of more than 60 regulators has sent a clear message to the generative AI industry: creating realistic synthetic images of people doesn't exempt companies from data protection obligations. The joint statement, signed by privacy watchdogs including the Information Commissioner's Office (ICO) and Ireland's Data Protection Commission (DPC), emphasizes that organizations developing AI image tools must comply with existing data protection laws [1]. The declaration comes as AI-generated images become increasingly sophisticated and accessible, raising urgent questions about consent, dignity, and safety.

Source: The Register

The regulators state that if a model can convincingly depict identifiable individuals without consent, standard privacy protections apply regardless of whether the content originated from a machine [2]. This position directly challenges any notion that AI companies operate in a regulatory gray zone when producing realistic depictions of real people.

Growing Concerns About Non-Consensual Intimate Imagery and Exploitation

The signatories highlight specific risks that have emerged as AI image generation becomes integrated into widely accessible social media platforms. Recent developments have enabled the creation of non-consensual intimate imagery, defamatory content, and other harmful material featuring real individuals [1]. The statement emphasizes particular concern about harm to vulnerable groups, specifically noting risks of cyberbullying and exploitation targeting children.

These warnings arrive weeks after the ICO and DPC opened formal investigations into Elon Musk's xAI following reports that its Grok chatbot produced sexual images of real people without their consent [1]. The timing underscores how quickly theoretical risks have materialized into real-world harms, prompting regulators to act decisively.

Call to Implement Robust Safeguards from the Outset

The joint statement calls on organizations to engage proactively with regulators and implement robust safeguards from the outset, ensuring that technological advancement does not come at the expense of privacy, dignity, and safety [3]. This emphasis on responsible innovation reflects a growing expectation that companies must anticipate risks and build meaningful protections into AI systems before deployment, rather than addressing problems reactively.

Source: ET

William Malcolm, executive director of Regulatory Risk & Innovation at the ICO, stressed that people should benefit from AI without fearing threats to their identity, dignity, or safety. He noted that public trust is foundational to successful AI adoption and that joint regulatory initiatives demonstrate a global commitment to high standards of data protection in AI systems [1].

What This Means for AI Companies and Users

The coordinated statement from dozens of international authorities signals that regulators view AI-generated imagery as a priority enforcement area. Companies developing generative AI tools should expect continued scrutiny of how their systems handle personal data, particularly regarding consent mechanisms and safeguards against misuse. The regulators make clear they will take action where obligations have not been met, offering regulatory certainty while demanding accountability.

For users, this intervention offers some assurance that existing legal protections extend to AI-generated content depicting them. The emphasis on protecting children and vulnerable groups suggests future regulatory attention may focus on age verification systems, content moderation capabilities, and transparency around how AI models are trained and deployed. As AI image generation becomes more realistic and ubiquitous, the tension between technological capability and ethical deployment will likely intensify, making proactive compliance and meaningful safeguards essential for any organization in this space.
