EU backs nudify apps ban while delaying AI Act compliance deadlines until 2027

Reviewed by Nidhi Govil


The European Parliament has voted to ban AI nudification tools that create non-consensual deepfake images, following the Grok chatbot scandal. The vote also delays compliance deadlines for high-risk AI systems until December 2027, extending uncertainty for businesses operating in Europe as negotiations with member states continue.

European Parliament Approves Major Changes to EU AI Act

The European Parliament has voted to implement significant amendments to the EU AI Act, approving both a ban on AI nudification tools and delays to critical compliance deadlines. The vote, which passed by an overwhelming majority of 569 to 45 with 23 abstentions, marks a decisive response to growing concerns about AI-enabled cyberviolence while acknowledging the challenges businesses face in meeting regulatory requirements [3].

Source: BreakingNews.ie


The nudify apps ban specifically targets artificial intelligence systems that create or manipulate sexually explicit images resembling identifiable individuals without their consent. However, the prohibition would not apply to AI systems with effective safety measures preventing users from creating such images [1]. This European Parliament vote follows widespread outrage over sexualised AI deepfakes generated by Elon Musk's Grok chatbot on X earlier this year, which prompted both EU Commission and Irish Data Protection Commission investigations [2].

Source: France 24


Compliance Deadlines Extended Amid Implementation Challenges

Alongside the ban on AI deepfakes, lawmakers approved substantial delays to AI compliance deadlines for high-risk AI systems—those deemed to pose serious risks to health, safety, or fundamental rights. The new timeline pushes compliance back to December 2027, while companies developing AI systems covered by sector-specific safety rules for toys or medical devices would have until August 2028 [1]. Requirements for providers to watermark AI-generated content would also be delayed until November 2026, all significantly later than the original August 2026 target.

These extensions reflect mounting pressure from businesses struggling to meet regulatory requirements amid ongoing uncertainty. The EU has already missed its own deadlines to publish key guidance and made changes to legal frameworks, creating confusion for companies trying to prepare [1]. The delays extend this period of uncertainty, though they may provide companies with much-needed breathing room to develop compliant systems.

Targeting Digital Violence Against Women and Girls

The prohibition on non-consensual deepfake images addresses what lawmakers describe as a rapidly escalating form of digital violence disproportionately affecting women and girls. MEP Maria Walsh, a member of the European Parliament's Gender Equality Committee, emphasized that nudification apps "are not harmless tools; they are a form of digital violence that can have devastating and lifelong consequences for victims" [3].

The urgency became apparent when Irish authorities confirmed they were investigating up to 200 reports of sexual abuse material related to minors generated using the Grok chatbot [3]. X announced in January it would implement changes to prevent the creation of such content, but the scandal had already triggered formal EU investigations into whether the platform breached rules by disseminating illegal content.

What Happens Next for AI Regulation

While the European Parliament has approved these measures, they cannot become law unilaterally. Parliament must now negotiate with the Council of the European Union, comprising ministers from all 27 member states, to finalize the text through the Digital Omnibus package [1]. Member states have already given their green light to the proposals, and negotiations are expected to proceed smoothly [2].

However, uncertainty remains about whether these changes can be implemented before the original August 2026 deadline. Businesses operating in Europe should monitor negotiations closely, as the final compliance timelines will determine when they must have systems in place. The ban on tools that generate explicit images of identifiable persons without consent signals the EU's commitment to addressing cyberviolence, while the extended deadlines acknowledge the practical challenges of implementing comprehensive AI regulation across diverse sectors and artificial intelligence systems.



TheOutpost.ai


© 2026 Triveous Technologies Private Limited