EU AI Act: Comprehensive Regulatory Framework for AI Takes Shape

The European Union's AI Act, a risk-based rulebook for artificial intelligence, is nearing implementation with the release of draft guidelines for general-purpose AI models. This landmark legislation aims to foster innovation while ensuring AI remains human-centered and trustworthy.

EU AI Act: A Comprehensive Regulatory Framework

The European Union is on the brink of implementing its landmark AI Act, a comprehensive regulatory framework designed to govern the development and use of artificial intelligence across the bloc. This legislation, years in the making, aims to foster innovation while ensuring AI technologies remain "human-centered" and trustworthy.[1]

Risk-Based Approach and Key Requirements

The AI Act adopts a risk-based approach, categorizing AI applications into different risk levels:

  1. Unacceptable Risk: Certain AI uses, such as harmful subliminal techniques or unacceptable social scoring, are banned with some exceptions.[1]
  2. High-Risk: Applications in critical infrastructure, law enforcement, education, and healthcare require conformity assessments and ongoing compliance monitoring.[1]
  3. Medium-Risk: Transparency obligations apply to systems like chatbots and synthetic media generators.[1]
  4. Low/Minimal Risk: Most AI uses fall into this category and are not directly regulated.[1]
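
To make the tiering concrete, here is a minimal Python sketch that maps a few example use cases to the four tiers. The tier labels, example mappings, and obligation summaries are illustrative assumptions for this article, not legal classifications under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the AI Act's categories."""
    UNACCEPTABLE = "banned (with narrow exceptions)"
    HIGH = "conformity assessment + ongoing compliance monitoring"
    MEDIUM = "transparency obligations"
    MINIMAL = "not directly regulated"

# Hypothetical mapping of example use cases to tiers; the actual legal
# classification depends on the Act's annexes and case-by-case analysis.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "exam scoring in education": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.MEDIUM,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the illustrative obligation attached to a known use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(obligations_for(case))
```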

General Purpose AI Models

The Act includes specific provisions for General Purpose AI (GPAI) models, recognizing their growing influence. A draft Code of Practice for GPAI providers has been published, outlining expectations in areas such as transparency, copyright compliance, and risk assessment.[2]

Key Focus Areas for GPAI Providers

  1. Transparency: Providers must disclose details about web crawlers used for model training.[3]
  2. Copyright: A single point of contact for rights holders' grievances and documentation of data sources used in training.[2]
  3. Systemic Risk Mitigation: Identification and management of risks such as cyber offenses, discrimination, and potential loss of AI control.[3]
  4. Technical Measures: Implementation of data protection, access controls, and continuous reassessment of effectiveness.[3]
  5. Governance: Ongoing risk assessment and involvement of external experts when necessary.[3]
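
As a rough illustration of how the transparency and copyright items above might be captured in machine-readable form, the sketch below assembles a hypothetical provider manifest. The schema, field names, model identifier, and contact address are assumptions made for illustration; the draft Code of Practice does not prescribe this exact format.

```python
import json
from datetime import date

# Hypothetical GPAI transparency manifest. All field names and values are
# illustrative placeholders, not a schema defined by the Code of Practice.
manifest = {
    "model_name": "example-gpai-model",
    "crawlers_used": [
        {"user_agent": "ExampleBot/1.0", "respects_robots_txt": True},
    ],
    "training_data_sources": [
        {"source": "public web crawl", "license_review": "documented"},
        {"source": "licensed news corpus", "license_review": "documented"},
    ],
    # Single point of contact for rights holders' grievances (placeholder).
    "copyright_contact": "rights-holders@example.com",
    "systemic_risk_assessment": {
        "last_reviewed": date.today().isoformat(),
        "covered_risks": ["cyber offenses", "discrimination", "loss of AI control"],
    },
}

print(json.dumps(manifest, indent=2))
```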

Compliance Timelines and Penalties

Key compliance deadlines include:

  • August 1, 2025: Transparency requirements for GPAI providers.[2]
  • August 1, 2027: Risk assessment and mitigation requirements for GPAIs with "systemic risk".[2]

Non-compliance can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.[3]
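
As a quick worked example of the "whichever is higher" cap, the snippet below computes the maximum fine for a hypothetical company; the turnover figure is an assumption chosen purely for illustration.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative fine cap: the higher of EUR 35 million or
    7% of global annual turnover, as reported in this article."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover:
# the 7% prong (EUR 140 million) exceeds the EUR 35 million floor.
print(f"Maximum fine: EUR {max_fine_eur(2_000_000_000):,.0f}")
```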

Ongoing Development and Feedback

The draft Code of Practice is open for stakeholder feedback until November 28, 2024, with the final version expected by May 1, 2025.[3] This collaborative approach aims to refine the guidelines and ensure they are practical and effective for the rapidly evolving AI landscape.

Impact on AI Innovation and Trust

While some concerns persist about potential impacts on European AI innovation, the EU maintains that the Act will boost citizen trust and AI adoption. The regulation seeks to strike a balance between fostering a thriving AI ecosystem and protecting individual rights and societal interests

1

.

As the AI Act moves closer to full implementation, it is set to become a global benchmark for AI regulation, potentially influencing policy approaches worldwide and shaping the future of AI development and deployment.
