EU Commission Issues Guidelines for AI Models to Comply with New AI Act

Reviewed by Nidhi Govil


The European Commission has released guidelines to help AI models with systemic risks comply with the EU's new AI Act, aiming to clarify regulations and address industry concerns.

EU Commission's New Guidelines for AI Compliance

The European Commission has taken a significant step in regulating artificial intelligence by issuing guidelines to help AI models with systemic risks comply with the European Union's new AI Act. The move responds to industry concerns about regulatory burdens and aims to clarify the implementation of the AI Act, which became law last year.


Source: Reuters


Scope and Timeline of the AI Act

Obligations under the AI Act take effect on August 2, 2025, for AI models deemed to have systemic risks and for foundation models. This includes AI systems developed by major tech companies such as Google, OpenAI, Meta Platforms, Anthropic, and Mistral. Companies whose models were already on the market before that date have a grace period until August 2, 2027, to ensure full compliance with the legislation.


Definition of AI Models with Systemic Risks

The Commission has defined AI models with systemic risks as those possessing very advanced computing capabilities that could significantly impact public health, safety, fundamental rights, or society at large. This broad definition encompasses a wide range of AI applications with the potential to influence critical aspects of human life and social structures.


Compliance Requirements for High-Risk AI Models

AI models classified as having systemic risks will be subject to stringent compliance requirements. These include:

  1. Conducting model evaluations
  2. Assessing and mitigating risks
  3. Performing adversarial testing
  4. Reporting serious incidents to the Commission
  5. Ensuring adequate cybersecurity protection against theft and misuse


Transparency Requirements for General-Purpose AI

General-purpose AI (GPAI) or foundation models will face additional transparency requirements. These include:

  1. Drawing up technical documentation
  2. Adopting copyright policies
  3. Providing detailed summaries about the content used for algorithm training


Source: Economic Times


Penalties for Non-Compliance

To ensure adherence to the new regulations, the EU has established significant penalties for violations. Fines range from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the severity of the infringement.


Industry Response and Commission's Stance

The release of these guidelines is partly a response to criticism from some companies regarding the AI Act and its regulatory burden. By providing more clarity, the Commission aims to facilitate a smoother implementation process while maintaining strict oversight of AI development and deployment.


EU tech chief Henna Virkkunen emphasized the Commission's supportive approach, stating, "With today's guidelines, the Commission supports the smooth and effective application of the AI Act."


As the deadline for compliance approaches, the AI industry and regulatory bodies will be closely watching how these guidelines shape the development and deployment of AI technologies in the European Union.

TheOutpost.ai

© 2025 Triveous Technologies Private Limited