2 Sources
[1]
AI models with systemic risks given pointers on how to comply with EU AI rules
BRUSSELS, July 18 (Reuters) - The European Commission set out guidelines on Friday to help AI models it has determined have systemic risks, and which therefore face tougher obligations to mitigate potential threats, comply with European Union artificial intelligence regulation (AI Act). The move aims to counter criticism from some companies about the AI Act and its regulatory burden, while providing more clarity to businesses that face fines ranging from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover for violations. The AI Act, which became law last year, will apply on Aug. 2 to AI models with systemic risks and foundation models such as those made by Google (GOOGL.O), OpenAI, Meta Platforms (META.O), Anthropic and Mistral. Companies have until August 2 next year to comply with the legislation. The Commission defines AI models with systemic risk as those with very advanced computing capabilities that could have a significant impact on public health, safety, fundamental rights or society. The first group of models will have to carry out model evaluations, assess and mitigate risks, conduct adversarial testing, report serious incidents to the Commission and ensure adequate cybersecurity protection against theft and misuse. General-purpose AI (GPAI) or foundation models will be subject to transparency requirements such as drawing up technical documentation, adopting copyright policies and providing detailed summaries of the content used for algorithm training. "With today's guidelines, the Commission supports the smooth and effective application of the AI Act," EU tech chief Henna Virkkunen said in a statement.
($1 = 0.8597 euros) Reporting by Foo Yun Chee; Editing by Elaine Hardcastle
[2]
AI models with systemic risks given pointers on how to comply with EU AI rules - The Economic Times
The European Commission has released guidelines to help AI models with systemic risks comply with the EU's new AI Act, aiming to clarify regulations and address industry concerns.
The European Commission has taken a significant step in regulating artificial intelligence by issuing guidelines to help AI models with systemic risks comply with the European Union's new AI Act. The move responds to industry concerns about regulatory burdens and aims to clarify the implementation of the AI Act, which became law last year [1][2].
The AI Act will apply from August 2, 2025, to AI models deemed to have systemic risks and to foundation models. This includes AI systems developed by major tech companies such as Google, OpenAI, Meta Platforms, Anthropic, and Mistral. However, companies have been given a grace period until August 2, 2026, to ensure full compliance with the legislation [1][2].
The Commission has defined AI models with systemic risks as those possessing very advanced computing capabilities that could significantly impact public health, safety, fundamental rights, or society at large. This broad definition encompasses a wide range of AI applications that have the potential to influence critical aspects of human life and social structures [1][2].
AI models classified as having systemic risks will be subject to stringent compliance requirements. These include:
- Carrying out model evaluations
- Assessing and mitigating risks
- Conducting adversarial testing
- Reporting serious incidents to the Commission
- Ensuring adequate cybersecurity protection against theft and misuse [1][2]
General-purpose AI (GPAI) or foundation models will face additional transparency requirements. These include:
- Drawing up technical documentation
- Adopting copyright policies
- Providing detailed summaries of the content used for algorithm training [1][2]
To ensure adherence to the new regulations, the EU has established significant penalties for violations. Fines range from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the severity of the infringement [1][2].
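As a rough illustration of what those penalty figures mean for a given company, the sketch below computes the lower and upper fine tiers from the amounts quoted above. It assumes the percentage applies when it exceeds the fixed amount (the usual "whichever is higher" convention in EU law); the article itself only quotes the raw figures, so treat this as an interpretation, not the legal formula.

```python
def fine_range(global_turnover_eur: float) -> tuple[float, float]:
    """Illustrative fine bounds using the figures quoted in the article.

    Assumption: for each tier, the applicable amount is the greater of the
    fixed sum and the turnover percentage ("whichever is higher").
    """
    # Lower tier: 7.5 million euros or 1.5% of turnover
    lower = max(7_500_000, 0.015 * global_turnover_eur)
    # Upper tier: 35 million euros or 7% of global turnover
    upper = max(35_000_000, 0.07 * global_turnover_eur)
    return lower, upper

# Example: a company with 2 billion euros in global annual turnover
low, high = fine_range(2_000_000_000)
```

For large providers the percentage terms dominate, which is why the headline euro amounts understate the exposure of the biggest model developers.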
The release of these guidelines is partly a response to criticism from some companies regarding the AI Act and its regulatory burden. By providing more clarity, the Commission aims to facilitate a smoother implementation process while maintaining strict oversight of AI development and deployment [1][2].
EU tech chief Henna Virkkunen emphasized the Commission's supportive approach, stating, "With today's guidelines, the Commission supports the smooth and effective application of the AI Act" [1][2].
As the deadline for compliance approaches, the AI industry and regulatory bodies will be closely watching how these guidelines shape the development and deployment of AI technologies in the European Union.