2 Sources
[1]
AI models with systemic risks given pointers on how to comply with EU AI rules
BRUSSELS, July 18 (Reuters) - The European Commission set out guidelines on Friday to help AI models it has determined have systemic risks, and which therefore face tougher obligations to mitigate potential threats, comply with European Union artificial intelligence regulation (the AI Act). The move aims to counter criticism from some companies about the AI Act's regulatory burden while providing more clarity to businesses, which face fines ranging from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover for violations.

The AI Act, which became law last year, will apply from Aug. 2 to AI models with systemic risks and foundation models such as those made by Google (GOOGL.O), OpenAI, Meta Platforms (META.O), Anthropic and Mistral. Companies have until August 2 next year to comply with the legislation.

The Commission defines AI models with systemic risk as those with very advanced computing capabilities that could have a significant impact on public health, safety, fundamental rights or society. The first group of models will have to carry out model evaluations, assess and mitigate risks, conduct adversarial testing, report serious incidents to the Commission and ensure adequate cybersecurity protection against theft and misuse. General-purpose AI (GPAI) or foundation models will be subject to transparency requirements such as drawing up technical documentation, adopting copyright policies and providing detailed summaries of the content used for algorithm training.

"With today's guidelines, the Commission supports the smooth and effective application of the AI Act," EU tech chief Henna Virkkunen said in a statement.
($1 = 0.8597 euros) Reporting by Foo Yun Chee; Editing by Elaine Hardcastle
[2]
AI models with systemic risks given pointers on how to comply with EU AI rules - The Economic Times
The European Commission has released guidelines to help AI models with systemic risks comply with the EU's new AI Act, aiming to clarify regulations and address industry concerns.
The European Commission has taken a significant step in regulating artificial intelligence by issuing guidelines to help AI models with systemic risks comply with the European Union's new AI Act. This move comes as a response to industry concerns about regulatory burdens and aims to provide clarity on the implementation of the AI Act, which became law last year [1][2].
Source: Reuters
The AI Act applies from August 2, 2025, to AI models deemed to have systemic risks and to foundation models. This includes AI systems developed by major tech companies such as Google, OpenAI, Meta Platforms, Anthropic, and Mistral. However, companies have been given a grace period until August 2, 2026, to ensure full compliance with the legislation [1][2].
The Commission has defined AI models with systemic risks as those possessing very advanced computing capabilities that could significantly impact public health, safety, fundamental rights, or society at large. This broad definition encompasses a wide range of AI applications with the potential to influence critical aspects of human life and social structures [1][2].
AI models classified as having systemic risks will be subject to stringent compliance requirements. These include:
- Carrying out model evaluations
- Assessing and mitigating risks
- Conducting adversarial testing
- Reporting serious incidents to the Commission
- Ensuring adequate cybersecurity protection against theft and misuse
General-purpose AI (GPAI) or foundation models will face additional transparency requirements. These include:
- Drawing up technical documentation
- Adopting copyright policies
- Providing detailed summaries of the content used for algorithm training
Source: Economic Times
To ensure adherence to the new regulations, the EU has established significant penalties for violations. Fines range from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the severity of the infringement [1][2].
The release of these guidelines is partly a response to criticism from some companies regarding the AI Act and its regulatory burden. By providing more clarity, the Commission aims to facilitate a smoother implementation process while maintaining strict oversight of AI development and deployment [1][2].
EU tech chief Henna Virkkunen emphasized the Commission's supportive approach, stating, "With today's guidelines, the Commission supports the smooth and effective application of the AI Act" [1][2].
As the deadline for compliance approaches, the AI industry and regulatory bodies will be closely watching how these guidelines shape the development and deployment of AI technologies in the European Union.