3 Sources
[1]
Is Europe ready to police AI? Supervision and sanctions start soon
A range of provisions under the EU's AI rulebook will enter into force, dealing with national oversight, penalties and general purpose AI models. Significant changes in terms of oversight and penalties are around the corner for AI suppliers in Europe as new provisions of the EU's AI Act enter into force from 2 August. Here's what will change this month regarding the EU's rulebook on AI, which has been in force for exactly one year this month but which has been implemented gradually.

National oversight

On 2 August, member states will have to notify the European Commission about which market surveillance authorities they have appointed to oversee businesses' compliance with the AI Act. That means that providers of AI systems will face scrutiny from that date. Euronews reported in May that, with just three months to go until the early August deadline, it remained unclear in at least half of the member states which authority would be nominated. The EU executive did not want to comment back in March on which countries were ready, but expectations are that member states that recently went through elections will be delayed in setting up these regulators. According to a Commission official, some notifications have now been received, and they are under consideration.

Laura Lazaro Cabrera, Programme Director for Equity and Data at the Centre for Democracy and Technology, told Euronews that many member states are set to miss the 2 August deadline to appoint their regulators. She said it's "crucial" that national authorities are appointed as soon as possible, and "that they are competent and properly resourced to oversee the broad range of risks posed by AI systems, including those to fundamental rights."

Artur Bogucki, an associate researcher at the Centre for European Policy Studies (CEPS), echoed the likely delays. "This isn't surprising when you consider the sheer complexity of what's required. Countries need to establish market surveillance authorities, set up notifying bodies, define sanction regimes, and somehow find staff with expertise spanning AI, data computing, cybersecurity, fundamental rights, and sector-specific knowledge. That's a tall order in today's competitive tech talent market," he said.

Bogucki said it doesn't stop there, because it remains to be seen how multiple bodies at both EU and national levels will coordinate. "This complexity becomes even more challenging when you consider how the AI Act must interact with existing regulations like GDPR, the Digital Services Act, and the Digital Markets Act. We're already seeing potential for overlaps and conflicts, reminiscent of how different data protection authorities across Europe have taken divergent approaches to regulating tech companies," he said.

Penalties

Also entering into force are provisions enabling penalties. Companies may be fined up to €35 million for breaches of the AI Act, or up to 7% of total worldwide annual turnover, whichever is higher. EU countries will need to adopt implementing laws that set out penalties for breaches and empower their authorities. For smaller companies, lower fines will apply. The AI Act sets a ceiling, not a floor, for fines.

According to Lazaro Cabrera, there is likely to be "significant variability on how member states choose to fine their public authorities for non-compliance of the AI Act, if at all." She said that while there will be some divergence in how member states set the level of fines applicable, "forum-shopping in this context has its limits."
"Ultimately market surveillance authorities have jurisdiction to act in connection to any product entering the EU market as a whole, and fines are only one of many tools at their disposal," she said. Bogucki said that the governance structure also needs to grapple with questions about prohibited AI practices, for example when it comes to biometric identification. "Different member states may have very different political appetites for enforcement in these areas, and without strong coordination mechanisms at the EU level, we could see the same fragmentation that has plagued GDPR enforcement," he said. GPAI Lastly, the rules on general purpose AI systems - which include large language models such as X's Grok, Google's Gemini, and OpenAI's ChatGPT - will enter into force. In July the Commission released a much-debated Code of Practice on GPAI. This voluntary set of rules that touches on transparency, copyright, and safety and security issues, aims to help providers of GPAI models comply with the AI Act. The Commission has recently said that those who don't sign can expect more scrutiny, whereas signatories are deemed compliant with the AI Act. But companies that sign the code will still need to comply with the AI rulebook. US tech giant Meta said last week that it will not sign, having slammed the rules for stifling innovation, others like Google and OpenAI said they will sign up. To make things more complicated, all the products that were placed on the market before 2 August have a two-year period to implement the rules, and all new tools launched after that date have to comply straight away. The EU AI Act continues to roll out in phases, each with new obligations for providers and deployers. Two years from now, on 2 August 2027, the AI Act will be applicable in full.
[2]
What does EU's general purpose AI code mean for businesses?
'Make no mistake, there will be action in the next few months,' warns Forrester analyst Enza Iannopollo.

Tomorrow (2 August), the European Union's AI Act rules on general purpose AI will come into effect. To help industry comply with the new rules, the EU has developed the General-Purpose Artificial Intelligence (GPAI) Code of Practice. This voluntary tool is designed to help industry comply with the AI Act's obligations for models with wide-ranging capabilities, able to complete a variety of tasks and be implemented in different systems or for different applications. Examples include commonly used AI models such as ChatGPT, Gemini or Claude.

The code sets out rules on copyright and transparency, with certain advanced models deemed to pose "systemic risk" facing additional voluntary obligations around safety and security. Signatories have committed to respect any restriction of access to data used to train their models, such as those imposed by subscription models or paywalls. They also commit to implementing technical safeguards that prevent their models from generating outputs that reproduce content protected by EU law. The signatories, which include the likes of Anthropic, OpenAI, Google, Amazon and IBM, are also required to draw up and implement a copyright policy that complies with EU law. The Elon Musk-owned xAI has also signed the GPAI Code, although only the section that applies to safety and security.

The GPAI Code asks that signatories continuously assess and mitigate systemic risks associated with their AI models and take appropriate risk management measures throughout the model's life cycle. They are also asked to report serious incidents to the EU. In addition, companies will be required to publicly disclose information on new AI models at launch, as well as provide it to the EU AI Office, relevant national authorities and those who integrate the models into their systems upon request.

"Providers of generative AI (GenAI) models are directly responsible for meeting these new rules. However, it's worth noting that any company using GenAI models and systems - those directly purchased from GenAI providers or embedded in other technologies - will feel the impact of these requirements on their value chain and on their third-party risk management practices," said Forrester VP principal analyst Enza Iannopollo.

However, even as this regulation expands accountability and enforcement around general purpose AI models, many copyright holders in the region have expressed their dissatisfaction. In a statement, 40 signatories - including news publications, artist collectives, translators, and TV and film producers, among others - say that the GPAI Code "does not deliver on the promise of the EU AI Act itself." Representing the coalition, the European Writers' Council said that the code is a "missed opportunity to provide meaningful protection of intellectual property" when it comes to AI. "We strongly reject any claim that the Code of Practice strikes a fair and workable balance. This is simply untrue and is a betrayal of the EU AI Act's objectives."

Still, many believe the EU's AI regulations are perhaps the most robust anywhere in the world and are set to shape risk management and governance practices for most global companies. "Its requirements may not be perfect, but they are the only binding set of rules on AI with global reach, and it represents the only realistic option of trustworthy AI and responsible innovation," said Iannopollo.
The AI Act came into force last August, with the region enforcing its first set of obligations, on banned practices, six months later in February. Aside from the GPAI Code, tomorrow also marks the deadline for EU member states to designate the "national competent authorities" which will oversee the application of the Act and carry out market surveillance activities.

The penalties for non-compliance under the Act are high, reaching up to 7pc of a company's global turnover, meaning companies will need to start paying attention. "Companies, make no mistake, there will be action in the next few months," warned Iannopollo.

"The EU AI Act's 2 August deadline sets a clear precedent and will trickle downstream. Enterprises must be ready to demonstrate that they are using AI in line with responsible practices, even if they're not yet legally required to do so," said Levent Ergin, the chief climate, sustainability and AI strategist at Informatica. "This is the first true test of AI supply chain transparency. If you can't show where your data came from or how your model reasoned, your organisation's data is not ready for AI."
[3]
EU AI Act takes effect for GPAI providers August 2
Beginning August 2, 2025, entities providing general purpose artificial intelligence (GPAI) models within the European Union must adhere to specific stipulations outlined in the EU AI Act, including maintaining current technical documentation and training data summaries.

The EU AI Act is a comprehensive legislative framework designed to establish standards for the ethical and safe development and deployment of AI technologies. The regulation adopts a risk-based approach, categorizing AI systems based on their potential risks and impact on individuals and society within the European Union. Although specific requirements for GPAI model providers become enforceable on August 2, 2025, a one-year grace period has been established, allowing companies to achieve full compliance without facing penalties until August 2, 2026. This grace period is intended to facilitate a smooth transition to the new regulatory landscape.

Providers of GPAI models must be cognizant of and adhere to five key sets of regulations effective August 2, 2025. These encompass various aspects of AI governance, assessment and penalties.

The first set of rules pertains to Notified Bodies, as stipulated in Chapter III, Section 4 of the EU AI Act. Providers of high-risk AI systems must prepare to engage with these bodies for conformity assessments and understand the regulatory framework governing these evaluations. Notified Bodies are designated organizations responsible for assessing the conformity of specific products or services with applicable EU regulations.

The second set of rules, detailed in Chapter V of the Act, specifically addresses GPAI models. This section outlines the requirements for technical documentation, training data summaries, and transparency measures that GPAI model providers must implement.

The third set of rules, found in Chapter VII, concerns governance. This section defines the governance and enforcement architecture at both the EU and national levels. It mandates cooperation with the EU AI Office, the European AI Board, the Scientific Panel, and national authorities in fulfilling compliance obligations, responding to oversight requests, and participating in risk monitoring and incident reporting processes.

The fourth set of rules, outlined in Article 78, focuses on confidentiality. All data requests made by authorities to GPAI model providers must be legally justified, securely handled, and subject to confidentiality protections, especially concerning intellectual property, trade secrets, and source code. This ensures the protection of sensitive business information during regulatory oversight.

The final set of rules, found in Articles 99 and 100, specifies penalties for non-compliance. These penalties are designed to ensure adherence to the AI Act's provisions and can be substantial.

High-risk AI systems are defined as those that present a significant threat to health, safety, or fundamental rights. They fall into two main groups: first, those used as safety components of products governed by EU product safety laws; second, those deployed in sensitive use cases, including biometric identification, critical infrastructure management, education, employment and HR, and law enforcement.

GPAI models, which can be applied across multiple domains, are considered to pose "systemic risk" if the cumulative compute used to train them exceeds 10^25 floating-point operations (FLOPs) and they are designated as such by the EU AI Office.
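For a concrete sense of that threshold, here is a minimal Python sketch that checks an estimated training-compute figure against the 10^25 FLOPs mark. The 6 × parameters × tokens estimate is a widely used community rule of thumb for dense transformer training, not a method prescribed by the Act, and all names in the sketch are illustrative.

```python
# Minimal sketch: checking estimated training compute against the AI Act's
# systemic-risk presumption threshold (10^25 FLOPs of cumulative training
# compute). The 6*N*D estimate is a common community heuristic for dense
# transformers, NOT a formula prescribed by the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the Act

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough cumulative training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 1e25 FLOPs threshold."""
    flops = estimated_training_flops(n_parameters, n_training_tokens)
    return flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15 trillion tokens
# lands around 6.3e24 FLOPs, just under the presumption threshold.
print(presumed_systemic_risk(70e9, 15e12))  # False
```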
Prominent examples of GPAI models that meet these criteria include OpenAI's ChatGPT, Meta's Llama, and Google's Gemini.

All providers of GPAI models are required to maintain comprehensive technical documentation, a training data summary, a copyright compliance policy, guidance for downstream deployers, and transparency measures regarding capabilities, limitations, and intended use. This documentation serves to provide clarity and accountability in the development and deployment of AI systems.

Providers of GPAI models that pose systemic risk face additional requirements. They must conduct model evaluations, report incidents, implement risk mitigation strategies and cybersecurity safeguards, disclose energy usage, and carry out post-market monitoring. These measures aim to address the heightened risks associated with more powerful and widely used AI models.

Regarding penalties, providers of GPAI models may face fines of up to €35,000,000 or 7% of their total worldwide annual turnover, whichever is higher, for non-compliance with prohibited AI practices as defined under Article 5. These practices include manipulating human behavior, social scoring, facial recognition data scraping, and real-time biometric identification in public spaces. Other breaches of regulatory obligations, such as those related to transparency, risk management, or deployment responsibilities, can result in fines of up to €15,000,000 or 3% of turnover. Supplying misleading or incomplete information to authorities can lead to fines of up to €7,500,000 or 1% of turnover, a provision that underscores the importance of accurate and transparent communication with regulatory bodies.

For small and medium-sized enterprises (SMEs) and startups, the lower of the fixed amount or the percentage applies when calculating penalties. The severity of the breach, its impact, the provider's cooperation, and whether the violation was intentional or negligent are all considered when determining the appropriate penalty.

To facilitate compliance, the European Commission published the AI Code of Practice, a voluntary framework that tech companies can adopt to implement and adhere to the AI Act. Companies such as Google, OpenAI, and Anthropic have committed to it, while Meta has publicly refused to sign, in protest of the legislation in its current form. The Commission plans to publish supplementary guidelines alongside the AI Code of Practice before August 2, 2025, clarifying which companies qualify as providers of general-purpose AI models and of general-purpose AI models with systemic risk.

The EU AI Act was officially published in the EU's Official Journal on July 12, 2024, and took effect on August 1, 2024, but the implementation of its various provisions is phased in over several years. By December 31, 2030, AI systems that are components of specific large-scale EU IT systems and were placed on the market before August 2, 2027, must be brought into compliance, marking the final deadline for achieving widespread compliance across sectors and applications.

Despite these phased implementation dates, a group representing Apple, Google, Meta, and other companies urged regulators to postpone the Act's implementation by at least two years.
This request was ultimately rejected by the EU, underscoring the commitment to the established timeline.
The European Union's AI Act enters a crucial phase of implementation, introducing new regulations for AI providers, especially those offering general-purpose AI models. This marks a significant step in global AI governance.
The European Union's Artificial Intelligence Act is entering a pivotal stage of implementation, with significant provisions coming into force on August 2, 2025. This landmark legislation aims to establish a comprehensive framework for the ethical and safe development of AI technologies within the EU [1][2][3].
Source: Silicon Republic
Member states are required to notify the European Commission about their appointed market surveillance authorities to oversee businesses' compliance with the AI Act. This step marks the beginning of scrutiny for AI system providers [1].
The Act also introduces substantial penalties for non-compliance, as the sketch after this list illustrates:
- Fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for prohibited AI practices
- Fines of up to €15 million or 3% of turnover for other breaches of regulatory obligations
- Fines of up to €7.5 million or 1% of turnover for supplying misleading or incomplete information to authorities [1][3]
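To make the "whichever is higher" rule concrete, here is a minimal Python sketch, assuming the tiers and the SME rule described in the sources; the tier names, function, and parameters are illustrative, not part of the Act's text.

```python
# Minimal sketch of the fine ceilings described above: most companies face
# the HIGHER of the fixed cap or the turnover percentage, while SMEs and
# startups face the LOWER of the two. Figures per tier are from the sources;
# the function itself is illustrative.

PENALTY_TIERS_EUR = {
    "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
    "other_obligations": (15_000_000, 0.03),      # transparency, risk management, etc.
    "misleading_information": (7_500_000, 0.01),  # inaccurate info to authorities
}

def fine_ceiling_eur(tier: str, worldwide_turnover_eur: float,
                     is_sme: bool = False) -> float:
    """Return the applicable fine ceiling for a given tier and turnover."""
    fixed_cap, pct = PENALTY_TIERS_EUR[tier]
    percentage_cap = worldwide_turnover_eur * pct
    return min(fixed_cap, percentage_cap) if is_sme else max(fixed_cap, percentage_cap)

# A large provider with €2bn turnover: 7% (€140m) exceeds the €35m fixed cap.
print(fine_ceiling_eur("prohibited_practices", 2_000_000_000))        # 140000000.0
# An SME with €50m turnover: the lower figure applies (1% = €500k).
print(fine_ceiling_eur("misleading_information", 50_000_000, True))   # 500000.0
```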
Laura Lazaro Cabrera from the Centre for Democracy and Technology emphasizes the importance of competent and well-resourced national authorities to oversee the broad range of risks posed by AI systems [1].
Source: euronews
The rules for GPAI systems, including large language models like ChatGPT and Google's Gemini, are now in effect. Key requirements, tracked in the checklist sketch after this list, include:
- Maintaining comprehensive technical documentation
- Publishing a summary of the content used for training
- Implementing a copyright policy that complies with EU law
- Providing guidance for downstream deployers
- Transparency measures covering capabilities, limitations, and intended use [3]
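One way a provider might track these documentation duties internally is sketched below in Python; the field names and data structure are hypothetical, not mandated by the Act.

```python
# Hypothetical internal checklist for the GPAI documentation duties listed
# above. Field names are illustrative; the AI Act prescribes obligations,
# not any particular data structure.
from dataclasses import dataclass, fields

@dataclass
class GpaiComplianceChecklist:
    technical_documentation: bool = False  # model design, training, evaluation
    training_data_summary: bool = False    # public summary of training content
    copyright_policy: bool = False         # policy complying with EU copyright law
    downstream_guidance: bool = False      # information for deployers/integrators
    transparency_measures: bool = False    # capabilities, limitations, intended use

    def missing_items(self) -> list[str]:
        """Names of obligations not yet marked complete."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = GpaiComplianceChecklist(technical_documentation=True,
                                    copyright_policy=True)
print(checklist.missing_items())
# ['training_data_summary', 'downstream_guidance', 'transparency_measures']
```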
For GPAI models deemed to pose "systemic risk," additional obligations include conducting model evaluations, reporting incidents, implementing risk mitigation strategies, and disclosing energy usage [3].
The EU has introduced a voluntary Code of Practice on GPAI to help providers comply with the AI Act. While some major tech companies like Google and OpenAI have signed up, others like Meta have declined, arguing the rules stifle innovation [1][2].
Enza Iannopollo, a Forrester analyst, warns that there will be significant action in the coming months, urging companies to prepare to demonstrate responsible AI use [2].
Source: Dataconomy
The EU's AI regulations are considered among the most robust globally and are expected to shape risk management and governance practices for many international companies. Levent Ergin, from Informatica, notes that this marks the first true test of AI supply chain transparency [2].
As the AI Act continues to roll out in phases, with full applicability set for August 2, 2027, it presents both challenges and opportunities for the AI industry. The coming years will likely see increased focus on ethical AI development, transparency, and regulatory compliance across the global tech landscape [1][2][3].