5 Sources
[1]
The EU AI Act aims to create a level playing field for AI innovation. Here's what it is. | TechCrunch
The European Union's Artificial Intelligence Act, known as the EU AI Act, has been described by the European Commission as "the world's first comprehensive AI law." After years in the making, it is progressively becoming a part of reality for the 450 million people living in the 27 countries that comprise the EU.

The EU AI Act, however, is more than a European affair. It applies to companies both local and foreign, and it can affect both providers and deployers of AI systems; the European Commission cites examples of how it would apply to a developer of a CV screening tool, and to a bank that buys that tool. Now, all of these parties have a legal framework that sets the stage for their use of AI.

As usual with EU legislation, the EU AI Act exists to make sure there is a uniform legal framework applying to a certain topic across EU countries -- the topic this time being AI. Now that the regulation is in place, it should "ensure the free movement, cross-border, of AI-based goods and services" without diverging local restrictions. With timely regulation, the EU seeks to create a level playing field across the region and foster trust, which could also create opportunities for emerging companies.

However, the common framework it has adopted is not exactly permissive: despite the relatively early stage of widespread AI adoption in most sectors, the EU AI Act sets a high bar for what AI should and shouldn't do for society more broadly. According to European lawmakers, the framework's main goal is to "promote the uptake of human centric and trustworthy AI while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation."

Yes, that's quite a mouthful, but it's worth parsing carefully. First, because a lot will depend on how you define "human centric" and "trustworthy" AI. And second, because it gives a good sense of the precarious balance to maintain between diverging goals: innovation vs. harm prevention, as well as uptake of AI vs. environmental protection. As usual with EU legislation, again, the devil will be in the details.

To balance harm prevention against the potential benefits of AI, the EU AI Act adopted a risk-based approach: banning a handful of "unacceptable risk" use cases; flagging a set of "high-risk" uses calling for tight regulation; and applying lighter obligations to "limited risk" scenarios.

So is the Act already in force? Yes and no. The EU AI Act rollout started on August 1, 2024, but it will only come into force through a series of staggered compliance deadlines. In most cases, it will also apply sooner to new entrants than to companies that already offer AI products and services in the EU. The first deadline came into effect on February 2, 2025, and focused on enforcing bans on a small number of prohibited uses of AI, such as untargeted scraping of the internet or CCTV footage for facial images to build up or expand databases. Many others will follow, but unless the schedule changes, most provisions will apply by mid-2026.

Since August 2, 2025, the EU AI Act applies to "general-purpose AI models with systemic risk." General-purpose AI (GPAI) models are AI models trained with a large amount of data that can be used for a wide range of tasks. That's where the risk element comes in.
According to the EU AI Act, GPAI models can come with systemic risks, "for example, through the lowering of barriers for chemical or biological weapons development, or unintended issues of control over autonomous [GPAI] models." Ahead of the deadline, the EU published guidelines for providers of GPAI models, which include both European companies and non-European players such as Anthropic, Google, Meta, and OpenAI. But since these companies already have models on the market, they will have until August 2, 2027, to comply, unlike new entrants.

The EU AI Act comes with penalties that lawmakers wanted to be simultaneously "effective, proportionate and dissuasive" -- even for large global players. Details will be laid down by EU countries, but the regulation sets out the overall spirit -- that penalties will vary depending on the deemed risk level -- as well as thresholds for each level. Infringement of the ban on prohibited AI applications draws the highest penalty: "up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher)." The European Commission can also impose fines of up to €15 million or 3% of annual turnover on providers of GPAI models.

The voluntary GPAI code of practice, including commitments such as not training models on pirated content, is a good indicator of how companies may engage with the framework before compliance becomes mandatory. In July 2025, Meta announced it wouldn't sign the voluntary GPAI code of practice meant to help such providers comply with the EU AI Act. However, Google soon after confirmed it would sign, despite reservations. Signatories so far include Aleph Alpha, Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, and OpenAI, among others.

But as we have seen with Google's example, signing does not equal a full-on endorsement. While stating in a blog post that Google would sign the voluntary GPAI code of practice, its president of global affairs, Kent Walker, still had reservations. "We remain concerned that the AI Act and Code risk slowing Europe's development and deployment of AI," he wrote. Meta was more radical, with its chief global affairs officer Joel Kaplan stating in a post on LinkedIn that "Europe is heading down the wrong path on AI." Calling the EU's implementation of the AI Act "overreach," he stated that the code of practice "introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act."

European companies have expressed concerns as well. Arthur Mensch, the CEO of French AI champion Mistral AI, was part of a group of European CEOs who signed an open letter in July 2025 urging Brussels to "stop the clock" for two years before key obligations of the EU AI Act came into force. In early July 2025, the European Union rejected these lobbying efforts calling for a pause, saying it would stick to its timeline for implementing the EU AI Act. It went ahead with the August 2, 2025, deadline as planned, and we will update this story if anything changes.
[2]
The Clock Didn't Stop: The EU AI Act Will Reshape Your AI Strategy -- Now
After rejecting the "stop the clock" lobbying efforts from the tech industry, the EU is moving forward as planned with the next phase of the EU AI Act. If your company operates AI systems in the EU or uses AI-generated insights on the EU market, you need to pay close attention -- especially to the rules concerning general-purpose AI (GPAI) providers. This includes generative AI (genAI) models, for which providers are directly accountable. But the impact doesn't stop there. Any organization using genAI -- whether through direct purchase or embedded in other technologies -- will likely face ripple effects across their value chains and third-party risk management programs.

Despite speculation about possible delays, the EU has held firm on its timeline and released a range of tools to help companies prepare. Every company, not only GPAI providers, must be familiar with:

* EU guidelines on the scope of general-purpose AI (GPAI) providers' requirements. These define key terms -- such as what qualifies as a "general-purpose AI model" -- and introduce a training-compute threshold as a practical benchmark. They are very useful for every company in clarifying critical concepts of the regulation, such as which significant modifications trigger provider obligations and how to interpret the meaning of "general-purpose" AI. Developed through extensive consultation, the guidelines are not legally binding but reflect the European Commission's enforcement interpretation and are intended to guide providers in preparing for regulatory obligations.

* The EU Code of Practice for General-Purpose AI (GPAI) Providers. This is a voluntary framework designed to help companies align with the upcoming requirements of the EU AI Act ahead of formal enforcement. The Code outlines practical steps GPAI providers can take to improve transparency, safety, and accountability in their AI systems. It includes guidance on model documentation, risk mitigation, and responsible deployment practices. Major AI companies like OpenAI, Mistral, and Anthropic have already signed on, signaling growing industry support for trustworthy and harmonized AI governance in the EU. For companies that use GPAI models and systems, the code of practice is useful in guiding updates to their third-party risk management frameworks for GPAI providers.

* The template for transparency of training data for general-purpose AI providers. This is a mandatory template requiring all GPAI providers to publish a public summary of the main sources of data used to train their models. This summary must cover training content across all stages -- from pre-training to fine-tuning -- and include types of data such as public and private datasets, web-scraped content, and user-generated and synthetic data. Companies using GPAI must obtain these summaries via providers' websites and distribution channels, and should expect them to be updated within six months at the latest whenever the provider trains on substantial new datasets.

The EU AI Act isn't just a regional regulation -- it's the only binding global framework for trustworthy AI. Whether you like it or not, it's set to influence AI governance, risk management, and compliance practices around the world. And while the Act isn't perfect, it offers practical steps toward building more responsible AI systems -- including stronger data governance, privacy, security, and risk oversight. At the heart of this is the Act's AI risk pyramid, which gives companies a structured way to evaluate and mitigate the risks of their AI use cases, as sketched below.
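To make the risk pyramid concrete, here is a minimal sketch of how a compliance team might triage AI use cases into the Act's four tiers. The tier names follow the Act; the example use-case mapping and all function names are our own illustrative assumptions, not an official taxonomy, and any real triage must follow the Act's annexes and the Commission's guidelines.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5): may not be placed on the EU market"
    HIGH = "high risk: conformity assessment, documentation, human oversight"
    LIMITED = "limited risk: transparency obligations (e.g. disclose AI use)"
    MINIMAL = "minimal risk: no specific obligations under the Act"

# Illustrative mapping only -- a hard-coded lookup is no substitute for
# a legal assessment against the Act's annexes.
EXAMPLE_TRIAGE = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "untargeted facial image scraping": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "biometric identification": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case; unknown cases go to
    manual legal review rather than defaulting to 'minimal'."""
    try:
        return EXAMPLE_TRIAGE[use_case.lower()]
    except KeyError:
        raise ValueError(f"'{use_case}' needs manual legal review")

if __name__ == "__main__":
    for case in ("CV screening for hiring", "spam filtering"):
        print(f"{case}: {triage(case).value}")
```

The design point is the default: an inventory script that silently classifies unknown use cases as minimal risk would invert the Act's precautionary logic, so unmapped cases should always fall through to human review.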
If you have any questions about compliance readiness and best practices, what the EU AI Act means for your AI strategy, and how to use it to build trustworthy AI, schedule a guidance session with me. And follow my research, as new reports on software offerings designed to help companies meet the requirements of AI regulations are on the way!
[3]
Is Europe ready to police AI? Supervision and sanctions start soon
A range of provisions under the EU's AI rulebook will enter into force, dealing with national oversight, penalties and general-purpose AI models. Significant changes in terms of oversight and penalties are around the corner for AI suppliers in Europe as new provisions of the EU's AI Act enter into force from 2 August. Here's what will change this month regarding the EU's rulebook on AI, which has been in force exactly one year this month but which has been implemented gradually.

National oversight

On 2 August, member states will have to notify the European Commission about which market surveillance authorities they appoint to oversee businesses' compliance with the AI Act. That means that providers of AI systems will face scrutiny as of then. Euronews reported in May that with just three months to go until the early August deadline, it remained unclear in at least half of the member states which authority would be nominated. The EU executive did not want to comment back in March on which countries were ready, but expectations are that member states that recently went through elections will be delayed in setting up these regulators. According to a Commission official, some notifications have now been received, and they are under consideration.

Laura Lazaro Cabrera, Programme Director for Equity and Data at the Centre for Democracy and Technology, told Euronews that many member states are set to miss the 2 August deadline to appoint their regulators. She said it's "crucial" that national authorities are appointed as soon as possible, and "that they are competent and properly resourced to oversee the broad range of risks posed by AI systems, including those to fundamental rights."

Artur Bogucki, an associate researcher at the Centre for European Policy Studies (CEPS), echoed the likely delays. "This isn't surprising when you consider the sheer complexity of what's required. Countries need to establish market surveillance authorities, set up notifying bodies, define sanction regimes, and somehow find staff with expertise spanning AI, data computing, cybersecurity, fundamental rights, and sector-specific knowledge. That's a tall order in today's competitive tech talent market," he said.

Bogucki said it doesn't stop there, because it remains to be seen how multiple bodies at both EU and national levels will coordinate. "This complexity becomes even more challenging when you consider how the AI Act must interact with existing regulations like GDPR, the Digital Services Act, and the Digital Markets Act. We're already seeing potential for overlaps and conflicts, reminiscent of how different data protection authorities across Europe have taken divergent approaches to regulating tech companies," he said.

Penalties

Also entering into force are provisions enabling penalties. Companies may be fined up to €35 million for breaches of the AI Act, or up to 7% of total worldwide annual turnover, whichever is higher. EU countries will need to adopt implementing laws that set out penalties for breaches and empower their authorities. For smaller companies, lower fines will apply. The AI Act sets a ceiling, not a floor, for fines.

According to Lazaro Cabrera, there is likely going to be "significant variability on how member states choose to fine their public authorities for non-compliance of the AI Act, if at all." She said that while there will be some divergence in how member states set the level of fines applicable, "forum-shopping in this context has its limits."
"Ultimately market surveillance authorities have jurisdiction to act in connection to any product entering the EU market as a whole, and fines are only one of many tools at their disposal," she said. Bogucki said that the governance structure also needs to grapple with questions about prohibited AI practices, for example when it comes to biometric identification. "Different member states may have very different political appetites for enforcement in these areas, and without strong coordination mechanisms at the EU level, we could see the same fragmentation that has plagued GDPR enforcement," he said. GPAI Lastly, the rules on general purpose AI systems - which include large language models such as X's Grok, Google's Gemini, and OpenAI's ChatGPT - will enter into force. In July the Commission released a much-debated Code of Practice on GPAI. This voluntary set of rules that touches on transparency, copyright, and safety and security issues, aims to help providers of GPAI models comply with the AI Act. The Commission has recently said that those who don't sign can expect more scrutiny, whereas signatories are deemed compliant with the AI Act. But companies that sign the code will still need to comply with the AI rulebook. US tech giant Meta said last week that it will not sign, having slammed the rules for stifling innovation, others like Google and OpenAI said they will sign up. To make things more complicated, all the products that were placed on the market before 2 August have a two-year period to implement the rules, and all new tools launched after that date have to comply straight away. The EU AI Act continues to roll out in phases, each with new obligations for providers and deployers. Two years from now, on 2 August 2027, the AI Act will be applicable in full.
[4]
What does EU's general purpose AI code mean for businesses?
'Make no mistake, there will be action in the next few months,' warns Forrester analyst Enza Iannopollo.

Tomorrow (2 August), the European Union's AI Act rules on general-purpose AI will come into effect. To help industry comply with the new rules, the EU has developed the General-Purpose Artificial Intelligence (GPAI) Code of Practice. This voluntary tool is designed to help the industry comply with the AI Act's obligations when it comes to models with wide-ranging capabilities that can complete a variety of tasks and be implemented in different systems or for different applications. Examples include commonly used AI models such as ChatGPT, Gemini or Claude.

The code sets out rules regarding copyright and transparency, with certain advanced models deemed to have "systemic risk" facing additional voluntary obligations surrounding safety and security. Signatories have committed to respect any restriction of access to data used to train their models, such as those imposed by subscription models or paywalls. They also commit to implement technical safeguards that prevent their models from generating outputs reproducing content protected under EU copyright law. The signatories, which include the likes of Anthropic, OpenAI, Google, Amazon and IBM, are also required to draw up and implement a copyright policy that complies with EU law. The Elon Musk-owned xAI has also signed the GPAI Code, although only the section that applies to safety and security.

The GPAI Code asks that signatories continuously assess and mitigate systemic risks associated with AI models and take appropriate risk management measures throughout the model's life cycle. They are also asked to report serious incidents to the EU. In addition, companies will be required to publicly disclose information on new AI models at launch, as well as give it to the EU AI Office, relevant national authorities and those who integrate the models in their systems upon request.

"Providers of generative AI (GenAI) models are directly responsible for meeting these new rules, however it's worth noting that any company using GenAI models and systems - those directly purchased from GenAI providers or embedded in other technologies - will feel the impact of these requirements on their value chain and on their third-party risk management practices," said Forrester VP principal analyst Enza Iannopollo.

However, even as this regulation expands accountability and enforcement around general-purpose AI models, many copyright holders in the region have expressed their dissatisfaction. In a statement, 40 signatories -- including news publications, artist collectives, translators, and TV and film producers, among others -- say that the GPAI Code "does not deliver on the promise of the EU AI Act itself." Representing the coalition, the European Writers' Council said that the code is a "missed opportunity to provide meaningful protection of intellectual property" when it comes to AI. "We strongly reject any claim that the Code of Practice strikes a fair and workable balance. This is simply untrue and is a betrayal of the EU AI Act's objectives."

Still, many believe the EU's AI regulations are perhaps the most robust anywhere in the world and are set to shape risk management and governance practices for most global companies. "Its requirements may not be perfect, but they are the only binding set of rules on AI with global reach, and it represents the only realistic option for trustworthy AI and responsible innovation," said Iannopollo.
The AI Act came into force last August, with the region enforcing its first set of obligations on banned practices six months later, in February. And aside from the GPAI Code, tomorrow also marks the deadline for EU member states to designate "national competent authorities" which will oversee the application of the Act and carry out market surveillance activities.

The penalties for non-compliance under this Act are high, reaching up to 7pc of a company's global turnover, meaning companies will need to start paying attention. "Companies, make no mistake, there will be action in the next few months," warned Iannopollo.

"The EU AI Act's 2 August deadline sets a clear precedent and will trickle downstream. Enterprises must be ready to demonstrate that they are using AI in line with responsible practices, even if they're not yet legally required to do so," said Levent Ergin, the chief climate, sustainability and AI strategist at Informatica. "This is the first true test of AI supply chain transparency. If you can't show where your data came from or how your model reasoned, your organisation's data is not ready for AI."
[5]
EU AI Act takes effect for GPAI providers August 2
Beginning August 2, 2025, entities providing general-purpose artificial intelligence (GPAI) models within the European Union must adhere to specific stipulations outlined in the EU AI Act, including maintaining current technical documentation and training data summaries.

The EU AI Act is a comprehensive legislative framework designed to establish standards for the ethical and safe development and deployment of AI technologies. This regulation adopts a risk-based approach, categorizing AI systems based on their potential risks and impact on individuals and society within the European Union. Although specific requirements for GPAI model providers become enforceable on August 2, 2025, a one-year grace period has been established, allowing companies to achieve full compliance without facing penalties until August 2, 2026. This grace period is intended to facilitate a smooth transition to the new regulatory landscape.

Providers of GPAI models must be cognizant of and adhere to five key sets of regulations effective August 2, 2025. These encompass various aspects of AI governance, assessment and penalties.

The first set of rules pertains to Notified Bodies, as stipulated in Chapter III, Section 4 of the EU AI Act. Providers of high-risk GPAI models must prepare to engage with these bodies for conformity assessments and understand the regulatory framework governing these evaluations. Notified Bodies are designated organizations responsible for assessing the conformity of specific products or services with applicable EU regulations.

The second set of rules, detailed in Chapter V of the Act, specifically addresses GPAI models. This section outlines the requirements for technical documentation, training data summaries, and transparency measures that GPAI model providers must implement.

The third set of rules, found in Chapter VII, concerns governance. This section defines the governance and enforcement architecture at both the EU and national levels. It mandates cooperation with the EU AI Office, the European AI Board, the Scientific Panel, and national authorities in fulfilling compliance obligations, responding to oversight requests, and participating in risk monitoring and incident reporting processes.

The fourth set of rules, outlined in Article 78, focuses on confidentiality. All data requests made by authorities to GPAI model providers must be legally justified, securely handled, and subject to confidentiality protections, especially concerning intellectual property, trade secrets, and source code. This ensures the protection of sensitive business information during regulatory oversight.

The final set of rules, found in Articles 99 and 100, specifies penalties for non-compliance. These penalties are designed to ensure adherence to the AI Act's provisions and can be substantial.

High-risk AI systems are defined as those that present a significant threat to health, safety, or fundamental rights. These systems fall into two main groups: first, those used as safety components of products governed by EU product safety laws; second, those deployed in sensitive use cases, including biometric identification, critical infrastructure management, education, employment and HR, and law enforcement.

GPAI models, which can be applied across multiple domains, are considered to pose "systemic risk" if the cumulative compute used to train them exceeds 10^25 floating-point operations (FLOPs) and they are designated as such by the EU AI Office.
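For intuition about the scale of the 10^25 FLOP threshold, here is a minimal back-of-the-envelope sketch using the common 6ND approximation from the scaling-laws literature (training compute ≈ 6 × parameters × training tokens). Both the heuristic and the example model sizes are our own illustrative assumptions; the Act and the Commission's guidelines, not this formula, govern how training compute is actually assessed.

```python
# Rough estimate of training compute against the EU AI Act's systemic-risk
# threshold of 1e25 FLOPs, using the 6*N*D rule of thumb (forward plus
# backward pass costs roughly 6 FLOPs per parameter per token).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training FLOPs, per the Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6ND heuristic."""
    return 6.0 * n_params * n_tokens

def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model configurations, not any vendor's actual figures.
    examples = [
        ("7B params, 2T tokens", 7e9, 2e12),
        ("70B params, 15T tokens", 70e9, 15e12),
        ("400B params, 30T tokens", 400e9, 30e12),
    ]
    for label, n, d in examples:
        flops = estimated_training_flops(n, d)
        flag = "above" if exceeds_threshold(n, d) else "below"
        print(f"{label}: ~{flops:.2e} FLOPs -> {flag} the 1e25 threshold")
```

Under this heuristic only a frontier-scale run (hundreds of billions of parameters on tens of trillions of tokens) clears 10^25 FLOPs, which matches the Act's intent of reserving the systemic-risk tier for the largest models.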
Prominent examples of GPAI models that meet the systemic-risk criteria include OpenAI's ChatGPT, Meta's Llama, and Google's Gemini.

All providers of GPAI models are required to maintain comprehensive technical documentation, a training data summary, a copyright compliance policy, guidance for downstream deployers, and transparency measures regarding capabilities, limitations, and intended use. This documentation serves to provide clarity and accountability in the development and deployment of AI systems.

Providers of GPAI models that pose systemic risk face additional requirements. They must conduct model evaluations, report incidents, implement risk mitigation strategies and cybersecurity safeguards, disclose energy usage, and carry out post-market monitoring. These measures aim to address the heightened risks associated with more powerful and widely used AI models.

Regarding penalties, providers of GPAI models may face fines of up to €35,000,000 or 7% of their total worldwide annual turnover, whichever is higher, for non-compliance with prohibited AI practices as defined under Article 5. These practices include manipulating human behavior, social scoring, facial recognition data scraping, and real-time biometric identification in public spaces. Other breaches of regulatory obligations, such as those related to transparency, risk management, or deployment responsibilities, can result in fines of up to €15,000,000 or 3% of turnover. These penalties are designed to ensure adherence to the broader requirements of the AI Act. Supplying misleading or incomplete information to authorities can lead to fines of up to €7,500,000 or 1% of turnover. This provision underscores the importance of accurate and transparent communication with regulatory bodies.

For small and medium-sized enterprises (SMEs) and startups, the lower of the fixed amount or percentage applies when calculating penalties. The severity of the breach, its impact, the provider's cooperation, and whether the violation was intentional or negligent are all considered when determining the appropriate penalty.

To facilitate compliance, the European Commission published the AI Code of Practice, a voluntary framework that tech companies can adopt to implement and adhere to the AI Act. Companies such as Google, OpenAI, and Anthropic have committed to it, while Meta has publicly refused to sign in protest of the legislation in its current form. The Commission plans to publish supplementary guidelines with the AI Code of Practice before August 2, 2025, which will clarify which companies qualify as providers of general-purpose AI models and of general-purpose AI models with systemic risk.

The EU AI Act was officially published in the EU's Official Journal on July 12, 2024, and took effect on August 1, 2024. However, the implementation of its various provisions is phased in over several years. Finally, by December 31, 2030, AI systems that are components of specific large-scale EU IT systems and were placed on the market before August 2, 2027, must be brought into compliance. This marks the final deadline for achieving widespread compliance across various sectors and applications.

Despite these phased implementation dates, a group representing Apple, Google, Meta, and other companies urged regulators to postpone the Act's implementation by at least two years.
This request was ultimately rejected by the EU, underscoring the commitment to the established timeline.
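To make the tiered penalty structure described above concrete, here is a minimal sketch of the fine ceilings as a calculation: the general rule takes the higher of the fixed amount and the turnover percentage, while for SMEs and startups the lower of the two applies. The tier table reflects the figures from Articles 99 and 100 as summarized above; the function names are our own, and actual fines are set case by case by regulators, not by a formula.

```python
# Illustrative calculator for the EU AI Act's maximum fine ceilings.
# Tiers per the summary above: prohibited practices, other regulatory
# obligations, and misleading information supplied to authorities.

PENALTY_TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),  # €35M or 7% of turnover
    "other_obligations":      (15_000_000, 0.03),  # €15M or 3% of turnover
    "misleading_information": (7_500_000, 0.01),   # €7.5M or 1% of turnover
}

def max_fine_eur(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling for a given breach tier.

    General rule: whichever of the fixed amount and the turnover share
    is HIGHER. For SMEs and startups: whichever is LOWER.
    """
    fixed, pct = PENALTY_TIERS[tier]
    turnover_share = pct * annual_turnover_eur
    return min(fixed, turnover_share) if is_sme else max(fixed, turnover_share)

if __name__ == "__main__":
    # A large provider (€10B turnover) vs. a startup (€5M turnover).
    print(max_fine_eur("prohibited_practices", 10e9))              # 700000000.0
    print(max_fine_eur("prohibited_practices", 5e6, is_sme=True))  # 350000.0
```

The contrast in the two example calls shows why the SME carve-out matters: the same breach caps at €700 million for a €10 billion-turnover provider but at €350,000 for a small startup.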
The EU AI Act, described as the world's first comprehensive AI law, is progressively becoming a reality with new provisions coming into force on August 2, 2025. This article explores the key aspects of the Act, its implications for AI providers and users, and the global impact of this landmark regulation.
The European Union's Artificial Intelligence Act (EU AI Act), described as "the world's first comprehensive AI law," is progressively becoming a reality for the 450 million people living in the 27 EU countries [1]. As of August 2, 2025, significant provisions of the Act will come into force, particularly affecting providers of General Purpose AI (GPAI) models [2][3].
The EU AI Act adopts a risk-based approach, categorizing AI systems based on their potential risks and impact on individuals and society [5]. Key provisions include:
National Oversight: EU member states must notify the European Commission about appointed market surveillance authorities to oversee businesses' compliance with the AI Act [3].
Penalties: Companies may face fines of up to €35 million or 7% of total worldwide annual turnover for breaches of the AI Act, whichever is higher [3][5].
GPAI Regulations: Rules on general purpose AI systems, including large language models, will enter into force [3][5].
GPAI model providers must adhere to specific stipulations, including:
Technical Documentation: Maintaining current technical documentation on their models [5].
Training Data Summary: Publishing a public summary of the content used for training, following the Commission's mandatory template [2][5].
Copyright Policy: Implementing a copyright compliance policy and providing guidance for downstream deployers [5].
Transparency Measures: Disclosing model capabilities, limitations, and intended use [5].
For GPAI models deemed to pose "systemic risk," additional requirements include conducting model evaluations, reporting incidents, implementing risk mitigation strategies, and disclosing energy usage [5].
The EU AI Act isn't just a regional regulation; it's set to influence AI governance, risk management, and compliance practices worldwide [2]. Major tech companies have responded differently to the Act's voluntary Code of Practice for GPAI Providers:
Meta: Declined to sign, calling the EU's implementation of the Act "overreach" [1].
Google: Signed despite reservations that the Act and Code "risk slowing Europe's development and deployment of AI" [1].
Other Signatories: Aleph Alpha, Amazon, Anthropic, Cohere, IBM, Microsoft, Mistral AI, and OpenAI, among others; xAI signed only the safety and security section [1][4].
The implementation of the EU AI Act faces several challenges:
Delayed National Regulators: Many member states are set to miss the 2 August deadline to appoint market surveillance authorities [3].
Talent and Resourcing: National authorities need staff with expertise spanning AI, cybersecurity, fundamental rights, and sector-specific knowledge [3].
Regulatory Overlap: The Act must interact with existing rules such as the GDPR, the Digital Services Act, and the Digital Markets Act, creating potential for conflicts and fragmented enforcement [3].
Companies operating in or serving the EU market must prepare for these new regulations:
Review the EU's GPAI guidelines and the voluntary Code of Practice to clarify which obligations apply [2].
Obtain providers' training data summaries and expect them to be updated as models are retrained on new datasets [2].
Update third-party risk management frameworks to cover GPAI models embedded in the value chain [2][4].
Be ready to demonstrate responsible AI practices even before they are legally required [4].
As the first binding set of rules on AI with global reach, the EU AI Act represents a significant step towards trustworthy AI and responsible innovation [4]. While not perfect, it offers practical steps toward building more responsible AI systems, including stronger data governance, privacy, security, and risk oversight [2].