Curated by THEOUTPOST
On Fri, 15 Nov, 8:03 AM UTC
3 Sources
[1]
EU AI Act: Everything you need to know
The European Union's risk-based rulebook for artificial intelligence -- aka the EU AI Act -- has been years in the making. But expect to hear a lot more about the regulation in the coming months (and years) as key compliance deadlines kick in. Meanwhile, read on for an overview of the law and its aims.

So what is the EU trying to achieve?

Dial back the clock to April 2021, when the Commission published the original proposal and lawmakers were framing it as a law to bolster the bloc's ability to innovate in AI by fostering trust among citizens. The framework would ensure AI technologies remained "human-centered" while also giving businesses clear rules to work their machine learning magic, the EU suggested.

Increasing adoption of automation across industry and society certainly has the potential to supercharge productivity in various domains. But it also poses risks of fast-scaling harms if outputs are poor and/or where AI intersects with individual rights and fails to respect them. The bloc's goal for the AI Act is therefore to drive uptake of AI and grow a local AI ecosystem by setting conditions that are intended to shrink the risks that things could go horribly wrong. Lawmakers think that having guardrails in place will boost citizens' trust in and uptake of AI.

This ecosystem-fostering-through-trust idea was fairly uncontroversial back in the early part of the decade, when the law was being discussed and drafted. Objections were raised in some quarters, though, that it was simply too early to be regulating AI and that European innovation and competitiveness could suffer. Few would likely say it's too early now, of course, given how the technology has exploded into mainstream consciousness thanks to the boom in generative AI tools. But there are still objections that the law sandbags the prospects of homegrown AI entrepreneurs, despite the inclusion of support measures like regulatory sandboxes.

Even so, the big debate for many lawmakers is now around how to regulate AI, and with the AI Act the EU has set its course. The next years are all about the bloc executing on the plan.

What does the AI Act require?

Most uses of AI are not regulated under the AI Act at all, as they fall out of scope of the risk-based rules. (It's also worth noting that military uses of AI are entirely out of scope, as national security is a member-state, rather than EU-level, legal competence.)

For in-scope uses of AI, the Act's risk-based approach sets up a hierarchy where a handful of potential use cases (e.g., "harmful subliminal, manipulative and deceptive techniques" or "unacceptable social scoring") are framed as carrying "unacceptable risk" and are therefore banned. However, the list of banned uses is replete with exceptions, meaning even the law's small number of prohibitions carry plenty of caveats. For example, a ban on law enforcement using real-time remote biometric identification in publicly accessible spaces is not the blanket ban some parliamentarians and many civil society groups had pushed for, with exceptions allowing its use for certain crimes.

The next tier down from unacceptable risk/banned use is "high-risk" use cases -- such as AI apps used for critical infrastructure; law enforcement; education and vocational training; healthcare; and more -- where app makers must conduct conformity assessments prior to market deployment, and on an ongoing basis (such as when they make substantial updates to models).
This means the developer must be able to demonstrate that they are meeting the law's requirements in areas such as data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness. They must put in place quality and risk-management systems so they can demonstrate compliance if an enforcement authority comes knocking to do an audit. High-risk systems that are deployed by public bodies must also be registered in a public EU database.

There is also a third, "medium-risk" category, which applies transparency obligations to AI systems such as chatbots or other tools that can be used to produce synthetic media. Here the concern is that they could be used to manipulate people, so this type of tech requires that users are informed they are interacting with or viewing content produced by AI.

All other uses of AI are automatically considered low/minimal risk and aren't regulated. This means that, for example, using AI to sort and recommend social media content or target advertising doesn't carry any obligations under these rules. But the bloc encourages all AI developers to voluntarily follow best practices for boosting user trust.

This set of tiered, risk-based rules makes up the bulk of the AI Act. But there are also some dedicated requirements for the multifaceted models that underpin generative AI technologies -- which the AI Act refers to as "general purpose AI" models (or GPAIs).

This subset of AI technologies, which the industry sometimes calls "foundational models," typically sits upstream of many apps that implement artificial intelligence. Developers tap into APIs from GPAI providers to deploy these models' capabilities in their own software, often fine-tuned for a specific use case to add value. All of which is to say that GPAIs have quickly gained a powerful position in the market, with the potential to influence AI outcomes at a large scale.

GenAI has entered the chat ...

The rise of GenAI reshaped more than just the conversation around the EU's AI Act; it led to changes to the rulebook itself as the bloc's lengthy legislative process coincided with the hype around GenAI tools like ChatGPT. Lawmakers in the European parliament seized their chance to respond.

MEPs proposed additional rules for GPAIs -- that is, the models that underlie GenAI tools. These proposals, in turn, sharpened tech industry attention on what the EU was doing with the law, leading to some fierce lobbying for a carve-out for GPAIs.

French AI firm Mistral was one of the loudest voices, arguing that rules on model makers would hold back Europe's ability to compete against AI giants from the U.S. and China. OpenAI's Sam Altman also chipped in, suggesting in a side remark to journalists that the company might pull its tech out of Europe if laws proved too onerous, before hurriedly falling back to traditional flesh-pressing (lobbying) of regional power brokers after the EU called him out on the clumsy threat. Altman getting a crash course in European diplomacy has been one of the more visible side effects of the AI Act.

The upshot of all this noise was a white-knuckle ride to get the legislative process wrapped. It took months and a marathon final negotiating session between the European parliament, Council, and Commission to push the file over the line last year. The political agreement was clinched in December 2023, paving the way for adoption of the final text in May 2024. The EU has trumpeted the AI Act as a "global first."
But being first in this cutting-edge tech context means there's still a lot of detail to be worked out, such as setting the specific standards under which the law will apply and producing detailed compliance guidance (Codes of Practice) so the oversight and ecosystem-building regime the Act envisages can function. So, as far as assessing its success goes, the law remains a work in progress -- and will be for a long time.

For GPAIs, the AI Act continues the risk-based approach, with (only) lighter requirements for most of these models. For commercial GPAIs, this means transparency rules (including technical documentation requirements and disclosures around the use of copyrighted material used to train models). These provisions are intended to help downstream developers with their own AI Act compliance.

There's also a second tier -- for the most powerful (and potentially risky) GPAIs -- where the Act dials up obligations on model makers by requiring proactive risk assessment and risk mitigation for GPAIs with "systemic risk." Here the EU is concerned about very powerful AI models that might pose risks to human life, for example, or even risks that tech makers lose control over continued development of self-improving AIs.

Lawmakers elected to rely on a compute threshold for model training as a classifier for this systemic risk tier. GPAIs will fall into this bracket when the cumulative amount of compute used for their training, measured in floating point operations (FLOPs), is greater than 10^25. So far no models are thought to be in scope, but of course that could change as GenAI continues to develop. There is also some leeway for AI safety experts involved in oversight of the AI Act to flag concerns about systemic risks that may arise elsewhere. (For more on the governance structure the bloc has devised for the AI Act -- including the various roles of the AI Office -- see our earlier report.)

Mistral et al.'s lobbying did result in a watering down of the rules for GPAIs, with lighter requirements on open source providers, for example (lucky Mistral!). R&D also got a carve-out, meaning GPAIs that have not yet been commercialized fall out of scope of the Act entirely, without even transparency requirements applying.

A long march toward compliance

The AI Act officially entered into force across the EU on August 1, 2024. That date essentially fired a starting gun, as deadlines for complying with different components are set to hit at different intervals from early next year until around the middle of 2027.

Some of the main compliance deadlines are six months in from entry into force, when rules on prohibited use cases kick in; nine months in, when Codes of Practice start to apply; 12 months in for transparency and governance requirements; 24 months for other AI requirements, including obligations for some high-risk systems; and 36 months for other high-risk systems.

Part of the reason for this staggered approach is to give companies enough time to get their operations in order. But even more than that, it's clear that time is needed for regulators to work out what compliance looks like in this cutting-edge context. At the time of writing, the bloc is busy formulating guidance for various aspects of the law ahead of these deadlines, such as Codes of Practice for makers of GPAIs. The EU is also consulting on the law's definition of "AI systems" (i.e., which software will be in scope or out) and clarifications related to banned uses of AI.
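To make the systemic-risk compute threshold mentioned above a little more concrete, here is a minimal, purely illustrative Python sketch. It uses the common back-of-the-envelope approximation of roughly 6 FLOPs per parameter per training token to estimate cumulative training compute and compares the result to the 10^25 FLOP presumption; the approximation and the example model size are assumptions for illustration, not anything specified by the AI Act.

```python
# Rough illustration of the AI Act's systemic-risk compute presumption (10^25 FLOPs).
# The ~6 * parameters * tokens estimate is a common heuristic, not an official method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute presumption

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Back-of-the-envelope training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the rough estimate crosses the 10^25 FLOP presumption threshold."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model: 70B parameters trained on 15T tokens.
    flops = estimate_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")        # ~6.30e+24
    print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))  # False
```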
The full picture of what the AI Act will mean for in-scope companies is still being shaded in. But key details are expected to be locked down in the coming months and into the first half of next year.

One more thing to consider: as a consequence of the pace of development in the AI field, what's required to stay on the right side of the law will likely keep shifting as these technologies (and their associated risks) evolve. So this is one rulebook that may well need to remain a living document.

AI rules enforcement

Oversight of GPAIs is centralized at EU level, with the AI Office playing a key role. Penalties the Commission can impose to enforce these rules reach up to 3% of model makers' global turnover.

Elsewhere, enforcement of the Act's rules for AI systems is decentralized, meaning it will be down to member state-level authorities (plural, as there may be more than one oversight body designated) to assess and investigate compliance issues for the bulk of AI apps. How workable this structure will be remains to be seen.

On paper, penalties can reach up to 7% of global turnover (or €35 million, whichever is greater) for breaches of banned uses. Violations of other AI obligations can be sanctioned with fines of up to 3% of global turnover, or up to 1.5% for providing incorrect information to regulators. So there's a sliding scale of sanctions enforcement authorities can reach for.
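As a rough illustration of that sliding scale, the sketch below turns the figures cited above (7% or €35 million for banned uses, 3% for other obligations, 1.5% for incorrect information) into a penalty-ceiling calculation. It is a simplification: the Act's full penalty provisions include further conditions and fixed-amount alternatives that are not modeled here.

```python
# Illustrative only: the sliding scale of AI Act penalty ceilings described above.
# Uses just the percentages and the €35M floor cited in this article; actual fines
# are decided case by case under the Act's full penalty provisions.

def penalty_ceiling_eur(global_turnover_eur: float, violation: str) -> float:
    """Return the maximum possible fine for a given violation category."""
    if violation == "prohibited_use":
        # Up to €35 million or 7% of global annual turnover, whichever is greater.
        return max(35_000_000.0, 0.07 * global_turnover_eur)
    if violation == "other_obligation":
        return 0.03 * global_turnover_eur    # up to 3% of global turnover
    if violation == "incorrect_information":
        return 0.015 * global_turnover_eur   # up to 1.5% of global turnover
    raise ValueError(f"Unknown violation category: {violation}")

# Example: a company with €10 billion in global annual turnover.
print(penalty_ceiling_eur(10e9, "prohibited_use"))         # 700,000,000.0
print(penalty_ceiling_eur(10e9, "incorrect_information"))  # 150,000,000.0
```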
[2]
EU AI Act: Draft guidance for general purpose AIs shows first steps for Big AI to comply
A first draft of a Code of Practice that will apply to providers of general-purpose AI models under the European Union's AI Act has been published, alongside an invitation for feedback -- open until November 28 -- as the drafting process continues into next year, ahead of formal compliance deadlines kicking in over the coming years.

The pan-EU law, which came into force this summer, regulates applications of artificial intelligence under a risk-based framework. But it also targets some measures at more powerful foundational -- or general-purpose -- AI models (GPAIs). That is where this Code of Practice comes in.

Among those likely to be in the frame are OpenAI, maker of the GPT models that underpin the AI chatbot ChatGPT; Google, with its Gemini GPAIs; Meta, with Llama; Anthropic, with Claude; and others, such as France's Mistral. They will be expected to abide by the General-Purpose AI Code of Practice if they want to make sure they are complying with the AI Act and thus avoid the risk of enforcement for non-compliance.

To be clear, the Code is intended to provide guidance for meeting the EU AI Act's obligations. GPAI providers may choose to deviate from its best practice suggestions if they believe they can demonstrate compliance via other measures.

This first draft of the Code runs to 36 pages but is likely to get longer -- perhaps considerably so -- since, as the drafters warn, it is light on detail, being "a high-level drafting plan that outlines our guiding principles and objectives for the Code." The draft is peppered with box-outs asking "open questions" the working groups tasked with producing the Code have yet to resolve.

The feedback being sought -- from industry and civil society -- will clearly play a key role in shaping the substance of specific Sub-Measures and Key Performance Indicators (KPIs) that are yet to be included. But the document gives a sense of what's coming down the pipe (in terms of expectations) for GPAI makers once the relevant compliance deadlines apply.

Transparency requirements for makers of GPAIs are set to enter into force on August 1, 2025. But for the most powerful GPAIs -- those the law defines as having "systemic risk" -- the expectation is that they must abide by risk assessment and mitigation requirements 36 months after entry into force (or August 1, 2027).

There's a further caveat in that the draft Code has been devised on the assumption that there will only be "a small number" of GPAI makers and GPAIs with systemic risk. "Should that assumption prove wrong, future drafts may need to be changed significantly, for instance, by introducing a more detailed tiered system of measures aiming to focus primarily on those models that provide the largest systemic risks," the drafters warn.

Copyright

On the transparency front, the Code will set out how GPAIs must comply with information provisions, including in the area of copyrighted material. One example here is "Sub-Measure 5.2," which currently commits signatories to provide details of the name of all web crawlers used for developing the GPAI and their relevant robots.txt features, "including at the time of crawling."

GPAI model makers continue to face questions over how they acquired data to train their models, with multiple lawsuits filed by rights holders alleging AI firms unlawfully processed copyrighted information.
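As a purely hypothetical illustration of the kind of crawler/robots.txt record Sub-Measure 5.2 seems to contemplate, the sketch below uses Python's standard urllib.robotparser to check and log whether a made-up crawler was permitted to fetch a page. The crawler name ("ExampleResearchBot"), the URLs, and the record fields are all assumptions for demonstration, not anything prescribed by the draft Code.

```python
# Hypothetical illustration of a crawler/robots.txt record a GPAI provider might keep
# to support the draft Code's transparency measures. "ExampleResearchBot" and the
# target URL are placeholders, not real crawlers or real training sources.
from urllib import robotparser
from datetime import datetime, timezone

CRAWLER_NAME = "ExampleResearchBot"          # hypothetical crawler user-agent
TARGET = "https://example.com/articles/1"    # page the crawler wants to fetch

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

record = {
    "crawler": CRAWLER_NAME,
    "robots_txt_url": "https://example.com/robots.txt",
    "allowed": rp.can_fetch(CRAWLER_NAME, TARGET),  # did robots.txt permit this fetch?
    "crawl_delay": rp.crawl_delay(CRAWLER_NAME),    # any crawl-delay directive observed
    "checked_at": datetime.now(timezone.utc).isoformat(),
}
print(record)
```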
Another commitment set out in the draft Code requires GPAI providers to have a single point of contact and complaint handling, to make it easier for rights holders to communicate grievances "directly and rapidly."

Other proposed measures related to copyright cover documentation that GPAIs will be expected to provide about the data sources used for "training, testing and validation and about authorisations to access and use protected content for the development of a general-purpose AI."

Systemic risk

The most powerful GPAIs are also subject to rules in the EU AI Act that aim to mitigate so-called "systemic risk." These AI systems are currently defined as models that have been trained using a total computing power of more than 10^25 FLOPs.

The Code contains a list of risk types that signatories will be expected to treat as systemic risks. They include risks such as cyber offences, widespread discrimination, and loss of control over AI. This version of the Code also suggests that GPAI makers could identify other types of systemic risks that are not explicitly listed, too -- such as "large-scale" privacy infringements and surveillance, or uses that might pose risks to public health. One of the open questions the document poses here asks which risks should be prioritised for addition to the main taxonomy. Another asks how the taxonomy of systemic risks should address deepfake risks (related to AI-generated child sexual abuse material and non-consensual intimate imagery).

The Code also seeks to provide guidance around identifying key attributes that could lead to models creating systemic risks, such as "dangerous model capabilities" (e.g., cyber offensive or "weapon acquisition or proliferation capabilities") and "dangerous model propensities" (e.g., being misaligned with human intent and/or values; having a tendency to deceive; bias; confabulation; lack of reliability and security; and resistance to goal modification).

While much detail still remains to be filled in as the drafting process continues, the authors of the Code write that its measures, sub-measures, and KPIs should be "proportionate," with a particular focus on "tailoring to the size and capacity of a specific provider, particularly SMEs and start-ups with less financial resources than those at the frontier of AI development." Attention should also be paid to "different distribution strategies (e.g. open-sourcing), where appropriate, reflecting the principle of proportionality and taking into account both benefits and risks," they add. Many of the open questions the draft poses concern how specific measures should be applied to open-source models.

Safety and security in the frame

Another measure in the Code concerns a "Safety and Security Framework" (SSF). GPAI makers will be expected to detail their risk management policies and "continuously and thoroughly" identify systemic risks that could arise from their GPAI.

Here there's an interesting sub-measure on "Forecasting risks." This would commit signatories to include in their SSF "best effort estimates" of timelines for when they expect to develop a model that triggers systemic risk indicators -- such as the aforementioned dangerous model capabilities and propensities. It could mean that, starting in 2027, we'll see cutting-edge AI developers putting out timeframes for when they expect model development to cross certain risk thresholds.

Elsewhere, the draft Code puts a focus on GPAIs with systemic risk using "best-in-class evaluations" of their models' capabilities and limitations, and applying "a range of suitable methodologies" to do so.
Listed examples include: Q&A sets, benchmarks, red-teaming and other methods of adversarial testing, human uplift studies, model organisms, simulations, and proxy evaluations for classified materials.

Another sub-measure on "substantial systemic risk notification" would commit signatories to notify the AI Office, an oversight and steering body established under the Act, "if they have strong reason to believe substantial systemic risk might materialise."

The Code also sets out measures on "serious incident reporting." "Signatories commit to identify and keep track of serious incidents, as far as they originate from their general-purpose AI models with systemic risk, document and report, without undue delay, any relevant information and possible corrective measures to the AI Office and, as appropriate, to national competent authorities," it reads -- although an associated open question asks for input on "what does a serious incident entail." So there looks to be more work to be done here on nailing down definitions.

The draft Code includes further questions on "possible corrective measures" that could be taken in response to serious incidents. It also asks "what serious incident response processes are appropriate for open weight or open-source providers?", among other feedback-seeking formulations.

"This first draft of the Code is the result of a preliminary review of existing best practices by the four specialised Working Groups, stakeholder consultation input from nearly 430 submissions, responses from the provider workshop, international approaches (including the G7 Code of Conduct, the Frontier AI Safety Commitments, the Bletchley Declaration, and outputs from relevant government and standard-setting bodies), and, most importantly, the AI Act itself," the drafters go on to say in conclusion.

"We emphasise that this is only a first draft and consequently the suggestions in the draft Code are provisional and subject to change," they add. "Therefore, we invite your constructive input as we further develop and update the contents of the Code and work towards a more granular final form for May 1, 2025."
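The serious-incident reporting commitment quoted above implies that providers will need some internal way of tracking incidents and what was reported to the AI Office. Since "what a serious incident entails" is itself still an open question in the draft, the following is only a hypothetical sketch of such a record; every field name and the workflow shown are assumptions, not anything the Code prescribes.

```python
# Hypothetical internal record for the draft Code's serious-incident reporting commitment.
# Neither the AI Act nor the draft Code prescribes this structure; all fields are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class SeriousIncidentRecord:
    incident_id: str
    model_name: str                          # the GPAI with systemic risk involved
    detected_at: datetime
    description: str                         # what happened, as far as it is known
    systemic_risk_type: str                  # e.g., a category from the Code's risk taxonomy
    corrective_measures: list[str] = field(default_factory=list)
    reported_to_ai_office_at: Optional[datetime] = None
    reported_to_national_authority: bool = False

    def mark_reported(self, when: datetime) -> None:
        """Record the 'without undue delay' notification to the AI Office."""
        self.reported_to_ai_office_at = when
```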
[3]
The EU publishes the first draft of regulatory guidance for general purpose AI models
The AI Act guidelines cover transparency, copyright and risk assessment, along with technical and governance risk mitigation.

On Thursday, the European Union published its first draft of a Code of Practice for general purpose AI (GPAI) models. The document, which won't be finalized until May, lays out guidelines for managing risks and gives companies a blueprint for complying and avoiding hefty penalties. The EU's AI Act came into force on August 1, but it left the specifics of GPAI regulation to be nailed down later. This draft (via TechCrunch) is the first attempt to clarify what's expected of those more advanced models, giving stakeholders time to submit feedback and refine the rules before they kick in.

GPAI models presumed to pose systemic risk are those trained with a total computing power of over 10²⁵ FLOPs. Companies expected to fall under the EU's guidelines include OpenAI, Google, Meta, Anthropic and Mistral. But that list could grow.

The document addresses several core areas for GPAI makers: transparency, copyright compliance, risk assessment and technical / governance risk mitigation. This 36-page draft covers a lot of ground (and will likely balloon much more before it's finalized), but several highlights stand out.

The code emphasizes transparency in AI development and requires AI companies to provide information about the web crawlers they used to train their models -- a key concern for copyright holders and creators. The risk assessment section aims to prevent cyber offenses, widespread discrimination and loss of control over AI (the "it's gone rogue" sentient moment in a million bad sci-fi movies).

AI makers are expected to adopt a Safety and Security Framework (SSF) to break down their risk management policies and mitigate risks in proportion to their systemic severity. The rules also cover technical areas like protecting model data, providing failsafe access controls and continually reassessing their effectiveness. Finally, the governance section strives for accountability within the companies themselves, requiring ongoing risk assessment and bringing in outside experts where needed.

Like the EU's other tech-related regulations, companies that run afoul of the AI Act can expect steep penalties. They can be fined up to €35 million (currently $36.8 million) or up to seven percent of their global annual turnover, whichever is higher. Stakeholders are invited to submit feedback through the dedicated Futurium platform by November 28 to help refine the next draft. The rules are expected to be finalized by May 1, 2025.
The European Union's AI Act, a risk-based rulebook for artificial intelligence, is nearing implementation with the release of draft guidelines for general-purpose AI models. This landmark legislation aims to foster innovation while ensuring AI remains human-centered and trustworthy.
The European Union is on the brink of implementing its landmark AI Act, a comprehensive regulatory framework designed to govern the development and use of artificial intelligence across the bloc. This legislation, years in the making, aims to foster innovation while ensuring AI technologies remain "human-centered" and trustworthy 1.
The AI Act adopts a risk-based approach, categorizing AI applications into different risk levels: unacceptable risk (banned uses such as manipulative techniques and social scoring), high risk (subject to conformity assessments and ongoing obligations), medium risk (transparency obligations for chatbots and tools producing synthetic media), and minimal risk (largely unregulated) 1.
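For illustration only, here is a minimal sketch of how those tiers might be triaged in code, using example use cases drawn from the articles above; the mapping is an assumption for demonstration purposes, not an authoritative classification under the Act.

```python
# Illustrative only: a rough triage of example AI use cases against the Act's risk tiers,
# based on the examples given in this article. Real classification depends on the legal
# text and forthcoming guidance, not a lookup table like this.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned (with narrow exceptions)"
    HIGH = "conformity assessment + ongoing obligations"
    MEDIUM = "transparency obligations (disclose AI involvement)"
    MINIMAL = "no obligations; voluntary best practice encouraged"

EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI in critical infrastructure": RiskTier.HIGH,
    "AI in education or vocational training": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.MEDIUM,
    "synthetic media generator": RiskTier.MEDIUM,
    "social media feed ranking": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```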
The Act includes specific provisions for General Purpose AI (GPAI) models, recognizing their growing influence. A draft Code of Practice for GPAI providers has been published, outlining expectations in areas such as transparency, copyright compliance, and risk assessment 2.
Key compliance deadlines include: rules on prohibited uses six months after the Act's August 1, 2024 entry into force, Codes of Practice at nine months, transparency and governance requirements at 12 months, and obligations for high-risk systems at 24 and 36 months 1.
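As a rough sanity check on those offsets, the snippet below adds the stated month intervals to the August 1, 2024 entry-into-force date. The results are naive date arithmetic; the legally binding application dates are those set out in the Act itself and may differ slightly.

```python
# Rough arithmetic only: approximate compliance dates from the month offsets cited above,
# counted from the August 1, 2024 entry into force. The binding dates are in the AI Act
# itself and may differ slightly from this naive calculation.
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

OFFSETS_MONTHS = {
    "prohibited-use rules apply": 6,
    "Codes of Practice ready": 9,
    "GPAI transparency and governance rules": 12,
    "most other obligations, incl. some high-risk systems": 24,
    "remaining high-risk systems": 36,
}

for milestone, months in OFFSETS_MONTHS.items():
    print(f"~{ENTRY_INTO_FORCE + relativedelta(months=months)}: {milestone}")
```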
Non-compliance can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher 3.
The draft Code of Practice is open for stakeholder feedback until November 28, 2024, with the final version expected by May 1, 2025 3. This collaborative approach aims to refine the guidelines and ensure they are practical and effective for the rapidly evolving AI landscape.
While some concerns persist about potential impacts on European AI innovation, the EU maintains that the Act will boost citizen trust and AI adoption. The regulation seeks to strike a balance between fostering a thriving AI ecosystem and protecting individual rights and societal interests 1.
As the AI Act moves closer to full implementation, it is set to become a global benchmark for AI regulation, potentially influencing policy approaches worldwide and shaping the future of AI development and deployment.
The EU has released a new draft of the Code of Practice for General-Purpose AI, aiming to guide AI model makers in complying with the AI Act. The draft has sparked debates among industry stakeholders, highlighting the challenges of balancing innovation with regulation.
2 Sources
The European Union has begun enforcing the first phase of its AI Act, prohibiting AI systems deemed to pose "unacceptable risk." The EU has also issued guidelines to help companies comply with the new regulations.
12 Sources
Major technology companies are pushing for changes to the European Union's AI Act, aiming to reduce regulations on foundation models. This effort has sparked debate about balancing innovation with potential risks of AI technology.
9 Sources
The European Commission has selected a panel of 13 international experts to develop a code of practice for generative AI. This initiative aims to guide AI companies in complying with the EU's upcoming AI Act.
5 Sources
LatticeFlow, in collaboration with ETH Zurich and INSAIT, has developed the first comprehensive technical interpretation of the EU AI Act for evaluating Large Language Models (LLMs), revealing compliance gaps in popular AI models.
12 Sources