Curated by THEOUTPOST
On Wed, 12 Mar, 5:38 PM UTC
2 Sources
[1]
EU AI Act: Latest draft Code for AI model makers tiptoes towards gentler guidance for Big AI
Ahead of a May deadline to lock in guidance for providers of general purpose AI (GPAI) models on complying with provisions of the EU AI Act that apply to Big AI, a third draft of the Code of Practice was published on Tuesday. The Code has been in formulation since last year, and this draft is expected to be the last revision round before the guidelines are finalized in the coming months. A website has also been launched with the aim of boosting the Code's accessibility. Written feedback on the latest draft should be submitted by March 30, 2025.

The bloc's risk-based rulebook for AI includes a sub-set of obligations that apply only to the most powerful AI model makers -- covering areas such as transparency, copyright, and risk mitigation. The Code is aimed at helping GPAI model makers understand how to meet these legal obligations and avoid the risk of sanctions for non-compliance. AI Act penalties for breaches of the GPAI requirements, specifically, can reach up to 3% of global annual turnover.

Streamlined

The latest revision of the Code is billed as having "a more streamlined structure with refined commitments and measures" compared to earlier iterations, based on feedback on the second draft, which was published in December. Further feedback, working group discussions, and workshops will feed into the process of turning the third draft into final guidance. The experts say they hope to achieve greater "clarity and coherence" in the final adopted version of the Code.

The draft is broken down into a handful of sections covering commitments for GPAIs, along with detailed guidance on transparency and copyright measures. There is also a section on safety and security obligations, which apply only to the most powerful models -- those deemed to carry so-called systemic risk (GPAISR).
On transparency, the guidance includes an example of a model documentation form GPAIs might be expected to fill in to ensure that downstream deployers of their technology have access to key information for their own compliance.

Elsewhere, the copyright section likely remains the most immediately contentious area for Big AI. The current draft is replete with terms like "best efforts", "reasonable measures", and "appropriate measures" when it comes to commitments such as respecting rights requirements when crawling the web to acquire data for model training, or mitigating the risk of models churning out copyright-infringing outputs. The use of such qualified language suggests data-mining AI giants may feel they have plenty of wiggle room to carry on grabbing protected content to train their models and ask forgiveness later -- but it remains to be seen whether the language gets toughened up in the final version of the Code.

Language used in an earlier iteration of the Code -- saying GPAIs should provide a single point of contact and complaint handling to make it easier for rightsholders to communicate grievances "directly and rapidly" -- appears to have gone. Now there is merely a line stating: "Signatories will designate a point of contact for communication with affected rightsholders and provide easily accessible information about it."

The current text also suggests GPAIs may be able to refuse to act on copyright complaints by rightsholders if the complaints are "manifestly unfounded or excessive, in particular because of their repetitive character." That implies attempts by creatives to tip the scales by using AI tools to detect copyright issues and automate the filing of complaints against Big AI could simply result in them being ignored.
When it comes to safety and security, the EU AI Act's requirements to evaluate and mitigate systemic risks already apply only to a subset of the most powerful models (those trained using a total computing power of more than 10^25 FLOPs) -- and this latest draft sees some previously recommended measures further narrowed in response to feedback.

US pressure

Unmentioned in the EU press release about the latest draft are the blistering attacks on European lawmaking generally, and the bloc's rules for AI specifically, coming out of the U.S. administration led by President Donald Trump. At the Paris AI Action summit last month, U.S. Vice President JD Vance dismissed the need to regulate to ensure AI is applied safely, saying Trump's administration would instead be leaning into "AI opportunity", and he warned Europe that overregulation could kill the golden goose.

Since then, the bloc has moved to kill off one AI safety initiative, putting the AI Liability Directive on the chopping block. EU lawmakers have also trailed an incoming "omnibus" package of simplifying reforms to existing rules that they say is aimed at reducing red tape and bureaucracy for business, with a focus on areas like sustainability reporting. But with the AI Act still in the process of being implemented, there is clearly pressure being applied to dilute its requirements.

At the Mobile World Congress trade show in Barcelona earlier this month, founder Arthur Mensch of French GPAI model maker Mistral -- a particularly loud opponent of the EU AI Act during negotiations to conclude the legislation back in 2023 -- claimed the company is having difficulty finding technological solutions to comply with some of the rules. He added that the company is "working with the regulators to make sure that this is resolved."
While this GPAI Code is being drawn up by independent experts, the European Commission -- via the AI Office, which oversees enforcement and other activity related to the law -- is, in parallel, producing "clarifying" guidance that will also shape how the law applies, including definitions for GPAIs and their responsibilities. So look out for further guidance, "in due time", from the AI Office -- which the Commission says will "clarify ... the scope of the rules" -- as this could offer a pathway for nerve-losing lawmakers to respond to U.S. lobbying to deregulate AI.
[2]
Industry flags 'serious concerns' with EU AI code of practice
Feedback on the draft is possible up to 30 March, before the final Code of Practice on General-Purpose AI is set to come out in May.

The tech sector remains concerned about a proposed set of rules for providers of General-Purpose Artificial Intelligence (GPAI) after the latest draft was published by a European Commission-appointed expert group on Tuesday, several lobby groups told Euronews. The Code of Practice on GPAI should help providers of AI models -- tools that can perform many tasks, such as ChatGPT, Google Gemini, and the image application Midjourney -- comply with the EU's AI Act, and it includes transparency and copyright-related rules, as well as risk assessment and mitigation measures.

Previous versions of the text have raised copyright issues, among others, and industry representatives such as publishers and rightsholders are dissatisfied with the latest updates, sector operators told Euronews. Boniface de Champris, senior policy manager at tech lobby group CCIA, said that "serious issues remain", including "far-ranging obligations regarding copyright and transparency, which would threaten trade secrets, as well as burdensome external risk assessments."

"The new draft makes limited progress from its highly problematic predecessor, yet the GPAI Code continues to fall short of providing companies with the legal certainty that's needed to drive AI innovation in Europe," he added.

Elias Papadopoulos, director of policy at internet lobby group DOT Europe, said that at first glance the draft "has been somewhat improved", but that some provisions still go beyond the requirements of the AI Act. "For example, mandatory third-party risk assessment pre- and post-deployment, although not an obligation in the AI Act itself, unfortunately remains in the new draft," he said.

The expert group, which includes members from the EU, US, and Canada, last month pushed back a previous deadline to reflect the stakeholder feedback it had received.
Some 1,000 participants have attended plenary sessions and workshops designed to help develop the Code since the kick-off in September.

The concerns were echoed by Iacob Gammeltoft, senior policy manager at News Media Europe, which represents 2,700 news brands online, in print, and on radio and TV. "Unfortunately, the latest draft raises serious questions about whether no code is better than this code. It reads like the chairmen got lost in the exceptions of copyright law, and forgot to assess how the general principles of copyright should apply," said Gammeltoft.

In January, a group of 15 European rightsholder organisations warned the Commission that the current draft of the CoP contradicts copyright law. News Media Europe said the "same problems" exist today as in the first draft. "Copyright law creates an obligation of results which requires that lawful access is achieved, and it's just not good enough to ask AI companies to make 'best efforts' to not use our content without authorisation," he said.

From a fundamental rights perspective, the document has also been weakened, said Laura Lazaro Cabrera, programme director for equity and data at the Centre for Democracy & Technology Europe. "The third draft confirms what many of us had feared -- that consideration and mitigation of the most serious fundamental rights risks would remain optional for general-purpose AI model providers," she said.

The EU executive can decide to formalise the CoP under the AI Act -- which will be fully applicable in 2027 -- through an implementing act. The issue of copyright and AI is also the subject of European Parliament scrutiny: German lawmaker Axel Voss (EPP) will work on an own-initiative report on the issue.
The EU has released a new draft of the Code of Practice for General-Purpose AI, aiming to guide AI model makers in complying with the AI Act. The draft has sparked debates among industry stakeholders, highlighting the challenges of balancing innovation with regulation.
The European Union has published the third draft of its Code of Practice for General-Purpose AI (GPAI) models, aiming to provide guidance for AI model makers on complying with the EU AI Act [1]. This latest iteration, released on March 11, 2025, is expected to be the final revision before the guidelines are finalized in May. The Code addresses key areas such as transparency, copyright, and risk mitigation for powerful AI models.

The new draft boasts a more streamlined structure with refined commitments and measures compared to earlier versions. It includes sections on transparency, copyright, and safety and security obligations for the most powerful models with systemic risk (GPAISR) [1]. The transparency section provides an example documentation form for GPAIs to ensure downstream deployers have access to key information for compliance.

Despite efforts to refine the Code, industry stakeholders have expressed "serious concerns" about the latest draft [2]. Tech lobby groups, such as CCIA and DOT Europe, argue that some provisions go beyond the requirements of the AI Act and may threaten trade secrets or impose burdensome external risk assessments.

The copyright section remains one of the most controversial areas. The current draft uses terms like "best efforts" and "reasonable measures" when addressing copyright compliance [1]. This language has drawn criticism from rights-holders and publishers who argue that it falls short of providing adequate protection. News Media Europe, representing 2,700 news brands, has stated that the draft "raises serious questions about whether no code is better than this code" [2].

The draft narrows some previously recommended safety and security measures, which apply to the most powerful models trained using a total computing power of more than 10^25 FLOPs [1]. This adjustment comes in response to feedback from stakeholders.

The EU faces pressure from the U.S. administration, led by President Donald Trump, which has criticized European AI regulations. At the Paris AI Action summit, U.S. Vice President JD Vance warned that overregulation could stifle innovation [1]. This international context adds complexity to the EU's efforts to balance innovation with regulation.

Stakeholders have until March 30, 2025, to submit written feedback on the latest draft. The European Commission's AI Office is also working on clarifying guidance that will shape the law's application [1]. The final Code of Practice is expected to be released in May, with the AI Act set to be fully applicable by 2027 [2].