15 Sources
[1]
Everything tech giants will hate about the EU's new AI rules
The European Union is moving to force AI companies to be more transparent than ever, publishing a code of practice Thursday that will help tech giants prepare to comply with the EU's landmark AI Act. These rules -- which have not yet been finalized and focus on copyright protections, transparency, and public safety -- will initially be voluntary when they take effect for the biggest makers of "general purpose AI" on August 2. But the EU will begin enforcing the AI Act in August 2026, and the Commission has noted that any companies agreeing to the rules could benefit from a "reduced administrative burden and increased legal certainty," The New York Times reported. Rejecting the voluntary rules could force companies to prove their compliance in ways that could be more costly or time-consuming, the Commission suggested.

The AI industry participated in drafting the AI Act, but some companies have recently urged the EU to delay enforcement of the law, warning that the EU may risk hampering AI innovation by placing heavy restrictions on companies.

Among the most controversial commitments that the EU is asking companies like Google, Meta, and OpenAI to voluntarily make is a promise to never pirate materials for AI training. Many AI companies have controversially used pirated book datasets to train AI, including Meta, which suggested that individual books are worthless as training data after being called out for torrenting unauthorized book copies. But the EU doesn't agree, recommending that tech companies designate staffers and create internal mechanisms to field complaints "within a reasonable timeframe" from rightsholders, who must be allowed to opt their creative works out of AI training data sets.

The EU rules pressure AI makers to take other steps the industry has mostly resisted. Most notably, AI companies will need to share detailed information about their training data, including providing a rationale for key model design choices and disclosing precisely where their training data came from. That could make it clearer how much of each company's models depend on publicly available data versus user data, third-party data, synthetic data, or some emerging new source of data.

The code also details expectations for AI companies to respect paywalls, as well as robots.txt instructions restricting crawling, which could help confront a growing problem of AI crawlers hammering websites. It "encourages" online search giants to embrace a solution that Cloudflare is currently pushing: allowing content creators to protect copyrights by restricting AI crawling without impacting search indexing.

Additionally, companies are asked to disclose total energy consumption for both training and inference, allowing the EU to monitor environmental impacts while companies race forward with AI innovation.

The code's safety guidance goes further, providing for additional monitoring of other harms. It makes recommendations to detect and avoid "serious incidents" with new AI models, which could include cybersecurity breaches, disruptions of critical infrastructure, "serious harm to a person's health (mental and/or physical)," or "a death of a person." It stipulates timelines of between five and 10 days to report serious incidents to the EU's AI Office. And it requires companies to track all events, provide an "adequate level" of cybersecurity protection, prevent jailbreaking as best they can, and justify "any failures or circumventions of systemic risk mitigations."
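The code doesn't prescribe any particular blocking mechanism for the robots.txt expectations described above, but a minimal robots.txt along the following lines illustrates the Cloudflare-style arrangement: publicly documented AI-training crawlers are turned away while ordinary search crawlers are left alone. The user-agent tokens shown (GPTBot, Google-Extended, CCBot) are real identifiers published by OpenAI, Google, and Common Crawl respectively; the file itself is only an illustration, not language from the EU code.

# Illustrative robots.txt: refuse AI-training crawlers without touching search indexing
User-agent: GPTBot            # OpenAI's AI-training crawler
Disallow: /

User-agent: Google-Extended   # Google's opt-out token for AI training; does not affect Googlebot search crawling
Disallow: /

User-agent: CCBot             # Common Crawl, a frequent upstream source of AI training data
Disallow: /

User-agent: *                 # everyone else, including conventional search engine bots
Allow: /

Robots.txt remains purely advisory; the code's contribution is to put regulatory weight behind honoring it.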
Ars reached out to tech companies for immediate reactions to the new rules. OpenAI, Meta, and Microsoft declined to comment. A Google spokesperson confirmed that the company is reviewing the code, which still must be approved by the European Commission and EU member states amid expected industry pushback. "Europeans should have access to first-rate, secure AI models when they become available, and an environment that promotes innovation and investment," Google's spokesperson said. "We look forward to reviewing the code and sharing our views alongside other model providers and many others." These rules are just one part of the AI Act, which will start taking effect in a staggered approach over the next year or more, the NYT reported. Breaching the AI Act could result in AI models being yanked off the market or fines "of as much as 7 percent of a company's annual sales or 3 percent for the companies developing advanced AI models," Bloomberg noted.
[2]
EU Rolls Out AI Code With Broad Copyright, Transparency Rules
The European Union published a code of practice to help companies follow its landmark AI Act that includes copyright protections for creators and transparency requirements for advanced models. The code will require developers to provide up-to-date documentation describing their AI's features to regulators and third parties looking to integrate it in their own products, the European Commission said Thursday. Companies also will be banned from training AI on pirated materials and must respect requests from writers and artists to keep copyrighted work out of datasets. If AI produces material that infringes copyright rules, the code of practice will require companies to have a process in place to address it.
[3]
EU explains how to do AI without breaking the law
The EU has a new set of AI regulations poised to take effect soon. While debate over them continues, Brussels has put out a handy guidebook to help companies make sense of what they can and cannot do.

The European Commission announced the publication of the General-Purpose AI Code of Practice on Thursday with the goal of helping folks comply with the AI Act. Parties subject to the Act will be able to sign on to the Code to indicate that they're in compliance, but it's purely voluntary.

Broken into three parts, the Code has two brief chapters outlining responsibilities AI companies face regarding transparency and obeying copyright, as well as a much longer section on safety and security that the Commission noted is "relevant only to a limited number of providers of the most advanced models." The same goes for the AI Act in general. As we've noted in our prior coverage, it puts the greatest onus for compliance on the largest, most powerful frontier AI models, and those operating in the most critical sectors or whose operations have the greatest potential for harm.

The Code's three chapters include all the things you'd expect from a typical AI regulation, like preventing the output of copyrighted content, requiring AI scraping bots to obey robots.txt, and mandating risk assessments. Beyond those basic rules, the Code also calls for companies to fill out a documentation form that details all the intricacies of a model (including energy consumption data) and keep model documentation on file for a decade for each version. It even recommends that AI companies that "also provide an online search engine" refrain from downranking pages that refuse to be ingested by the company's AI - and no, that's not targeted at anyone in particular, why do you ask?

The Code's publication doesn't mean it's official yet - that'll require endorsement by the EU states and the Commission - but even then it won't mean all that much. As the EC noted, there's no push for companies to sign on if they don't want to. That said, the Code does reflect the AI Act as it's written, so its provisions are more than just guidelines - companies ignore them at their peril.

This all assumes the AI Act's general-purpose rules enter into application on 2 August as planned. A number of European companies have come out against the AI Act, calling for its delay and simplification to ensure continental AI firms would be able to compete with unnamed "AI behemoths," some of whom have also called for a pause on enforcement.

The European arm of the Computer and Communications Industry Association (CCIA), a pro-tech trade association, expressed disappointment in the Code, saying it added further confusion to a set of rules that are badly in need of clarification. As one example, the CCIA says, the Code imposes rules that aren't requirements in the Act, while other requirements from the Act are missing. If it's that obviously messy, why would any company want to sign on?

"Without meaningful improvements, signatories remain at a disadvantage compared to non-signatories, thereby undermining the Commission's competitiveness and simplification agenda," said Boniface de Champris, a senior policy manager at CCIA Europe. Others are urging the EU not to bow to corporate pressure.
A group of academics, watchdog groups, and privacy advocates signed an open letter to the Commission urging it not to delay enforcement, as doing so would call into question Europe's dedication to its own principles of "putting consumer and fundamental rights at the center of all legislation." "We call upon the Commission to prioritise the full implementation and proper enforcement of the AI Act instead of re-opening or delaying its implementation," the open letter pleaded.

Whichever way the Commission swings on AI Act enforcement, it'll still take some time for punishments to be meted out: While the AI Act applies as of next month, the Commission won't have enforcement power for some time. According to the EC, "new models" released on or after August 2, 2025, will have one year to get fully compliant, while older models get two years. A lot could change between now and then.
[4]
EU pushes ahead with AI code of practice
The EU has unveiled its code of practice for general purpose artificial intelligence, pushing ahead with its landmark regulation despite fierce lobbying from the US government and Big Tech groups. The final version of the code, which helps explain rules that are due to come into effect next month for powerful AI models such as OpenAI's GPT-4 and Google's Gemini, includes copyright protections for creators and potential independent risk assessments for the most advanced systems.

The EU's decision to push forward with its rules comes amid intense pressure from US technology groups as well as European companies over its AI act, considered the world's strictest regime for regulating the fast-developing technology. This month the chief executives of large European companies including Airbus, BNP Paribas and Mistral urged Brussels to introduce a two-year pause, warning that unclear and overlapping regulations were threatening the bloc's competitiveness in the global AI race.

Brussels has also come under fire from the European parliament and a wide range of privacy and civil society groups over moves to water down the rules from previous draft versions, following pressure from Washington and Big Tech groups. The EU had already delayed publishing the code, which was due in May.

Henna Virkkunen, the EU's tech chief, said the code was important "in making the most advanced AI models available in Europe not only innovative, but also safe and transparent". Tech groups will now have to decide whether to sign the code, and it still needs to be formally approved by the European Commission and member states.

The Computer & Communications Industry Association, whose members include many Big Tech companies, said the "code still imposes a disproportionate burden on AI providers". "Without meaningful improvements, signatories remain at a disadvantage compared to non-signatories, thereby undermining the commission's competitiveness and simplification agenda," it said.

As part of the code, companies will have to commit to putting in place technical measures that prevent their models from generating content that reproduces copyrighted material. Signatories also commit to testing their models for risks laid out in the AI act. Companies that provide the most advanced AI models will agree to monitor their models after they have been released, including giving external evaluators access to their most capable models. But the code does give them some leeway in identifying risks their models might pose.

Officials within the European Commission and in different European countries have been privately discussing streamlining the complicated timeline of the AI act. While the legislation entered into force in August last year, many of its provisions will only come into effect in the years to come. European and US companies are putting pressure on the bloc to delay upcoming rules on high-risk AI systems, such as those that include biometrics and facial recognition, which are set to come into effect in August next year.
[5]
EU code of practice to help firms with AI rules will focus on copyright, safety
BRUSSELS, July 10 (Reuters) - A code of practice designed to help thousands of companies comply with the European Union's landmark artificial intelligence rules will focus on transparency, copyright, safety and security, the European Commission said on Thursday. The comments came as the EU executive presented a final draft of the guidance, which will apply from Aug. 2 but will only be enforced a year later. Signing up to the code is voluntary, but companies who decline to do so, as some Big Tech firms have indicated, will not benefit from the legal certainty provided to a signatory. While the guidance on transparency and copyright will apply to all providers of general-purpose AI models, the chapters on safety and security target providers of the most advanced models. "Co-designed by AI stakeholders, the Code is aligned with their needs. Therefore, I invite all general-purpose AI model providers to adhere to the Code. Doing so will secure them a clear, collaborative route to compliance with the EU's AI Act," EU tech chief Henna Virkkunen said.
[6]
EU unveils AI code of practice to help businesses comply with bloc's rules
LONDON (AP) -- The European Union on Thursday released a code of practice on general purpose artificial intelligence to help thousands of businesses in the 27-nation bloc using the technology comply with the bloc's landmark AI rule book. The EU code is voluntary and complements the EU's AI Act, a comprehensive set of regulations that was approved last year and is taking effect in phases.

The code focuses on three areas: transparency requirements for providers of AI models that are looking to integrate them into their products; copyright protections; and safety and security of the most advanced AI systems.

The AI Act's rules on general purpose artificial intelligence are set to take force on Aug. 2. The bloc's AI Office, under its executive Commission, won't start enforcing them for at least a year. General purpose AI, exemplified by chatbots like OpenAI's ChatGPT, can do many different tasks and underpin many of the AI systems that companies are using across the EU. Under the AI Act, uses of artificial intelligence face different levels of scrutiny depending on the level of risk they pose, with some uses deemed unacceptable banned entirely. Violations could draw fines of up to 35 million euros ($41 million), or 7% of a company's global revenue.

Some Big Tech companies such as Meta have resisted the regulations, saying they're unworkable, and U.S. Vice President JD Vance, speaking at a Paris summit in February, criticized "excessive regulation" of AI, warning it could kill "a transformative industry just as it's taking off." More recently, more than 40 European companies, including Airbus, Mercedes-Benz, Philips and French AI startup Mistral, urged the bloc in an open letter to postpone the regulations for two years. They say more time is needed to simplify "unclear, overlapping and increasingly complex EU regulations" that put the continent's competitiveness in the global AI race at risk.

There was no sign that Brussels was prepared to stop the clock. "Today's publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent," the commission's executive vice president for tech sovereignty, security and democracy, Henna Virkkunen, said in a news release.
[7]
European Union Unveils Rules for Powerful A.I. Systems
Makers of the most advanced artificial intelligence systems will face new obligations for transparency, copyright protection and public safety. The rules are voluntary to start.

European Union officials unveiled new rules on Thursday to regulate artificial intelligence. Makers of the most powerful A.I. systems will have to improve transparency, limit copyright violations and protect public safety. The rules, which are voluntary to start, come during an intense debate in Brussels about how aggressively to regulate a new technology seen by many leaders as crucial to future economic success in the face of competition with the United States and China. Some critics accused regulators of watering down the rules to win industry support.

The guidelines apply only to a small number of tech companies like OpenAI, Microsoft and Google that make so-called general-purpose A.I. These systems underpin services like ChatGPT, and can analyze enormous amounts of data, learn on their own and perform some human tasks.

The so-called code of practice represents some of the first concrete details about how E.U. regulators plan to enforce a law, called the A.I. Act, that was passed last year. Tech companies played a major role in drafting the rules, which will be voluntary when they take effect on Aug. 2, before becoming enforceable in August 2026, according to the European Commission, the executive branch of the 27-nation bloc.

The European Commission said companies that agreed to the voluntary code of practice would benefit from a "reduced administrative burden and increased legal certainty." Officials said those that do not would have to prove compliance through other means, which could potentially be more costly and time-consuming.

It was not immediately clear which companies would join. Google and OpenAI said they were reviewing the final text. Microsoft declined to comment. Meta, which had signaled it will not agree to the code of practice, did not have an immediate comment. Amazon and Mistral, a leading A.I. company in France, did not respond to a request for comment.

Under the guidelines, tech companies will have to provide detailed summaries about the content used for training their algorithms, something long sought by media publishers concerned that their intellectual property is being used to train the A.I. systems. Other rules would require the companies to conduct risk assessments to see how their services could be misused for things like creating biological weapons that pose a risk to public safety. (The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit's claims.)

What is less clear is how the law will address issues like the spread of misinformation and harmful content. This week, Grok, a chatbot created by Elon Musk's artificial intelligence company, xAI, shared several antisemitic comments on X, including praise of Hitler.

Henna Virkkunen, the European Commission's executive vice president for tech sovereignty, security and democracy, said the policy was "an important step in making the most advanced A.I. models available in Europe not only innovative but also safe and transparent." The guidelines introduced on Thursday are just one part of a sprawling law that will take full effect over the next year or more.
The act was intended to prevent the most harmful effects of artificial intelligence, but European officials have more recently been weighing the consequences of regulating such a fast-moving and competitive technology. Leaders across the continent are increasingly worried about Europe's economic position against the United States and China. Europe has long struggled to produce large tech companies, making it dependent on services from foreign corporations. Tensions with the Trump administration over tariffs and trade have intensified the debate. Groups representing many European businesses have urged policymakers to delay implementation of the A.I. Act, saying the regulation threatens to slow innovation, while putting their companies at a disadvantage against foreign competition. "Regulation should not be the best export product from the E.U.," said Aura Salla, a member of the European Parliament from Finland who was previously a top lobbyist for Meta in Brussels. "It's hurting our own companies."
[8]
EU unveils strict AI code targeting OpenAI, Google, Microsoft
The European Union has introduced a voluntary code of practice for general-purpose artificial intelligence. The guidelines aim to help companies comply with the bloc's AI Act, set to take effect next month. The new rules target a small number of powerful tech firms like OpenAI, Microsoft, Google, and Meta, which develop foundational AI models used across multiple products and services. While the code is not legally binding, it lays out requirements for transparency, copyright protection, and safety. Officials say companies that adopt the code will benefit from a "reduced administrative burden and increased legal certainty."
[9]
EU unveils AI code of practice to help businesses comply with bloc's rules
The European Union on Thursday released a code of practice on general purpose artificial intelligence to help thousands of businesses in the 27-nation bloc using the technology comply with the bloc's landmark AI rule book. The EU code is voluntary and complements the EU's AI Act, a comprehensive set of regulations that was approved last year and is taking effect in phases.

The code focuses on three areas: transparency requirements for providers of AI models that are looking to integrate them into their products; copyright protections; and safety and security of the most advanced AI systems.

The AI Act's rules on general purpose artificial intelligence are set to take force on Aug. 2. The bloc's AI Office, under its executive Commission, won't start enforcing them for at least a year. General purpose AI, exemplified by chatbots like OpenAI's ChatGPT, can do many different tasks and underpin many of the AI systems that companies are using across the EU. Under the AI Act, uses of artificial intelligence face different levels of scrutiny depending on the level of risk they pose, with some uses deemed unacceptable banned entirely. Violations could draw fines of up to 35 million euros ($41 million), or 7% of a company's global revenue.

Some Big Tech companies such as Meta have resisted the regulations, saying they're unworkable, and U.S. Vice President JD Vance, speaking at a Paris summit in February, criticized "excessive regulation" of AI, warning it could kill "a transformative industry just as it's taking off." More recently, more than 40 European companies, including Airbus, Mercedes-Benz, Philips and French AI startup Mistral, urged the bloc in an open letter to postpone the regulations for two years. They say more time is needed to simplify "unclear, overlapping and increasingly complex EU regulations" that put the continent's competitiveness in the global AI race at risk.

There was no sign that Brussels was prepared to stop the clock. "Today's publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent," the commission's executive vice president for tech sovereignty, security and democracy, Henna Virkkunen, said in a news release.
[10]
EU waiting for companies to sign delayed AI Code
It remains unclear which and how many companies will sign the code of practice.

AI providers can soon sign up to the European Commission's Code of Practice for General Purpose AI (GPAI), a voluntary set of rules aiming to help providers of AI models such as ChatGPT and Gemini comply with the AI Act. On Thursday the Commission published the Code, less than a month before the rules on GPAI will start applying on 2 August.

The document includes three chapters: Transparency and Copyright, both addressing all providers of general-purpose AI models, and Safety and Security, relevant only to a limited number of providers of the most advanced models. Companies that sign up are expected to be compliant with the AI Act and are expected to have more legal certainty; others will face more scrutiny from the Commission.

Providers of AI systems previously said they do not have enough time to comply before the rules kick in and asked the EU executive for a grace period. Once companies sign, they do not have to be fully compliant with the new rules on 2 August, a source familiar with the issue told Euronews. The AI Act itself - rules that regulate artificial intelligence systems according to the risk they pose to society - entered into force in August 2024 but will fully apply in 2027.

The Code of Practice on GPAI, drafted by experts appointed by the Commission, was supposed to come out in May but faced delays and heavy criticism. Tech giants as well as publishers and rights-holders are concerned that the rules violate the EU's copyright laws and restrict innovation. Earlier this week CEOs from more than 40 European companies including ASML, Philips, Siemens and Mistral asked the Commission to impose a "two-year clock-stop" on the AI Act to give them more time to comply with the obligations on high-risk AI systems, due to take effect as of August 2026, and with the obligations for GPAI models, due to enter into force this August.

A source told Euronews that the Commission is considering a possible delay of these high-risk system obligations in case the underlying standards are not ready in time. This would only apply to these specific models and not affect other parts of the Act.

The Commission and the EU member states will now need to assess the adequacy of the Code. It remains unclear if that assessment will be ready before the entry into force in August. It also remains unclear how many companies will sign the code, and whether they will have the possibility to adhere to just some elements of the document.
[11]
EU's AI Code of Practice tackles transparency, copyright and safety
The EU's General-Purpose AI Code of Practice, which aims to help businesses comply with the EU AI Act, has been finalised.

The European Commission has today received the final version of the General-Purpose AI Code of Practice, a voluntary tool designed to help industry comply with the EU AI Act's rules on general-purpose AI, which enter into application on 2 August 2025. The rules will become enforceable by the AI Office of the Commission one year later, in August 2026 for new models and August 2027 for existing models. "This aims to ensure that general-purpose AI models placed on the European market - including the most powerful ones - are safe and transparent," the Commission said in a statement.

In the coming weeks, Member States and the Commission will have the opportunity to assess the adequacy of the guidelines, and further complementary Commission guidelines on key concepts related to general-purpose AI models are expected later this month. These will clarify who is in and out of scope of the AI Act's general-purpose AI rules. Back in April the Commission launched its consultation process on the guidelines that aimed to clarify key concepts underlying the provisions in the AI Act on general-purpose AI (GPAI) models. It invited stakeholders to "bring their practical experience to shape clear, accessible EU rules on general-purpose AI (GPAI) models in a targeted consultation that will contribute to the upcoming Commission guidelines".

The Code can be downloaded here, and it comes under three main chapters: Transparency and Copyright, which apply to all providers of general-purpose AI models, and Safety and Security, which is relevant only to a limited number of providers of the most advanced models.

"Today's publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent," said Henna Virkkunen, executive vice-president for tech sovereignty, security and democracy. "Co-designed by AI stakeholders, the Code is aligned with their needs. Therefore, I invite all general-purpose AI model providers to adhere to the Code. Doing so will secure them a clear, collaborative route to compliance with the EU's AI Act."
[12]
EU Unveils AI Code of Practice to Help Businesses Comply With Bloc's Rules
LONDON (AP) -- The European Union on Thursday released a code of practice on general purpose artificial intelligence to help thousands of businesses in the 27-nation bloc using the technology comply with the bloc's landmark AI rule book. The EU code is voluntary and complements the EU's AI Act, a comprehensive set of regulations that was approved last year and is taking effect in phases.

The code focuses on three areas: transparency requirements for providers of AI models that are looking to integrate them into their products; copyright protections; and safety and security of the most advanced AI systems.

The AI Act's rules on general purpose artificial intelligence are set to take force on Aug. 2. The bloc's AI Office, under its executive Commission, won't start enforcing them for at least a year. General purpose AI, exemplified by chatbots like OpenAI's ChatGPT, can do many different tasks and underpin many of the AI systems that companies are using across the EU. Under the AI Act, uses of artificial intelligence face different levels of scrutiny depending on the level of risk they pose, with some uses deemed unacceptable banned entirely. Violations could draw fines of up to 35 million euros ($41 million), or 7% of a company's global revenue.

Some Big Tech companies such as Meta have resisted the regulations, saying they're unworkable, and U.S. Vice President JD Vance, speaking at a Paris summit in February, criticized "excessive regulation" of AI, warning it could kill "a transformative industry just as it's taking off." More recently, more than 40 European companies, including Airbus, Mercedes-Benz, Philips and French AI startup Mistral, urged the bloc in an open letter to postpone the regulations for two years. They say more time is needed to simplify "unclear, overlapping and increasingly complex EU regulations" that put the continent's competitiveness in the global AI race at risk.

There was no sign that Brussels was prepared to stop the clock. "Today's publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent," the commission's executive vice president for tech sovereignty, security and democracy, Henna Virkkunen, said in a news release.
[13]
EU code of practice to help firms with AI rules will focus on copyright, safety
A code of practice designed to help thousands of companies comply with the European Union's landmark artificial intelligence rules will focus on transparency, copyright, safety and security, the European Commission said on Thursday. The comments came as the EU executive presented a final draft of the guidance, which will apply from August 2 but will only be enforced a year later. Signing up to the code is voluntary, but companies who decline to do so, as some Big Tech firms have indicated, will not benefit from the legal certainty provided to a signatory. While the guidance on transparency and copyright will apply to all providers of general-purpose AI models, the chapters on safety and security target providers of the most advanced models. "Co-designed by AI stakeholders, the Code is aligned with their needs. Therefore, I invite all general-purpose AI model providers to adhere to the Code. Doing so will secure them a clear, collaborative route to compliance with the EU's AI Act," EU tech chief Henna Virkkunen said.
[14]
European AI Law to Prioritize Openness, Copyright Protection and Model Safety
The code was developed by a panel of 13 independent experts and forms part of the EU's broader ambition to establish a global benchmark for AI governance. While adherence to the code is not mandatory, the European Commission noted that companies choosing not to participate will not benefit from the legal clarity afforded to those who do, per Reuters.

This latest move comes ahead of the phased implementation of the EU's AI Act, which officially came into force in August 2024. The regulation imposes tiered requirements based on the risk profile of AI systems, with the strictest rules reserved for applications deemed high-risk. General-purpose AI models, such as those powering widely used chatbots and language generators, are subject to more moderate obligations.

Beginning August 2, 2025, compliance will become mandatory for new general-purpose AI models released to the market. Existing models will have until August 2, 2027, to align with the legislation. The guidance on transparency and copyright will be applicable across all providers of general-purpose AI, while directives on safety and security will specifically target developers of advanced systems like OpenAI's ChatGPT, Google's Gemini, Meta's Llama, and Anthropic's Claude.

The code's final approval still hinges on formal endorsement by EU member states and the European Commission, a step that is anticipated by the end of 2025. EU digital policy head Henna Virkkunen urged providers to take part, describing the code as a practical and collaborative tool for navigating regulatory expectations.
[15]
EU Publishes Final AI Code of Practice to Guide Compliance for AI Companies
The code's publication "marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent," Henna Virkkunen, executive vice president for tech sovereignty, security and democracy for the commission, which is the EU's executive arm, said in a statement.

The code was developed by 13 independent experts after hearing from 1,000 stakeholders, including AI developers, industry organizations, academics, civil society organizations and representatives of EU member states, according to a Thursday (July 10) press release. Observers from global public agencies also participated.

The EU AI Act, which was approved in 2024, is the first comprehensive legal framework governing AI. It aims to ensure that AI systems used in the EU are safe and transparent, as well as respectful of fundamental human rights. The act classifies AI applications into risk categories -- unacceptable, high, limited and minimal -- and imposes obligations accordingly. Any AI company whose services are used by EU residents must comply with the act. Fines can go up to 7% of global annual revenue.

The code is voluntary, but AI model companies who sign on will benefit from lower administrative burdens and greater legal certainty, according to the commission. The next step is for the EU's 27 member states and the commission to endorse it.

The code is structured into three core chapters: Transparency; Copyright; and Safety and Security. The Transparency chapter includes a model documentation form, described by the commission as "a user-friendly" tool to help companies demonstrate compliance with transparency requirements. The Copyright chapter offers "practical solutions to meet the AI Act's obligation to put in place a policy to comply with EU copyright law." The Safety and Security chapter, aimed at the most advanced systems with systemic risk, outlines "concrete state-of-the-art practices for managing systemic risks."

The drafting process began with a plenary session in September 2024 and proceeded through multiple working group meetings, virtual drafting rounds and provider workshops. The code takes effect Aug. 2, but the commission's AI Office will enforce the rules on new AI models after one year and on existing models after two years.
The European Union has published a code of practice to help tech companies comply with its upcoming AI Act, focusing on copyright protection, transparency, and safety measures for advanced AI models.
The European Union has taken a significant step towards regulating artificial intelligence by publishing a comprehensive code of practice. This code, designed to help tech companies prepare for compliance with the upcoming AI Act, focuses on key areas such as copyright protection, transparency, and safety measures for advanced AI models [1][2].
The code, set to take effect on August 2, 2025, initially on a voluntary basis, outlines several crucial requirements for AI companies:
Copyright Protection: Companies are banned from training AI on pirated materials and must respect requests from creators to keep copyrighted work out of datasets [2][3].
Transparency: Developers must provide up-to-date documentation describing their AI's features to regulators and third parties [2][4].
Safety and Security: The code includes guidance on detecting and avoiding "serious incidents" with new AI models, such as cybersecurity breaches or harm to individuals [1].
Energy Consumption Disclosure: Companies are asked to disclose total energy consumption for both training and inference of AI models [1].
The code's requirements pose significant challenges for tech giants:
Data Transparency: Companies will need to share detailed information about their training data, potentially revealing their dependence on various data sources [1].
Copyright Compliance: The ban on using pirated materials for AI training could impact companies like Meta, which have faced criticism for using unauthorized book copies [1].
Reporting Requirements: The code stipulates timelines of 5-10 days to report serious incidents to the EU's AI Office [1].
The tech industry's response to the code has been mixed:
Calls for Delay: Some European companies have urged Brussels to introduce a two-year pause, warning of unclear and overlapping regulations [4].
Competitiveness Concerns: The Computer & Communications Industry Association argued that the code "imposes a disproportionate burden on AI providers" [3][4].
Compliance Incentives: The EU suggests that companies agreeing to the rules could benefit from "reduced administrative burden and increased legal certainty" [1].
While the code itself is voluntary, the AI Act obligations it reflects will become enforceable:
Enforcement Date: The EU will begin enforcing the AI Act in August 2026 [1].
Compliance Period: New models released on or after August 2, 2025, will have one year to become fully compliant, while older models get two years [3].
Potential Penalties: Breaching the AI Act could result in fines of up to 7% of a company's annual sales or the removal of AI models from the market [1].
As the EU pushes forward with this landmark regulation, it faces the challenge of balancing innovation with consumer protection and fundamental rights. The coming months will likely see continued debate and potential adjustments as the tech industry grapples with these new regulatory demands [4][5].