Sources
[1]
Meta refuses to sign EU's AI code of practice | TechCrunch
Meta has refused to sign the European Union's code of practice for its AI Act, weeks before the bloc's rules for providers of general-purpose AI models take effect. "Europe is heading down the wrong path on AI," wrote Meta's chief global affairs officer Joel Kaplan in a post on LinkedIn. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." The EU's code of practice -- a voluntary framework published earlier this month -- aims to help companies implement processes and systems to comply with the bloc's legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services; bans developers from training AI on pirated content; and obliges them to comply with content owners' requests not to use their works in their data sets. Calling the EU's implementation of the legislation "over-reach," Kaplan claimed that the law will "throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them." A risk-based regulation for applications of artificial intelligence, the AI Act bans some "unacceptable risk" use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of "high-risk" uses, such as biometrics and facial recognition, and applications in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations. Tech companies from around the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay their rollout.
But the Commission held firm, saying it will not change its timeline. Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules would affect providers of "general-purpose AI models with systemic risk," like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by that date.
[2]
Meta snubs the EU's voluntary AI guidelines
Meta says it won't sign the European Union's artificial intelligence code of practice agreement, warning that "Europe is heading down the wrong path on AI." The code published by the EU on July 10th is a voluntary set of guidelines to help companies follow the AI Act's rules around general-purpose AI before they come into effect in a few weeks. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it," Meta's global affairs chief, Joel Kaplan, said via a statement on LinkedIn. "This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." While the code of practice itself isn't legally enforced, the EU says that general-purpose AI model providers who sign it will benefit from a "reduced administrative burden and increased legal certainty," compared to providers that may otherwise be subject to more regulatory scrutiny. OpenAI announced its intention to sign the agreement on July 11th. This comes ahead of AI Act rules coming into force on August 2nd that require general-purpose AI providers to be transparent about training and security risks for their models, and abide by EU and national copyright laws. The EU can fine companies that violate the AI Act up to seven percent of their annual sales. Kaplan says that Meta is concerned that the EU's landmark AI rulebook will act to throttle frontier model development and deployment in Europe, stunting European companies that comply with the bloc's regulations.
These concerns echo those raised in a letter signed by more than 45 companies and organizations last month, including Airbus, Mercedes-Benz, Philips, and ASML, that urged the EU to postpone the implementation of its landmark AI Act regulation for two years to address uncertainty around compliance. The EU's efforts to tighten AI regulations stand in contrast to attitudes in the US, where the Trump administration is actively removing such roadblocks. Meta's refusal to sign the EU's code isn't entirely surprising, given that the scorned company has been slapped with billions in fines under the EU's regulatory landscape, and has aligned itself with the Trump administration's lax views around tech regulations.
[3]
Meta Says 'No Thanks' to Europe's AI Code of Conduct
Meta has hit back at the European Union's regulators, saying they are "heading down the wrong path on AI." According to a LinkedIn post by Joel Kaplan, the tech giant's chief global affairs officer, Meta won't be signing the European Commission's Code of Practice for general-purpose AI (GPAI) models. Rolled out earlier this week, the document is a voluntary list of guidelines governing everything from copyright law to transparency, as well as safety and security for the most advanced AI models. Signing the code would mean agreeing to provide certain rights protections for content creators that AIs train on, and it would oblige each firm to publish documentation outlining the features of any tools they roll out. It's intended to act as a set of guidelines to help firms comply with the EU's highly complex AI Act, which runs hundreds of pages long. "This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Kaplan said. He added that the EU's "over-reach will throttle the development and deployment of frontier AI models" in the region and "stunt European companies looking to build businesses on top of them." Meta might be the most well-known company to speak out against the EU's AI plans in recent months, but it's certainly not the only one. Earlier this July, a group of some of the largest European companies wrote to the President of the European Commission, Ursula von der Leyen, asking her to delay the implementation of the Act by two years. Signees included Dutch aerospace giant Airbus and Siemens Energy AG. Outside of official statements, Meta CEO Mark Zuckerberg has been pretty open about his desire to push back against the firm hand of EU regulators.
In an episode of The Joe Rogan Experience earlier this year, Zuckerberg claimed the EU was using fines "almost like a tariff" but said he was "optimistic" about the Trump administration defending American firms operating in the EU. Meta has already paid EU regulators hundreds of millions on account of antitrust violations. Failing to comply with the AI Act could have serious financial consequences, including fines of up to €35 million or 7% of global annual turnover for the most serious offences, whichever is greater.
[4]
Meta declines to abide by voluntary EU AI safety guidelines
GPAI code asks for transparency, copyright, and safety pledges
Two weeks before the EU AI Act takes effect, the European Commission issued voluntary guidelines for providers of general-purpose AI models. However, Meta refused to sign, arguing that the extra measures introduce "legal uncertainties" beyond the law's scope. "With today's guidelines, the Commission supports the smooth and effective application of the AI Act," Henna Virkkunen, EVP for tech sovereignty, security and democracy, said in a statement on Friday. "By providing legal certainty on the scope of the AI Act obligations for general-purpose AI providers, we are helping AI actors, from start-ups to major developers, to innovate with confidence, while ensuring their models are safe, transparent, and aligned with European values." The EU AI Act regulates the use of AI models based on four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. Its goal is to prevent the amplification of illegal, extremist, or harmful content, and to ensure that models refuse disallowed requests, such as instructions for creating a bioweapon. The General-Purpose AI (GPAI) Code of Practice focuses on general-purpose AI models trained with computing resources that exceed 10^23 FLOPs - almost any recently trained large-scale model. It asks for voluntary transparency and copyright commitments from those offering such models, as well as extra safety and security commitments from those distributing models that present systemic risk - "general-purpose AI models that were trained using a total computing power of more than 10^25 FLOPs." Over 30 AI models from companies like Anthropic, Google, Meta, and OpenAI appear to have been trained with at least 10^25 FLOPs. Meta, long criticized for its data-hungry tactics in the EU, doesn't want to play along, however.
Meta says it will ignore the GPAI, a stance that allows its Llama 4 Behemoth (5e25 FLOPs) to roam unhindered. "Europe is heading down the wrong path on AI," said Joel Kaplan, chief global affairs officer at Meta, in a LinkedIn post. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Kaplan noted that European businesses and policymakers have objected to the EU AI Act, pointing to the recent open letter from the likes of Siemens, Airbus, and BNP that urged EU leaders to halt the implementation of the rules. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," said Kaplan. Meta in April was fined €200 million (~$232.6 million) by the EC for failing to meet consumer data privacy obligations with its "Consent or Pay" business model, which has been deemed to violate Europe's Digital Markets Act (DMA). Last week, according to Bloomberg, the EC told Meta in a letter that the company's "Consent or Pay" model remains non-compliant. Meta was also fined €797.72 million (~$927.19 million) by the EC in November for tying its online classified ads service Facebook Marketplace to its Facebook social network in violation of antitrust rules. EC spokesperson Thomas Regnier told The Register via email that all GPAI providers will have to comply with the AI Act when it comes into force on August 2 this year. "The Code of Practice is a voluntary tool, but a solid benchmark," said Regnier. "If a provider decides not to sign the Code of Practice, it will have to demonstrate other means of compliance. 
Companies who choose to comply via other means may be exposed to more regulatory scrutiny by the AI Office."
[5]
Microsoft likely to sign EU AI code of practice, Meta rebuffs guidelines
BRUSSELS, July 18 (Reuters) - Microsoft (MSFT.O) will likely sign the European Union's code of practice to help companies comply with the bloc's landmark artificial intelligence rules, its president told Reuters on Friday, while Meta Platforms (META.O) rebuffed the guidelines. Drawn up by 13 independent experts, the voluntary code of practice aims to provide legal certainty to signatories. They will have to publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. The code is part of the AI Act which came into force in June 2024 and will apply to Google owner Alphabet (GOOGL.O), Facebook owner Meta (META.O), OpenAI, Anthropic, Mistral and thousands of companies. "I think it's likely we will sign. We need to read the documents," Microsoft President Brad Smith told Reuters. "Our goal is to find a way to be supportive and at the same time one of the things we really welcome is the direct engagement by the AI Office with industry," he said, referring to the EU's regulatory body for AI. Meta reiterated its criticism of the code. "Meta won't be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Meta's chief global affairs officer Joel Kaplan said in a blog post on LinkedIn on Friday. The U.S. social media giant has the same concerns as a group of 45 European companies, he said. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. OpenAI and Mistral have signed the code.
Reporting by Foo Yun Chee; Editing by Cynthia Osterman
[6]
Meta says it won't sign Europe AI agreement, calling it an overreach that will stunt growth
Meta Platforms declined to sign the European Union's artificial intelligence code of practice because it is an overreach that will "stunt" companies, according to global affairs chief Joel Kaplan. "Europe is heading down the wrong path on AI," Kaplan wrote in a post on LinkedIn Friday. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Last week, the European Commission published a final iteration of its code for general-purpose AI models, leaving it up to companies to decide if they want to sign. The rules, which go into effect next month, create a framework for complying with the AI Act passed by European lawmakers last year. The code aims to improve transparency and safety surrounding the technology.
[7]
Meta says it won't sign the EU's AI code of practice
Its global affairs officer called the guidelines "over-reach." Meta said on Friday that it won't sign the European Union's new AI code of practice. The guidelines provide a framework for the EU's AI Act, which regulates companies operating in the European Union. The EU's code of practice is voluntary, so Meta was under no legal obligation to sign it. Yet Meta's Chief Global Affairs Officer, Joel Kaplan, made a point to publicly knock the guidelines on Friday. He described the code as "over-reach." "Europe is heading down the wrong path on AI," Kaplan posted in a statement. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." So, why kick up a (public) fuss about not signing something Meta was under no obligation to sign? Well, this isn't the first time the company has waged a PR battle against Europe's AI regulations. It previously called the AI Act "unpredictable," claiming "it goes too far" and is "hampering innovation and holding back developers." In February, Meta's public policy director said, "The net result of all of that is that products get delayed or get watered down and European citizens and consumers suffer." Outmuscling the EU may seem like a more attainable goal to Meta, given that it has an anti-regulation ally in the White House. In April, President Trump pressured the EU to abandon the AI Act. He described the rules as "a form of taxation." The EU published its code of practice on July 10. It includes tangible guidelines to help companies follow the AI Act. Among other things, the code bans companies from training AI on pirated materials and requires them to respect requests from writers and artists to omit their work from training data. 
It also requires developers to provide regularly updated documentation describing their AI features. Although signing the code of practice is voluntary, doing so has its perks. Agreeing to it can give companies more legal protection against future accusations of breaching the AI Act. Thomas Regnier, the European Commission's spokesperson for digital matters, added more color in a statement to Bloomberg. He said that AI providers who don't sign it "will have to demonstrate other means of compliance." As a consequence, they "may be exposed to more regulatory scrutiny." Companies that violate the AI Act can face hefty penalties. The European Commission can impose fines of up to seven percent of a company's annual sales. The penalties are a lower three percent for those developing advanced AI models.
[8]
The EU just issued guidelines for AI safety, and Meta is already opting out
Meta is refusing to sign the European Union's new code of practice for artificial intelligence, a company leader said on Friday. The AI code is a part of the AI Act, which became law last year. The set of laws will apply to the largest AI models, and companies have until Aug. 2 next year to comply. However, the new AI code of practice is voluntary. Joel Kaplan, Meta's Chief Global Affairs Officer, issued a statement on LinkedIn announcing the company's decision to opt out of the code. "Europe is heading down the wrong path on AI," he wrote. "The European Commission's Code of Practice for GPAI will stifle the development & deployment of frontier AI models and hinder Europe's economic growth," he continued. The AI Act applies to AI models that the commission deems to carry systemic risks that could significantly affect "public health, safety, fundamental rights, or society," reports Reuters. It also includes the most well-known "foundational models" from companies like Meta, OpenAI, Google, and Anthropic. The European Commission, the primary executive arm of the EU, issued new guidelines earlier today to help AI companies comply with the AI Act. That follows the voluntary code of practice issued earlier this July, according to Bloomberg. AI model providers can choose whether or not to sign the code of practice, which will include copyright protections, safety guidelines, and transparency requirements. If a company chooses to sign the code, it could receive more legal protections if accused of violating the act. Companies found to breach the law could be fined up to 7 percent of their annual global revenue. Meta is the latest U.S. company to push back against the EU's widespread efforts to regulate AI. Others include Mistral AI and Airbus, companies that signed a letter earlier this year asking the EU Commission to delay enforcement of the law.
"We shared concerns raised by these businesses," said Kaplan in his statement, "that this over-reach will throttle the development and deployment of frontier AI models in Europe and stunt European companies looking to build businesses on top of them." Meanwhile, OpenAI has agreed to sign the EU's Code of Practice for General Purpose AI, saying in a statement, "Signing the Code reflects our commitment to providing capable, accessible, and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age."
[9]
Meta rebuffs EU's AI Code of Practice
The company has previously been critical of the Commission's work on GPAI, claiming it stifles innovation. US social media company Meta will not sign the EU's AI Code of Practice on General Purpose AI (GPAI), the company's Chief Global Affairs Officer Joel Kaplan said in a statement on Friday. "Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for GPAI models and Meta won't be signing it," he said, adding that the Code "introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." The Commission last week released the Code, a voluntary set of rules that touches on transparency, copyright, and safety and security issues, aiming to help providers of AI models such as ChatGPT and Gemini comply with the AI Act. Companies that sign up are expected to be compliant with the Act and can anticipate more legal certainty; others will face more inspections. The AI Act's provisions affecting GPAI systems enter into force on 2 August. It will take another two years before the AI Act, which regulates AI systems according to the risk they pose to society, becomes fully applicable. OpenAI, the maker of ChatGPT, has said it will sign up to the Code once it's ready. The drafting process of the Code was criticised by Big Tech companies as well as CEOs of European companies, who claimed they needed more time to comply with the rules. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. The Code requires sign-off by EU member states, which are represented in a subgroup of the AI Board, as well as by the Commission's own AI Office. The member states are expected to give a green light as early as 22 July.
The EU executive said it will publish the list of signatories on 1 August.
[10]
Meta won't sign EU's AI Code, but who will?
The rules on general purpose artificial intelligence will enter into force on 2 August. A week before the new rules on general purpose artificial intelligence (GPAI) enter into force - affecting tools such as ChatGPT and Gemini - a clearer picture is emerging on where companies stand when it comes to signing up to the EU's voluntary Code of Practice on GPAI. US Big Tech giant Meta said last week that it will not sign, having slammed the rules for stifling innovation. The Code, which the European Commission released last week, is a voluntary set of rules that touches on transparency, copyright, and safety and security issues, aiming to help providers of GPAI models comply with the AI Act. Providers who sign up are expected to be compliant with the AI Act and can anticipate more legal certainty; others will face more inspections. Here's who's in and who's out. US AI provider Anthropic, which developed AI assistant Claude as a competitor to OpenAI's ChatGPT and Google's Gemini, is the latest company that said it intends to sign the Code. "We believe the Code advances the principles of transparency, safety and accountability -- values that have long been championed by Anthropic for frontier AI development," the company said in a statement. "If thoughtfully implemented, the EU AI Act and Code will enable Europe to harness the most significant technology of our time to power innovation and competitiveness," the statement added. OpenAI said earlier last week that it will sign up too, claiming that Europe should now "use this moment to empower [its] innovators to innovate and builders to build for Europe's future." The drafting process of the Code, which began last September after the Commission selected a group of experts, was heavily criticised, mainly by rightsholders who feared violations of copyright law would increase, while US tech giants claimed the rules stifle innovation.
Microsoft President Brad Smith told Reuters last week that his company will likely sign too. Smith said earlier this year that Microsoft wants to be "a voice of reason" as geopolitical tensions rise. US tech giant Meta was the first, and so far the only, company to say it will not sign the Code. Chief Global Affairs Officer Joel Kaplan said in a statement last Friday that "Europe is heading down the wrong path on AI." After "carefully reviewing" the Code, Meta will not sign because the document "introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Kaplan said. Gry Hasselbalch, a scholar working on data and AI ethics and a contributor to the EU's AI ethics guidelines, told Euronews that the Code did not bring real change to how companies can implement general purpose AI in the EU. "The companies, like Meta, that decide not to sign the code will still need to comply with the AI Act. Signing the code is therefore just a formality. They would still have to read it and follow it to understand when an AI system is considered a general purpose AI system and what transparency, copyright and security means in the AI Act," Hasselbalch said. She added that the AI Act itself - rules that regulate AI systems and tools according to the risk they pose to society - "has become a token in a geo-political battle." "The law was developed in a carefully designed and performed democratic process to create legal certainty for AI developers and adopters in the EU. In fact, most AI systems can be developed and used subject to existing legislation without additional legal obligations of the AI Act," she said. Meta will still need to be compliant with the AI Act's obligations that will start applying on 2 August. Other Big Tech companies, including Amazon and Google, have not yet said whether they will sign.
Providers that already have a GPAI model on the market will have to sign before 1 August, others can sign up at a later time, the Commission said. On that same day, the EU executive will publish a list of signatories.
[11]
Meta won't implement voluntary AI guidelines proposed by EU - SiliconANGLE
Meta Platforms Inc. today declined to implement a set of guidelines that the European Union has proposed for the artificial intelligence sector. The guidelines are outlined in a document called the General-Purpose AI Code of Practice, or GPAI, that was released on July 10. It covers topics such as model safety and training data collection. Implementing the GPAI is voluntary for AI developers such as Meta. The framework is designed to help companies comply with the AI Act, a piece of legislation that the EU implemented last year. The law sets forth an extensive set of rules for AI developers. Algorithms deemed to be low-risk will only have to comply with a subset of the AI Act's provisions, while high-risk models will be more heavily regulated. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it," Joel Kaplan, Meta's Chief Global Affairs Officer, said in a statement today. "This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." The GPAI comprises 55 pages organized into three chapters. Those chapters focus on three different topics: copyright, transparency and safety. Companies that sign the GPAI must bring their AI projects into compliance with the EU's copyright laws. Additionally, they are required to disclose various technical details about their AI models. The list includes a model's architecture and the manner in which its training data was obtained. The GPAI's third chapter, which focuses on AI model safety, is longer than the two other chapters combined. It outlines a series of measures that AI providers must implement to mitigate the risks associated with their algorithms. Those measures only have to be applied to the "most advanced AI models." Signatories' compliance with the GPAI will be overseen by the AI Office, a regulatory body that is also tasked with enforcing the AI Act. 
It's part of the European Commission, the EU's executive arm. Around the same time Meta announced that it won't sign the GPAI, the European Commission introduced a new set of guidelines for AI companies. The guidelines are designed to complement the GPAI. Their primary goal is to clarify a number of regulatory terms introduced in the AI Act. Under the legislation, an AI designated as a "general-purpose AI model" by regulators is subject to different rules than other algorithms. The guidelines released today specify that this designation is given to algorithms if they can generate text, audio, images or videos and were trained using more than 10^23 floating-point operations (roughly 100,000 exaFLOP) of total compute. One exaFLOP corresponds to the computing capacity of about 20 Blackwell B200 chips.
[12]
Meta Says It Won't Sign EU's GPAI Guidelines Due to 'Legal Uncertainties'
OpenAI has announced its intention to sign the EU's Code of Practice
Meta announced last week that it will not be signing the European Union's (EU) Code of Practice for general-purpose artificial intelligence (GPAI) models. Earlier this month, the European Commission said that it has received the final version of the GPAI Code of Practice, a voluntary tool aimed at helping the industry comply with the AI Act's rules, whose first phase comes into force on August 2. The Menlo Park-based tech giant refused to sign the code, citing "a number of legal uncertainties for model developers." Joel Kaplan, Meta's Chief Global Affairs Officer, announced the company's intention not to sign the GPAI Code of Practice in a LinkedIn post. He said the decision was made after carefully reviewing the document, and added, "Europe is heading down the wrong path on AI." The GPAI Code of Practice contains three chapters, namely Transparency, Copyright, and Safety and Security. The document covers guidelines on preparing a user-friendly model documentation form, compliance with EU copyright law, and ensuring that large language models (LLMs) do not carry systemic risks to fundamental rights and safety. Essentially, the Code of Practice is designed to be a starting place for companies to prepare themselves for the AI Act, and it is not legally enforceable. Kaplan criticised the Code of Practice for going beyond the scope of the AI Act. Notably, multiple companies in Europe, including Airbus, Lufthansa, Mercedes-Benz, Philips, Siemens Energy, and others, have signed an open letter urging the EC to "stop the clock" on the AI Act. The Meta executive added in the post that the company shares the concerns raised by these businesses. He said that the "over-reach" by the EU will stifle the development and deployment of frontier AI models in Europe. Interestingly, OpenAI has already announced its intention of signing the Code of Practice.
Despite the protests, the EC reportedly announced that the AI Act will be rolled out as per the previously shared timeline. "I've seen, indeed, a lot of reporting, a lot of letters and a lot of things being said on the AI Act. Let me be as clear as possible, there is no stop the clock. There is no grace period. There is no pause," Commission spokesperson Thomas Regnier was quoted by Reuters as saying.
[13]
Microsoft likely to sign EU AI code of practice, Meta rebuffs guidelines - The Economic Times
Microsoft is likely to sign the EU's voluntary AI code of practice, supporting compliance with the bloc's new AI rules, while Meta has rejected it, citing legal uncertainties and overreach. The code requires transparency on training data and adherence to copyright law, and forms part of the EU's AI Act.
Microsoft will likely sign the European Union's code of practice to help companies comply with the bloc's landmark artificial intelligence rules, its president told Reuters on Friday, while Meta Platforms rebuffed the guidelines. Drawn up by 13 independent experts, the voluntary code of practice aims to provide legal certainty to signatories. They will have to publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. The code is part of the AI Act which came into force in June 2024 and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and thousands of companies. "I think it's likely we will sign. We need to read the documents," Microsoft President Brad Smith told Reuters. "Our goal is to find a way to be supportive and at the same time one of the things we really welcome is the direct engagement by the AI Office with industry," he said, referring to the EU's regulatory body for AI. Meta reiterated its criticism of the code. "Meta won't be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Meta's chief global affairs officer Joel Kaplan said in a blog post on LinkedIn on Friday. The U.S. social media giant has the same concerns as a group of 45 European companies, he said. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. OpenAI and Mistral have signed the code.
[14]
Meta calls EU's AI code overreach, will not sign it
Meta Platforms has declined to sign the European Union's code of practice for the AI Act, citing concerns about overreach and legal uncertainties for model developers. The voluntary framework aims to help companies comply with the AI Act, including copyright protections and transparency requirements.

Meta Platforms said it won't sign the code of practice for Europe's new set of laws governing artificial intelligence, calling the guidelines to help companies follow the AI Act overreach. "Europe is heading down the wrong path on AI," Meta's head of global affairs Joel Kaplan said in a post on LinkedIn. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." The European Union published the code of practice earlier this month. It's a voluntary framework meant to help companies put processes in place to stay in line with the bloc's sprawling AI Act, and includes copyright protections for creators and transparency requirements for advanced AI models. It also requires developers to provide documentation to describe their AI models' features. Agreeing to the code can give companies more legal protection if they're accused of falling foul of the act. It's the latest flashpoint between US tech companies and European regulators seeking to rein in their market power. US President Donald Trump's administration, which has lambasted the bloc's tech regulations and fines as unfairly targeting US firms, had reached out to the EU to argue against the code of practice in April, before it was finalised. Dozens of European companies including ASML Holding, Airbus SE and Mistral AI also asked the European Commission to suspend the AI Act's implementation for two years.
[15]
Meta Declines European Union's AI Pact, Claiming That The Proposed Agreement Overreaches Regulations, Threatens Innovation, And Forces Premature Changes That Could Slow Growth In The Tech Industry
Meta has refused to sign an AI Pact proposed by the European Union, arguing that the initiative imposes excessive demands that could stifle growth and innovation for the company and the industry as a whole. The refusal heightens tensions between tech firms and regulators over how to govern AI without derailing its potential. The EU introduced the voluntary AI Pact as an interim measure ahead of its AI Act, which was passed and came into effect earlier this year but will not be fully enforced until 2026. The AI Pact asks tech companies to adopt the Act's principles early, promoting transparency, accountability, and safety in the development of the technology. While most companies have agreed to join, Meta says that the pact duplicates obligations already set in the AI Act and risks forcing premature changes to its systems (via CNBC). Meta claims that these added demands could slow progress in a field that thrives on flexibility and rapid innovation. The company says it remains committed to working with European regulators to ensure its AI technologies are safe and compliant, but believes the pact goes "too far, too fast." European officials, on the other hand, say the pact is a crucial step toward ensuring that AI systems do not harm users or spread misinformation, particularly generative AI and recommendation algorithms, which play a growing role in everyday life. Meta warns that excessive regulation at this stage could limit the development and benefits of the technology, handing an advantage to competitors in less regulated regions. The company's stance sets it apart from some of its rivals, which have already embraced the pact to align themselves with EU standards.
Analysts and industry insiders suggest that Meta's position reflects a broader concern in the tech sector: balancing regulation with the need to innovate quickly in competitive markets. As governments around the world push to set AI rules, Meta's stance highlights the challenge of protecting the public while allowing tech firms to develop powerful tools. If it plays its hand well, Meta could still influence the shape of these regulations, though the debate is only beginning. The debate over the EU's AI Pact may influence how similar agreements and laws are shaped globally, and how much room companies have to develop and experiment with AI in the coming years.
[16]
Meta Rejects European Commission's AI Code of Practice, Citing 'Overreach' | PYMNTS.com
"This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Kaplan wrote in the post. Kaplan added that more than 40 of Europe's largest businesses signed a letter earlier this month, asking the European Commission to halt the implementation of the AI Act. "We share concerns raised by these businesses that this overreach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan wrote. Politico reported July 4 that 46 leaders of top European companies signed an open letter calling for a two-year pause on implementation of the AI Act, saying "unclear, overlapping and increasingly complex EU regulations" make it harder to do business in the region. The European Commission published the final version of the General-Purpose AI Code of Practice on July 10, saying this voluntary framework is designed to help artificial intelligence (AI) companies comply with the European Union's AI Act. The code of practice seeks to clarify legal obligations under the act for providers of general-purpose AI models, especially those posing systemic risks like ones that could aid in the development of chemical and biological weapons. The AI Act, which was approved in 2024, is the first comprehensive legal framework governing AI. It aims to ensure that AI systems used in the EU are safe, transparent and respectful of fundamental human rights. The code is voluntary, but AI model companies that sign on will benefit from lower administrative burdens and greater legal certainty, according to the commission.
OpenAI said in a July 11 blog post that it intended to sign the code of practice if the current version is formally approved. "Signing the Code reflects our commitment to providing capable, accessible and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age," OpenAI said in the post. "We have always developed models with transparency, accountability and safety at the forefront: principles that are also reflected in the Code."
[17]
Meta Won't Sign EU's AI Code of Practice, Chief Global Affairs Officer Says -- Update
Meta Platforms won't sign the European Union's code of practice for general-purpose artificial intelligence because it adds legal uncertainty and brings in measures that go beyond the scope of AI legislation in the bloc, Chief Global Affairs Officer Joel Kaplan said. The European Commission, the EU's executive arm, last week published the final version of a code of practice for general-purpose AI that model providers can choose whether or not to sign. EU officials said the code included guidance on safety and security, transparency and copyright to help signatories comply with the bloc's wide-ranging legislation on AI. EU lawmakers approved the AI Act last year, a law that bans certain uses of the technology, rolls out new transparency guidelines and requires risk assessments for AI systems that are deemed high-risk. Rules on general-purpose AI will be effective for companies as of Aug. 2. The commission's AI Office, a body that oversees implementation of the law, will enforce rules on new AI models after a year, and two years later for existing models. Companies that breach the law risk fines of up to 7% of their annual global revenue. "Europe is heading down the wrong path on AI," Kaplan wrote in a LinkedIn post. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Meta is the latest tech giant to criticize the EU's push to regulate AI. Earlier this month, chief executives of European companies, including Mistral AI, ASML Holding and Airbus, signed a letter asking the commission to delay enforcement of the law, saying overlapping and complex regulations were preventing the EU from becoming an AI leader. "We share concerns raised by these businesses that this overreach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. 
The announcement from Meta comes days after OpenAI said it would sign the code, subject to the current version being formally approved by the AI board, a body that includes representatives from each of the EU's 27 member states. "Signing the code reflects our commitment to providing capable, accessible, and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age," the ChatGPT maker said last week. The EU is seeking to catch up with the U.S. and China on AI and wants to develop a network of so-called AI gigafactories to help companies train the most complex models. Those facilities will be equipped with roughly 100,000 of the latest AI chips, around four times more than the number installed in AI factories being set up currently. OpenAI said it had submitted expressions of interest to take part in the process for the rollout of gigafactories in Europe. News Corp, owner of Dow Jones Newswires and The Wall Street Journal, has a content-licensing partnership with OpenAI.
[18]
Meta Won't Sign EU's AI Code of Practice, Chief Global Affairs Officer Says
Meta Platforms won't sign the European Union's code of practice for general-purpose artificial intelligence because it adds legal uncertainty and brings in measures that go beyond the scope of AI legislation in the bloc, Chief Global Affairs Officer Joel Kaplan said. The European Commission, the EU's executive arm, last week published the final version of a voluntary code of practice. Model providers can choose whether to sign the code, which EU officials said would help companies to comply with the bloc's wide-ranging legislation on AI. EU lawmakers approved the AI Act last year, a law that bans certain uses of AI, rolls out new transparency guidelines and requires risk assessments for AI systems that are deemed high-risk. "Europe is heading down the wrong path on AI," Kaplan said in a LinkedIn post. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act."
[19]
Meta Won't Sign EU's AI Code of Practice, Chief Global Affairs Officer Says -- 2nd Update
Meta Platforms' Chief Global Affairs Officer said the Facebook and Instagram owner wouldn't sign the European Union's code of practice for general-purpose artificial intelligence because it adds uncertainty and goes beyond the scope of AI legislation in the bloc. The European Commission, the EU's executive arm, last week published the final version of a code of practice for general-purpose AI that model providers can choose whether or not to sign. EU officials said the code included guidance on safety and security, transparency and copyright to help signatories comply with the bloc's wide-ranging legislation on AI. EU lawmakers last year approved the AI Act, a law that bans certain uses of the technology, rolls out new transparency guidelines and requires risk assessments for AI systems that are deemed high-risk. Rules on general-purpose AI will be effective for companies as of Aug. 2. The commission's AI Office, a body that oversees implementation of the law, will enforce rules on new AI models after a year, and two years later for existing models. Companies that breach the law risk fines of up to 7% of their annual global revenue. "Europe is heading down the wrong path on AI," Meta's Chief Global Affairs Officer Joel Kaplan wrote in a LinkedIn post. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Thomas Regnier, spokesman for the European Commission, said AI model providers that opt not to sign the code would still have to comply with the AI Act and might be exposed to more regulatory scrutiny. Meta is the latest tech giant to criticize the EU's push to regulate AI. Earlier this month, chief executives of European companies, including Mistral AI, ASML Holding and Airbus, signed a letter asking the commission to delay enforcement of the law, saying overlapping and complex regulations were preventing the EU from becoming an AI leader. 
"We share concerns raised by these businesses that this overreach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. The announcement from Meta comes days after OpenAI said it would sign the code, subject to the current version being formally approved by the AI board, a body that includes representatives from each of the EU's 27 member states. "Signing the code reflects our commitment to providing capable, accessible, and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age," the ChatGPT maker said last week. The EU is seeking to catch up with the U.S. and China on AI and wants to develop a network of so-called AI gigafactories to help companies train the most complex models. Those facilities will be equipped with roughly 100,000 of the latest AI chips, around four times more than the number installed in AI factories being set up currently. OpenAI said it had submitted expressions of interest to take part in the process for the rollout of gigafactories in Europe. News Corp, owner of Dow Jones Newswires and The Wall Street Journal, has a content-licensing partnership with OpenAI.
[20]
Microsoft likely to sign EU AI code of practice, Meta rebuffs guidelines
BRUSSELS (Reuters) -Microsoft will likely sign the European Union's code of practice to help companies comply with the bloc's landmark artificial intelligence rules, its president told Reuters on Friday, while Meta Platforms rebuffed the guidelines. Drawn up by 13 independent experts, the voluntary code of practice aims to provide legal certainty to signatories. They will have to publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. The code is part of the AI Act which came into force in June 2024 and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and thousands of companies. "I think it's likely we will sign. We need to read the documents," Microsoft President Brad Smith told Reuters. "Our goal is to find a way to be supportive and at the same time one of the things we really welcome is the direct engagement by the AI Office with industry," he said, referring to the EU's regulatory body for AI. Meta reiterated its criticism of the code. "Meta won't be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Meta's chief global affairs officer Joel Kaplan said in a blog post on LinkedIn on Friday. The U.S. social media giant has the same concerns as a group of 45 European companies, he said. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. (Reporting by Foo Yun Chee; Editing by Cynthia Osterman)
[21]
Meta rejects EU's voluntary AI rules: Here's why
The EU says GPAI developers like Meta must comply by 2027 or demonstrate alternative means of compliance and face increased regulatory scrutiny. Meta has refused to sign the European Union's newly introduced voluntary Code of Practice for general-purpose AI (GPAI) models. The European Commission, the EU's executive arm, introduced the voluntary code of practice last week, and it was set to come into force on August 2. Meta, along with Microsoft, Alphabet, and Mistral AI, has pushed back on the legislation, urging the EU to delay its implementation. However, the EU refuses to change its timeline. Meta's Chief Global Affairs Officer, Joel Kaplan, criticised the code in a LinkedIn post, calling it legally uncertain and overly restrictive. He wrote, "Europe is heading down the wrong path on AI." Kaplan added that Meta has carefully reviewed the European Commission's Code of Practice for general-purpose AI models and won't be signing it, as the code introduces legal uncertainties for model developers and contains measures that go far beyond the scope of the AI Act. Introduced earlier this month, the voluntary framework is designed to help AI companies prepare for the bloc's legislation for regulating AI. The code requires companies to maintain and update documentation on their AI systems, refrain from using pirated content for training, and comply with content owners' requests to opt out of AI training datasets. Kaplan called the EU's approach an "over-reach," warning that it could throttle the development and deployment of frontier AI models in Europe and stunt European companies looking to build businesses on top of them. He also highlighted that as many as 44 businesses and policymakers across Europe, including Bosch, Siemens, SAP, Airbus and BNP, have signed a letter urging the Commission to delay the implementation of new AI regulation.
However, the European Commission released updated guidelines on Friday stating that companies that develop GPAI models with "systemic risk," including Meta, must fully comply with the rules by August 2027. Businesses failing to do so "will have to demonstrate other means of compliance" or face "more regulatory scrutiny," EU spokesperson Thomas Regnier said in a statement.
Meta declines to sign the European Union's voluntary AI code of practice, citing legal uncertainties and overreach, while other tech giants consider compliance with the upcoming AI Act.
Meta, the tech giant behind Facebook, has refused to sign the European Union's voluntary AI code of practice, just weeks before the bloc's AI Act is set to take effect. Joel Kaplan, Meta's chief global affairs officer, stated, "Europe is heading down the wrong path on AI," arguing that the code introduces legal uncertainties and measures beyond the scope of the AI Act 1.
The EU's code of practice, published earlier this month, aims to help companies implement processes and systems to comply with the bloc's upcoming AI legislation. It requires companies to provide and update documentation about their AI tools, bans the use of pirated content for AI training, and mandates compliance with content owners' requests regarding data set usage 1.
The AI Act, a risk-based regulation for AI applications, is set to take effect on August 2. It categorizes AI uses into risk levels, banning some "unacceptable risk" cases outright and defining "high-risk" uses in areas like biometrics, education, and employment 1.
The act requires AI system registration and imposes risk and quality management obligations on developers. Notably, it will affect providers of "general-purpose AI models with systemic risk," including companies like OpenAI, Anthropic, Google, and Meta 4.
While Meta has taken a strong stance against the EU's approach, other tech giants have shown varying responses:
OpenAI and Mistral: Both companies have already signed the code 5.
European industry: A group of over 45 European companies, including Airbus and Siemens, have urged the EU to postpone the implementation of the AI Act by two years, citing concerns about compliance uncertainty 2.
Meta argues that the EU's approach will "throttle the development and deployment of frontier AI models in Europe" and hinder European companies building businesses on these models 3. This stance aligns with Meta's history of pushing back against EU regulations, having already faced significant fines for antitrust violations 2.
The EU, however, maintains that the code and the AI Act are crucial for ensuring AI safety, transparency, and alignment with European values. Companies that don't sign the voluntary code may face increased regulatory scrutiny 4.
The EU's efforts to regulate AI stand in contrast to approaches in other regions, particularly the United States, where regulations are generally less stringent 2. This divergence could have significant implications for the global AI landscape, potentially influencing how companies develop and deploy AI technologies across different markets.
As the August 2 deadline approaches, the tech industry's response to the EU's AI regulations will likely shape the future of AI development and governance not only in Europe but potentially worldwide.