13 Sources
[1]
Google confirms it will sign the EU AI Code of Practice
Big Tech is increasingly addicted to AI, but many companies are allergic to regulation, bucking suggestions that they adhere to copyright law and provide data on training. In a rare move, Google has confirmed it will sign the European Union's AI Code of Practice, a framework it initially opposed for being too harsh. However, Google isn't totally on board with Europe's efforts to rein in the AI explosion. The company's head of global affairs, Kent Walker, noted that the code could stifle innovation if it's not applied carefully, and that's something Google hopes to prevent. While Google was initially opposed to the Code of Practice, Walker says the input it has provided to the European Commission has been well-received, and the result is a legal framework it believes can provide Europe with access to "secure, first-rate AI tools." The company claims that the expansion of such tools on the continent could boost the economy by 8 percent (about 1.4 trillion euros) annually by 2034. These supposed economic gains are being dangled like bait to entice business interests in the EU to align with Google on the Code of Practice. While the company is signing the agreement, it appears interested in influencing the way it is implemented. Walker says Google remains concerned that tightening copyright guidelines and forced disclosure of possible trade secrets could slow innovation. Having a seat at the table could make it easier to move the needle on regulation than if it followed some of its competitors in eschewing voluntary compliance. Google's position is in stark contrast to that of Meta, which has steadfastly refused to sign the agreement. The Facebook owner has claimed the voluntary Code of Practice could impose too many limits on frontier model development, an unsurprising position for the company to take as it looks to supercharge its so-called "superintelligence" project. Microsoft is still mulling the agreement and may eventually sign it, while ChatGPT maker OpenAI has signaled it will sign the code.
[2]
Google says it will sign EU's AI code of practice | TechCrunch
Google has confirmed it will sign the European Union's general purpose AI code of practice, a voluntary framework that aims to help AI developers implement processes and systems to comply with the bloc's AI Act. Notably, Meta earlier this month said it would not sign the code, calling the EU's implementation of its AI legislation "overreach," and stating that Europe was "heading down the wrong path on AI." Google's commitment comes days before rules for providers of "general-purpose AI models with systemic risk" go into effect on August 2. Companies likely to be affected by these rules include major names such as Anthropic, Google, Meta, and OpenAI, as well as the makers of several other large generative models, and they will have two years to comply fully with the AI Act. In a blog post on Wednesday, Kent Walker, president of global affairs at Google, conceded that the final version of the code of practice was better than what the EU proposed initially, but he still noted reservations about the AI Act and the code. "We remain concerned that the AI Act and Code risk slowing Europe's development and deployment of AI. In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness," wrote Walker. By signing the EU's code of practice, AI companies agree to follow a slate of guidelines, including providing updated documentation about their AI tools and services, not training AI on pirated content, and complying with requests from content owners not to use their works in their datasets. A risk-based regulation for AI applications, the EU's landmark AI Act bans some "unacceptable risk" use cases, such as cognitive behavioral manipulation or social scoring. The rules also define a set of "high-risk" uses, including biometrics and facial recognition, and the use of AI in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.
[3]
Google to sign EU's AI code of practice despite concerns
BRUSSELS, July 30 (Reuters) - Alphabet's (GOOGL.O) Google will sign the European Union's code of practice which aims to help companies comply with the bloc's landmark artificial intelligence rules, its global affairs president said in a blog post on Wednesday, though he voiced some concerns. The voluntary code of practice, drawn up by 13 independent experts, aims to provide legal certainty to signatories on how to meet requirements under the Artificial Intelligence Act (AI Act), such as issuing summaries of the content used to train their general-purpose AI models and complying with EU copyright law. "We do so with the hope that this code, as applied, will promote European citizens' and businesses' access to secure, first-rate AI tools as they become available," Kent Walker, who is also Alphabet's chief legal officer, said in the blog post. He added, however, that Google was concerned that the AI Act and code of practice risk slowing Europe's development and deployment of AI. "In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness," Walker said. Microsoft (MSFT.O) will likely sign the code, its president, Brad Smith, told Reuters earlier this month, while Meta Platforms (META.O) declined to do so and cited the legal uncertainties for model developers. The European Union enacted the guardrails for the use of artificial intelligence in an attempt to set a potential global benchmark for a technology used in business and everyday life and dominated by the United States and China. Reporting by Foo Yun Chee in Brussels; Editing by Matthew Lewis
[4]
Big Tech split? Google to sign EU's AI guidelines despite Meta snub
Google on Wednesday said it will sign the European Union's guidelines on artificial intelligence, which Meta previously rebuffed due to concerns they could stifle innovation. In a blog post, Google said it planned to sign the code in the hope that it would promote European citizens' access to advanced new AI tools, as they become available. Google's endorsement comes after Meta recently said it would refuse to sign the code over concerns that it could constrain European AI innovation. "Prompt and widespread deployment is important," Kent Walker, president of global affairs of Google, said in the post, adding that embracing AI could boost Europe's economy by 1.4 trillion euros ($1.62 trillion) annually by 2034. The European Commission, which is the executive body of the EU, published a final iteration of its code of practice for general-purpose AI models, leaving it up to companies to decide if they want to sign.
[5]
Google will sign EU's AI Code of Practice
This signals the company's presumed compliance with the EU's AI Act. Google says it will sign the European Union's new AI Code of Practice, which provides a framework for compliance with the EU's AI Act. The act itself was passed in 2024, but its many provisions will take months to years to come into effect. The non-binding Code of Practice is a voluntary measure intended to help ensure that companies generally meet the obligations laid out by the Act in the meantime. In a blog post announcing Google's participation, the tech giant shared some skepticism about the AI Act's impact on the technology in the EU. The statement reads in part, "While the final version of the Code comes closer to supporting Europe's innovation and economic goals than where it began -- and we appreciate the opportunity we have been provided to submit comments -- we remain concerned that the AI Act and Code risk slowing Europe's development and deployment of AI." Just recently, Meta said it would not be signing the Code of Practice. The company's chief global affairs officer, Joel Kaplan, called the Code an "over-reach." In a statement, Kaplan said, "Europe is heading down the wrong path on AI." The EU's AI Act is the first of its kind from a major regulator and is comprehensive in its approach. Meanwhile, the United States is in the earliest stages of determining its approach to AI regulation. Obligations under the EU's AI Act are being implemented in a staggered fashion, though rules governing general-purpose AI (GPAI) models will apply on August 2, 2025. Any models brought to market before then must be fully compliant with the rules by August 2, 2027. The current implementation timeline lists assessment and enforcement steps as far out as August 2031.
[6]
Google joins EU code for powerful AI models rebuffed by Meta
Google on Wednesday said it would join the likes of ChatGPT-maker OpenAI and sign the EU's set of recommendations for the most powerful artificial intelligence models, which has been rebuffed by Meta. The European Union this month published its long-delayed code of practice, which would apply to the most advanced AI models such as Google's Gemini. The announcement came as Brussels resists pressure from the industry and the United States, which warn the sweeping rules could hurt the growing sector in Europe, to delay their enforcement. "We will join several other companies, including US model providers, in signing" the code, said Google's president of global affairs Kent Walker. The code was published just weeks before the August 2 start of the compliance period for complex models known as general purpose AI -- systems that have a vast range of functions. Walker said Google would provide feedback, and warned the rules "risk slowing Europe's development and deployment of AI." "Departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness," Walker added. OpenAI and French AI startup Mistral have said they would sign the code, but Meta -- a vocal critic of the EU's digital rules -- said it would not follow suit. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Meta's chief global affairs officer Joel Kaplan said in a LinkedIn post this month. Facebook and Instagram owner Meta and the EU have locked horns over a range of issues, most recently over the EU's political advertising rules. The US tech titan said it would ban such advertising instead of applying EU rules. Dozens of Europe's biggest companies, including France's Airbus and Germany's Lufthansa, urged the EU this month to hit pause on the AI rules, warning against steps that could put the bloc behind in the global AI race.
[7]
We will sign the EU AI Code of Practice.
We will join several other companies, including U.S. model providers, in signing the European Union's General Purpose AI Code of Practice. We do so with the hope that this Code, as applied, will promote European citizens' and businesses' access to secure, first-rate AI tools as they become available. Prompt and widespread deployment is important. Europe stands to gain significantly, potentially boosting its economy by 8% (€1.4 trillion) annually by 2034. While the final version of the Code comes closer to supporting Europe's innovation and economic goals than where it began -- and we appreciate the opportunity we have been provided to submit comments -- we remain concerned that the AI Act and Code risk slowing Europe's development and deployment of AI. In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness. We are committed to working with the AI Office to ensure the Code is proportionate and responsive to the rapid and dynamic evolution of AI. And we will be an active voice in supporting a pro-innovation approach that leads to future investment and innovation in Europe that benefits everyone.
[8]
Google will sign up to EU's AI Code despite concerns
Meta is so far the only US tech giant that has said it will not sign up to the voluntary rules. US tech giant Google said it will sign the EU's AI Code of Practice on General Purpose AI (GPAI), while still expressing concerns about the bloc's AI rules and their effect on innovation. "While the final version of the Code comes closer to supporting Europe's innovation and economic goals [...] we remain concerned that the AI Act and Code risk slowing down Europe's development and deployment of AI," the president of global affairs at Google's parent company Alphabet, Kent Walker, said in a blog post on Wednesday. "In particular departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployments, harming Europe's competitiveness," Walker said. The Code, which the European Commission released earlier this month, is a voluntary set of rules covering transparency, copyright, and safety and security, aiming to help providers of GPAI models comply with the AI Act. Providers who sign up are expected to be compliant with the AI Act and can anticipate more legal certainty, while others will face more inspections. The rules on GPAI under the AI Act enter into force on 2 August. Companies that already have tools on the market will have two years to implement the rules; tools launched after that date must be compliant with immediate effect. US tech giant Meta said last week that it will not sign, having slammed the rules for stifling innovation. The drafting process of the Code has also drawn criticism from rightsholders, who fear it violates copyright rules. The Commission will make public a list of the signatories on 1 August. Google said in its statement that it is committed to working with the AI Office to ensure that the Code is "proportionate and responsive to the rapid and dynamic evolution of AI".
[9]
A week after Meta turned it down, Google agrees to sign EU's AI Code of Practice while still raising its own concerns
"We will join several other companies, including U.S. model providers, in signing the European Union's General Purpose AI Code of Practice. " You can't go two steps in the tech space without hearing something about AI. Be it good or bad, AI has a complete stranglehold on the industry with its new and confusing power. To help mitigate some of this confusion, the EU has gathered independant experts together and penned The General-Purpose AI Code of Practice, a document that outlines rules and guides for the industry in complying with the AI Act's obligations. The code is tool made available to all to use voluntarily, and Google has just signed on despite Meta turning its nose at the document. Yesterday Google announced its agreement to adhere to the EU AI Code of Practice in a blog post, which reads "We will join several other companies, including U.S. model providers, in signing the European Union's General Purpose AI Code of Practice. We do so with the hope that this Code, as applied, will promote European citizens' and businesses' access to secure, first-rate AI tools as they become available. Prompt and widespread deployment is important." The code itself is currently a living document, and is to be assessed and refined. Google's statement makes the case it believes in what the EU is trying to achieve with the code, and of course underlines the profits it believes AI can deliver. It also outlined concerns it has with the document around copyright law and other things that might slow the development of generative AI. "we remain concerned that the AI Act and Code risk slowing Europe's development and deployment of AI. In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness." Reads the statement. These concerns are similar to those highlighted by Meta boss, Joel Kaplan earlier this week on Linkedin. In the post Kaplan declares Meta won't be signing the GPAI due to legal issues he believes are unresolved in the document. "Europe is heading down the wrong path on AI." Says Kaplan "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." It's worth noting, even a US federal judge has been confused about Meta's warped understanding when it comes to things like copyright and fair use. But why wouldn't companies want to sign this new code? Looking at it, The GPAI has three main chapters: Transparency, Copyright, and Safety and Security. I think those first two might be what's turning them off. Transparency gives tips and forms on how to document AI development and work, which a lot of companies have been hesitant to do. Legal issues around the data a generative AI was trained on are one of the more confusing areas of AI with many battles developing over ownership of work. The EU is also famously more friendly to individual ownership rights than many other lawmakers, especially when it comes to AI. This is something many feel we need, especially around things like false information spready by AI. A lot of businesses with profits in their eyes dislike as it slows down their progress worrying about silly things like credit or justice. 
With many AIs being trained on content they don't own, many aren't keen to be open about their sources. Copyright ties directly into this, and is all about helping AI makers comply with European copyright law. Again, this is an area a lot of generative AI companies are probably concerned about, because it likely means they may not own the content their models generate, especially if those models were trained on stolen content to begin with. Safety and Security is more about minimising risks, so I don't see it being as contentious as the other two chapters in encouraging businesses to sign on. It's still a worthwhile chapter though, as the risks against systems with AI are ever growing. It seems like Copyright and Transparency are likely the chapters turning companies like Meta away from the EU's new code. These are incredibly important when it comes to questions of ownership, which, again, is likely to upset a lot of these companies. Sure, there are other rules and restrictions here that a company might take issue with, but not wanting to show your work is always a huge problem, and often points to the work not being yours at all.
[10]
Google to Sign EU AI Code of Practice, Warns of Regulations Slowing AI Growth
Meta has said it will not sign the voluntary code, while OpenAI and Microsoft have signaled their willingness to do so. Google announced on Wednesday its plans to sign onto the European Union's "Code of Practice" for artificial intelligence, a set of guidelines on how businesses can follow the rules of the E.U.'s new AI Act. The tech giant, owned by Alphabet (GOOGL), said in a Wednesday blog post that it would sign the code as the final version "comes closer to supporting Europe's innovation and economic goals than where it began," but noted that "we remain concerned that the AI Act and Code risk slowing Europe's development and deployment of AI." "In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness," the post from Google President of Global Affairs Kent Walker said. The European Commission, the enforcement arm of the E.U., released the code earlier this month. The Commission said companies can abide by the AI Act by signing and following the Code of Practice, and said that would "reduce their administrative burden and give them more legal certainty than if they proved compliance through other methods." Big tech firms have been divided on the law and code of practice, as ChatGPT maker OpenAI said earlier this month that it would sign the code, while Facebook and Instagram parent Meta Platforms (META) opted not to do so. Microsoft's (MSFT) President Brad Smith told Reuters earlier this month that "it's likely" that the Copilot maker will sign onto the code. Meta's Chief Global Affairs Officer Joel Kaplan said this month that the code "introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," and said that Meta believes the policies will slow AI development in Europe. Shares of Google's parent company Alphabet were little changed on Wednesday morning.
[11]
Google signs EU AI Code of Practice, but not without its concerns
Google has agreed to sign the EU's AI Code of Practice, a document that outlines guidelines for the use of AI in the EU going forward. The code is still being refined, but it largely focuses on three main chapters covering transparency, copyright, and safety and security. Meta, the owner of Facebook and Instagram, turned down the opportunity to sign the code, as it believes that Europe is going down the wrong path when it comes to AI. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," said Meta's Joel Kaplan. Google, on the other hand, has agreed to sign the document, though it has concerns that copyright requirements could slow down the progress of generative AI. "We remain concerned that the AI Act and Code risk slowing Europe's development and deployment of AI. In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness," reads a statement from Google. Since the inception of generative AI, large companies like Google and Meta have been cherry-picking content from wherever they please to train models, with no care for copyright law. The EU is notably tighter on copyright law than other regions, and this could therefore limit Google and Meta's models to content owned by those two companies, for example. This isn't going to be a popular policy with large AI model producers, as they'll want to show off the latest steps their learning machines have made and won't be able to make big strides if they can't steal other people's IP and copyrighted content.
[12]
Google and Meta Go Separate Ways on EU AI Act: What's at Stake?
Google Signs the EU AI Code of Practice Along with OpenAI, While Meta Refuses and Shares a Big Concern. Google has agreed to sign the European Union's code of practice for developing powerful AI models such as text, image, and video generators. Meta, however, has refused to sign the code, saying that the guidelines impose limitations that go beyond the scope of the AI Act. The tech giant further expressed its concern that complying with these rules may restrict innovation. Joel Kaplan, the Chief Global Affairs Officer at Meta, highlighted this issue in a recent post on LinkedIn, saying that the EU code creates legal uncertainties for model developers. Meta has clashed with the EU's rules before, most recently over political advertising requirements. Google, on the other hand, supports the code and frames it as a win for European citizens, who the company says will gain access to new and powerful AI models as they keep rolling out. The President of Global Affairs at Google, Kent Walker, also said that the widespread utilization of artificial intelligence will increase Europe's economy significantly, suggesting it could grow by around US$1.62 trillion annually by 2034.
[13]
Google to sign EU's AI code of practice despite concerns
BRUSSELS (Reuters) -Alphabet's Google will sign the European Union's code of practice which aims to help companies comply with the bloc's landmark artificial intelligence rules, its global affairs president said in a blog post on Wednesday, though he voiced some concerns. The voluntary code of practice, drawn up by 13 independent experts, aims to provide legal certainty to signatories on how to meet requirements under the Artificial Intelligence Act (AI Act), such as issuing summaries of the content used to train their general-purpose AI models and complying with EU copyright law. "We do so with the hope that this code, as applied, will promote European citizens' and businesses' access to secure, first-rate AI tools as they become available," Kent Walker, who is also Alphabet's chief legal officer, said in the blog post. He added, however, that Google was concerned that the AI Act and code of practice risk slowing Europe's development and deployment of AI. "In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness," Walker said. Microsoft will likely sign the code, its president, Brad Smith, told Reuters earlier this month, while Meta Platforms declined to do so and cited the legal uncertainties for model developers. The European Union enacted the guardrails for the use of artificial intelligence in an attempt to set a potential global benchmark for a technology used in business and everyday life and dominated by the United States and China. (Reporting by Foo Yun Chee in Brussels; Editing by Matthew Lewis)
Google announces its intention to sign the European Union's AI Code of Practice, contrasting with Meta's refusal and highlighting the ongoing debate about AI regulation in the tech industry.
In a significant move, Google has announced its intention to sign the European Union's AI Code of Practice, a voluntary framework designed to help AI developers comply with the bloc's landmark AI Act [1]. Kent Walker, Google's president of global affairs, confirmed this decision in a blog post, stating that the company hopes the code will "promote European citizens' and businesses' access to secure, first-rate AI tools" [2].
Google's decision stands in stark contrast to Meta's position. The Facebook owner has steadfastly refused to sign the agreement, claiming it could impose too many limits on frontier model development [3]. Joel Kaplan, Meta's chief global affairs officer, went as far as to call the Code an "over-reach," stating that "Europe is heading down the wrong path on AI" [4].
The EU's AI Act, passed in 2024, is the first comprehensive AI regulation from a major governing body. It bans certain "unacceptable risk" use cases and defines "high-risk" applications of AI [1]. The voluntary Code of Practice aims to help companies implement processes and systems to comply with this Act.
Despite agreeing to sign, Google has expressed some concerns about the AI Act and Code. Walker noted that they "risk slowing Europe's development and deployment of AI" [2]. Specific concerns include potential departures from EU copyright law, steps that might slow approvals, and requirements that could expose trade secrets [1].
Google claims that the expansion of AI tools could boost the European economy by 8 percent (about 1.4 trillion euros) annually by 2034 [3]. This economic argument appears to be part of Google's strategy to influence the implementation of the Code and the broader AI Act.
The EU's AI Act is being implemented in stages. Rules for "general-purpose AI models with systemic risk" go into effect on August 2, 2025, and companies with models already on the market will have two years, until August 2, 2027, to comply fully [1]. The current timeline includes assessment and enforcement steps as far out as August 2031 [4].
As the first major regulatory framework for AI, the EU's approach could set a potential global benchmark. This comes at a time when the United States is still in the early stages of determining its approach to AI regulation [4]. The divergent responses from tech giants like Google and Meta highlight the ongoing debate about how to balance innovation with responsible AI development and deployment.