2 Sources
[1]
Exclusive: 60 U.K. Lawmakers Accuse Google of Breaking AI Safety Pledge
A cross-party group of 60 U.K. parliamentarians has accused Google DeepMind of violating international pledges to safely develop artificial intelligence, in an open letter shared exclusively with TIME ahead of publication. The letter, released on August 29 by activist group PauseAI U.K., says that Google's March release of Gemini 2.5 Pro without accompanying details on safety testing "sets a dangerous precedent." The letter, whose signatories include digital rights campaigner Baroness Beeban Kidron and former Defence Secretary Des Browne, calls on Google to clarify its commitment. For years, experts in AI, including Google DeepMind's CEO Demis Hassabis, have warned that AI could pose catastrophic risks to public safety and national security -- for example, by helping would-be bio-terrorists design a new pathogen or hackers take down critical infrastructure. In an effort to manage those risks, at an international AI summit co-hosted by the U.K. and South Korean governments in February 2024, Google, OpenAI, and others signed the Frontier AI Safety Commitments. Signatories pledged to "publicly report" system capabilities and risk assessments and to explain if and how external actors, such as government AI safety institutes, were involved in testing. Without binding regulation, the public and lawmakers have relied largely on information stemming from voluntary pledges to understand AI's emerging risks. Yet when Google released Gemini 2.5 Pro on March 25 -- which it said beat rival AI systems on industry benchmarks by "meaningful margins" -- the company neglected to publish detailed information on safety tests for over a month. The letter says this not only reflects a "failure to honour" its international safety commitments, but also threatens the fragile norms promoting safer AI development.
"If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards," Browne wrote in a statement accompanying the letter.
[2]
Google violated AI safety commitments, British lawmakers say in an open letter
At an international summit co-hosted by the U.K. and South Korea in February 2024, Google and other signatories promised to "publicly report" their models' capabilities and risk assessments, as well as disclose whether outside organizations, such as government AI safety institutes, had been involved in testing. However, when Google released Gemini 2.5 Pro in March 2025, the company failed to publish a model card, the document that details key information about how models are tested and built. This was despite the company's assertions that the new model outperformed competitors on industry benchmarks by "meaningful margins." Instead, the AI lab released a simplified six-page model card three weeks after it first made the model publicly available as a "preview" version. At the time, one AI governance expert called this report "meager" and "worrisome." The letter called Google's delay a "failure to honour" the company's commitment at the summit and "a troubling breach of trust with governments and the public." The letter also took issue with what it called a "minimal 'model card'" that lacked "any substantive detail about external evaluations," as well as Google's refusal to confirm whether government agencies like the U.K. AI Security Institute participated in testing. A spokesperson for Google DeepMind previously told Fortune that any suggestion that the company had reneged on its commitments was "inaccurate." The company also said in May that a more detailed "technical report" would come later, when it makes a final version of the Gemini 2.5 Pro "model family" fully available to the public. The company appeared to provide a longer report in late June, months after the full version was released. The company did not immediately respond to Fortune's request for comment about the recent letter. However, as a spokesperson told TIME: "We're fulfilling our public commitments, including the Seoul Frontier AI Safety Commitments."
"As part of our development process, our models undergo rigorous safety checks, including by UK AISI and other third-party testers -- and Gemini 2.5 is no exception," they added. When Google first released the preview version of Gemini 2.5 Pro, critics said that the missing system card appeared to violate several other pledges the AI company had made, including the 2023 White House Commitments and a voluntary Code of Conduct on Artificial Intelligence signed in October 2023. Google wasn't the only company to sign these pledges and then appear to pull back on safety disclosures. Meta's model card for its frontier Llama 4 model was about as brief and limited in detail as the one Google released for Gemini 2.5 Pro, and it, too, drew criticism from AI safety researchers. Earlier this year, OpenAI announced it would not publish a technical safety report for its new GPT-4.1 model. The company argued that GPT-4.1 is "not a frontier model," since its reasoning-focused systems like o3 and o4-mini outperform it on many benchmarks. The recent letter calls on Google to reaffirm its commitments to AI safety, asking the tech company to define deployment clearly as the point when a model becomes publicly accessible, commit to publishing safety evaluation reports on a set timeline for all future model releases, and provide full transparency for each release by naming the government agencies and independent third parties involved in testing, along with the exact testing timelines. "If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards," Lord Browne of Ladyton, a Member of the House of Lords and one of the letter's signatories, said in a statement.
A group of 60 UK parliamentarians has accused Google DeepMind of breaching international AI safety commitments by delaying the release of safety information for its Gemini 2.5 Pro model.
In a significant development at the intersection of artificial intelligence and public policy, a cross-party group of 60 UK parliamentarians has accused Google DeepMind of violating international pledges on AI safety. The accusation stems from the company's handling of the release of its Gemini 2.5 Pro model in March 2025 [1].
Google's release of Gemini 2.5 Pro, which the company claimed outperformed rival AI systems on industry benchmarks by "meaningful margins," has come under scrutiny. The primary issue is the delay in publishing detailed information on safety tests, which the lawmakers argue contradicts the commitments made by Google and other AI companies at an international AI summit in February 2024 [2].
At the summit co-hosted by the UK and South Korean governments, Google, along with other major AI companies, signed the Frontier AI Safety Commitments. These pledges included promises to "publicly report" system capabilities and risk assessments, and to explain the involvement of external actors in testing [1]. The importance of these commitments lies in their role as a primary source of information for the public and lawmakers to understand emerging AI risks, especially in the absence of binding regulations.
The open letter, shared exclusively with TIME, accuses Google of:
- Setting a "dangerous precedent" by releasing Gemini 2.5 Pro without accompanying details on safety testing
- A "failure to honour" its international safety commitments and "a troubling breach of trust with governments and the public"
- Publishing only a "minimal 'model card'" that lacked "any substantive detail about external evaluations"
- Refusing to confirm whether government agencies such as the UK AI Security Institute participated in testing
The letter specifically points out that Google neglected to publish detailed safety test information for over a month after the release of Gemini 2.5 Pro [1].
Google DeepMind has defended its actions, stating that any suggestion of reneging on commitments is "inaccurate." The company maintains that it is fulfilling its public commitments, including the Seoul Frontier AI Safety Commitments [2]. Google also asserts that its models, including Gemini 2.5, undergo rigorous safety checks, involving the UK AI Security Institute and other third-party testers [2].
This controversy is not isolated to Google. Other major AI companies have faced similar criticisms:
- Meta's model card for its frontier Llama 4 model was about as brief and limited in detail as the one Google released for Gemini 2.5 Pro, and it also drew criticism from AI safety researchers
- OpenAI announced it would not publish a technical safety report for its GPT-4.1 model, arguing that GPT-4.1 is "not a frontier model" since its reasoning-focused systems like o3 and o4-mini outperform it on many benchmarks
The lawmakers' letter calls on Google to:
- Define deployment clearly as the point when a model becomes publicly accessible
- Commit to publishing safety evaluation reports on a set timeline for all future model releases
- Provide full transparency for each release by naming the government agencies and independent third parties involved in testing, along with the exact testing timelines
As the AI industry continues to evolve rapidly, this incident highlights the ongoing challenges in balancing innovation with safety and transparency in AI development.
Meta Platforms is considering collaborations with AI rivals Google and OpenAI to improve its AI applications, potentially integrating external models into its products while developing its own AI capabilities.
5 Sources
Technology
21 hrs ago
Meta announces significant changes to its AI chatbot policies, focusing on teen safety by restricting conversations on sensitive topics and limiting access to certain AI characters.
8 Sources
Technology
21 hrs ago
Meta faces scrutiny for hosting AI chatbots impersonating celebrities without permission, raising concerns about privacy, ethics, and potential legal implications.
7 Sources
Technology
21 hrs ago
A groundbreaking AI-powered stethoscope has been developed that can detect three major heart conditions in just 15 seconds, potentially transforming early diagnosis and treatment of heart diseases.
5 Sources
Health
13 hrs ago
Walmart unveils a suite of AI-powered 'super agents' and advanced digital twin technology, signaling a major shift in retail innovation and operational efficiency.
2 Sources
Technology
13 hrs ago