UK Lawmakers Accuse Google DeepMind of Violating AI Safety Pledges with Gemini 2.5 Pro Release

Reviewed by Nidhi Govil

2 Sources

A group of 60 UK parliamentarians has accused Google DeepMind of breaching international AI safety commitments by delaying the release of safety information for its Gemini 2.5 Pro model.

UK Lawmakers Challenge Google DeepMind's AI Safety Practices

In a significant development at the intersection of artificial intelligence and public policy, a cross-party group of 60 UK parliamentarians has accused Google DeepMind of violating international pledges on AI safety. The accusation stems from the company's handling of the release of its Gemini 2.5 Pro model in March 2025 [1].

The Controversy Surrounding Gemini 2.5 Pro

Source: TIME

Google's release of Gemini 2.5 Pro, which the company claimed outperformed rival AI systems on industry benchmarks by "meaningful margins," has come under scrutiny. The primary issue is the delay in publishing detailed information on safety tests, which the lawmakers argue contradicts the commitments made by Google and other AI companies at an international AI summit in February 2024 [2].

AI Safety Commitments and Their Importance

At the summit co-hosted by the UK and South Korean governments, Google, along with other major AI companies, signed the Frontier AI Safety Commitments. These pledges included promises to "publicly report" system capabilities and risk assessments, and to explain the involvement of external actors in testing [1]. The importance of these commitments lies in their role as a primary source of information for the public and lawmakers to understand emerging AI risks, especially in the absence of binding regulations.

The Lawmakers' Accusations

The open letter, shared exclusively with TIME, accuses Google of:

  1. Failing to honor its international safety commitments
  2. Setting a dangerous precedent for AI development
  3. Threatening the fragile norms promoting safer AI development [1]

The letter specifically points out that Google neglected to publish detailed safety test information for over a month after the release of Gemini 2.5 Pro [1].

Google's Response and Ongoing Debate

Google DeepMind has defended its actions, stating that any suggestion of reneging on commitments is "inaccurate." The company maintains that it is fulfilling its public commitments, including the Seoul Frontier AI Safety Commitments [2]. Google also asserts that its models, including Gemini 2.5, undergo rigorous safety checks, involving the UK AI Security Institute and other third-party testers [2].

Broader Industry Context

This controversy is not isolated to Google. Other major AI companies have faced similar criticisms:

  1. Meta's model card for Llama 4 was criticized for being brief and limited in detail
  2. OpenAI announced it would not publish a technical safety report for its GPT-4.1 model, arguing it is not a "frontier model" [2]

Calls for Improved Transparency and Commitment

The lawmakers' letter calls on Google to:

  1. Reaffirm its commitments to AI safety
  2. Define deployment clearly as the point when a model becomes publicly accessible
  3. Commit to publishing safety evaluation reports on a set timeline for all future model releases
  4. Provide full transparency by naming the government agencies and independent third parties involved in testing, along with exact testing timelines [2]

As the AI industry continues to evolve rapidly, this incident highlights the ongoing challenges in balancing innovation with safety and transparency in AI development.
