Google Abandons AI Ethics Pledge, Opening Door to Military Applications


Google has quietly removed its commitment not to use AI for weapons or surveillance, signaling a shift toward potential military applications amid growing competition and national security concerns.

Google Revises AI Principles, Removing 'Do No Harm' Clause

In a significant shift from its previous ethical stance, Google has quietly removed key passages from its AI principles that had committed the company to avoid using artificial intelligence for potentially harmful applications, including weapons and surveillance [1]. This change, first noticed by Bloomberg, marks a departure from Google's earlier position on responsible AI development [5].

The Shift in Google's AI Ethics

The now-deleted section of Google's AI principles, titled "AI applications we will not pursue," had explicitly stated that the company would refrain from developing technologies "that cause or are likely to cause overall harm," with weapons being a specific example [2]. This revision comes in the wake of U.S. President Donald Trump revoking former President Joe Biden's executive order aimed at promoting safe, secure, and trustworthy development and use of AI [1].

Growing Trend of Militarized AI

Google's decision follows a recent trend of big tech companies entering the national security arena and accommodating more military applications of AI [2]:

  1. Meta announced it would make its "Llama" AI models available to government agencies and private companies involved in defense and national security.
  2. Anthropic teamed up with Palantir and Amazon Web Services to provide U.S. intelligence and defense agencies access to its AI models.
  3. OpenAI partnered with defense startup Anduril Industries to develop AI for the U.S. Department of Defense.

Justifications and Concerns

Google has defended this change, citing global AI competition, complex geopolitical landscapes, and national security interests as reasons for revising its AI principles [2]. The company's AI chief, Demis Hassabis, framed the change as inevitable progress rather than a compromise [3].

However, this shift has raised concerns among experts and former employees:

  1. Margaret Mitchell, former co-lead of Google's ethical AI team, expressed worry about the implications of removing the "harm" clause, suggesting it could lead to the deployment of technology capable of killing people [5].
  2. Human Rights Watch criticized the removal of explicit statements regarding weapons development in the original principles [2].

Implications and Future Outlook

The revision of Google's AI principles is part of a larger trend among tech giants to reconsider previously held ethical positions [5]. This shift could lead to:

  1. Increased competition with rivals already involved in military AI projects.
  2. Potential for increased research and development funding from government sources.
  3. Accelerated AI advancements for Google.

As the international community watches with concern, there are growing calls for legally binding regulations to ensure human oversight and prevent the development of fully autonomous weapons [3]. The Future of Life Institute has proposed a tiered system for treating military AI systems, similar to the oversight of nuclear facilities [3].

TheOutpost.ai

© 2025 Triveous Technologies Private Limited