Curated by THEOUTPOST
On Wed, 12 Feb, 8:13 AM UTC
3 Sources
[1]
Top Tech News: Climate Tech Funding & KSSL-L3Harris Tech Partnership
Google Drops AI Weapons Ban: A Shift in Ethics and National Security
Google has officially removed its longstanding pledge against developing AI for weapons and surveillance, marking a major policy shift with global ethical and security implications. In February 2025, Google executives Demis Hassabis and James Manyika announced the update, signaling a move toward "bold innovation" and "national security collaboration." The decision aligns Google with other tech giants expanding AI applications in defense, but it has also drawn criticism from human rights groups and ethics experts who warn of increased risks of autonomous weapons and mass surveillance.
[2]
Google Drops AI Weapons Ban: A Shift in Ethics and National Security
Google's AI Ethics Overhaul Sparks Debate on Military and Surveillance Use
In February 2025, Google executives Demis Hassabis and James Manyika made public their decision to repeal the pledge through a company blog post. The revised AI principles now focus on "bold innovation," "responsible development," and "collaborative progress." These updated guidelines suggest that AI should be developed to benefit humanity and support national security. Google's new direction reflects the growing global competition for AI leadership, with a focus on aligning AI development with the values of democratic nations.
[3]
Google has dropped its promise not to use AI for weapons. It's part of a troubling trend
Last week, Google quietly abandoned a long-standing commitment not to use artificial intelligence (AI) technology in weapons or surveillance. In an update to its AI principles, which were first published in 2018, the tech giant removed statements promising not to pursue AI applications for weapons or for surveillance. The update came after United States President Donald Trump revoked former President Joe Biden's executive order aimed at promoting safe, secure and trustworthy development and use of AI. The Google decision follows a recent trend of big tech entering the national security arena and accommodating more military applications of AI. So why is this happening now? And what will be the impact of more military use of AI?

The growing trend of militarized AI

In September, senior officials from the Biden government met with bosses of leading AI companies, such as OpenAI, to discuss AI development. The government then announced a taskforce to coordinate the development of data centers, while weighing economic, national security and environmental goals. The following month, the Biden government published a memo that in part dealt with "harnessing AI to fulfill national security objectives."

Big tech companies quickly heeded the message. In November 2024, tech giant Meta announced it would make its "Llama" AI models available to government agencies and private companies involved in defense and national security. This was despite Meta's own policy prohibiting the use of Llama for "[m]ilitary, warfare, nuclear industries or applications." Around the same time, AI company Anthropic announced it was teaming up with data analytics firm Palantir and Amazon Web Services to provide US intelligence and defense agencies with access to its AI models. The following month, OpenAI announced it had partnered with defense startup Anduril Industries to develop AI for the US Department of Defense. The companies claim they will combine OpenAI's GPT-4o and o1 models with Anduril's systems and software to improve the US military's defenses against drone attacks.

Defending national security

The three companies defended the changes to their policies on the basis of US national security interests. Take Google. In a blog post published earlier this month, the company cited global AI competition, complex geopolitical landscapes and national security interests as reasons for changing its AI principles. In October 2022, the US issued export controls restricting China's access to particular kinds of high-end computer chips used for AI research. In response, China issued its own export control measures on high-tech metals, which are crucial for the AI chip industry. The tensions from this trade war escalated in recent weeks following the release of highly efficient AI models by Chinese tech company DeepSeek. DeepSeek purchased 10,000 Nvidia A100 chips prior to the US export control measures and allegedly used these to develop its AI models. It has not been made clear how the militarization of commercial AI would protect US national interests, but there are clear indications that tensions with the US's biggest geopolitical rival, China, are influencing the decisions being made.

A large toll on human life

What is already clear is that the use of AI in military contexts has a demonstrated toll on human life. For example, in the war in Gaza, the Israeli military has been relying heavily on advanced AI tools. These tools require huge volumes of data and greater computing and storage services, which are being provided by Microsoft and Google.
These AI tools are used to identify potential targets but are often inaccurate. Israeli soldiers have said these inaccuracies have accelerated the death toll in the war, which is now more than 61,000, according to authorities in Gaza. Google's removal of the "harm" clause from its AI principles contravenes international human rights law, which identifies "security of person" as a key measure. It is concerning to consider why a commercial tech company would need to remove a clause about harm.

Avoiding the risks of AI-enabled warfare

In its updated principles, Google does say its products will still align with "widely accepted principles of international law and human rights." Despite this, Human Rights Watch has criticized the removal of the more explicit statements regarding weapons development from the original principles. The organization also points out that Google has not explained exactly how its products will align with human rights. This is something Joe Biden's revoked executive order on AI was also concerned with. Biden's initiative wasn't perfect, but it was a step towards establishing guardrails for the responsible development and use of AI technologies. Such guardrails are needed now more than ever as big tech becomes more enmeshed with military organizations, and the risks that come with AI-enabled warfare and breaches of human rights increase.
Google has removed its longstanding pledge against developing AI for weapons and surveillance, signaling a major policy shift with global implications for ethics and national security.
In a significant policy change, Google has officially removed its longstanding pledge against developing artificial intelligence (AI) for weapons and surveillance. This decision, announced in February 2025 by Google executives Demis Hassabis and James Manyika, marks a major shift in the company's ethical stance and aligns it with other tech giants expanding into defense-related AI applications [1][2].
Google's revised AI principles now focus on "bold innovation," "responsible development," and "collaborative progress." The updated guidelines suggest that AI should be developed to benefit humanity and support national security, reflecting the growing global competition for AI leadership [2]. This change comes in the wake of U.S. President Donald Trump revoking former President Joe Biden's executive order aimed at promoting safe, secure, and trustworthy development and use of AI [3].
Google's decision is part of a broader trend of big tech companies entering the national security arena: Meta has made its Llama AI models available to US government agencies and private companies involved in defense, Anthropic has teamed up with Palantir and Amazon Web Services to provide US intelligence and defense agencies with access to its models, and OpenAI has partnered with defense startup Anduril Industries to develop AI for the US Department of Defense [3].
Google cited global AI competition, complex geopolitical landscapes, and national security interests as reasons for changing its AI principles [3]. However, the decision has drawn criticism from human rights groups and ethics experts who warn of increased risks of autonomous weapons and mass surveillance [1][2].
Human Rights Watch has criticized the removal of explicit statements regarding weapons development from Google's original principles and points out that the company has not explained exactly how its products will align with human rights [3].
The militarization of commercial AI raises several concerns: AI targeting tools have already exacted a demonstrated toll on human life in conflicts such as the war in Gaza, and the removal of explicit commitments heightens the risks of autonomous weapons and mass surveillance [1][2][3].
As big tech becomes more enmeshed with military organizations, the need for guardrails for the responsible development and use of AI technologies becomes increasingly critical [3].
Google has quietly removed its commitment not to use AI for weapons or surveillance, signaling a shift towards potential military applications amidst growing competition and national security concerns.
40 Sources
Leading AI companies like Anthropic, Meta, and OpenAI are changing their policies to allow military use of their technologies, marking a significant shift in the tech industry's relationship with defense and intelligence agencies.
2 Sources
OpenAI, the creator of ChatGPT, has entered into a partnership with defense technology company Anduril Industries to develop AI solutions for military applications, raising concerns among employees and industry observers about the ethical implications of AI in warfare.
29 Sources
Lockheed Martin and Google Cloud announce a strategic partnership to integrate advanced generative AI into Lockheed's AI Factory ecosystem, aiming to enhance AI-driven capabilities in national security, aerospace, and scientific applications.
5 Sources
Over 100 Google DeepMind employees have signed an open letter urging the company to cease its involvement in military contracts, particularly those with Israel. The move highlights growing concerns about AI's role in warfare and surveillance.
6 Sources