Curated by THEOUTPOST
On Wed, 5 Feb, 12:09 AM UTC
40 Sources
[1]
Google has dropped its promise not to use AI for weapons. It's part of a troubling trend
Last week, Google quietly abandoned a long-standing commitment not to use artificial intelligence (AI) technology in weapons or surveillance. In an update to its AI principles, which were first published in 2018, the tech giant removed statements promising not to pursue: technologies that cause or are likely to cause overall harm; weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance violating internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.

The update came after United States President Donald Trump revoked former President Joe Biden's executive order aimed at promoting safe, secure and trustworthy development and use of AI. The Google decision follows a recent trend of big tech entering the national security arena and accommodating more military applications of AI. So why is this happening now? And what will be the impact of more military use of AI?

The growing trend of militarised AI

In September, senior officials from the Biden government met with bosses of leading AI companies, such as OpenAI, to discuss AI development. The government then announced a taskforce to coordinate the development of data centres, while weighing economic, national security and environmental goals. The following month, the Biden government published a memo that in part dealt with "harnessing AI to fulfil national security objectives".

Big tech companies quickly heeded the message. In November 2024, tech giant Meta announced it would make its "Llama" AI models available to government agencies and private companies involved in defence and national security. This was despite Meta's own policy, which prohibits the use of Llama for "[m]ilitary, warfare, nuclear industries or applications". Around the same time, AI company Anthropic also announced it was teaming up with data analytics firm Palantir and Amazon Web Services to provide US intelligence and defence agencies access to its AI models. The following month, OpenAI announced it had partnered with defence startup Anduril Industries to develop AI for the US Department of Defence. The companies claim they will combine OpenAI's GPT-4o and o1 models with Anduril's systems and software to improve the US military's defences against drone attacks.

Defending national security

The three companies defended the changes to their policies on the basis of US national security interests. Take Google. In a blog post published earlier this month, the company cited global AI competition, complex geopolitical landscapes and national security interests as reasons for changing its AI principles.

In October 2022, the US issued export controls restricting China's access to particular kinds of high-end computer chips used for AI research. In response, China issued its own export control measures on high-tech metals, which are crucial for the AI chip industry. The tensions from this trade war escalated in recent weeks thanks to the release of highly efficient AI models by Chinese tech company DeepSeek. DeepSeek purchased 10,000 Nvidia A100 chips prior to the US export control measures and allegedly used these to develop its AI models.

It has not been made clear how the militarisation of commercial AI would protect US national interests. But there are clear indications tensions with the US's biggest geopolitical rival, China, are influencing the decisions being made.

A large toll on human life

What is already clear is that the use of AI in military contexts has a demonstrated toll on human life. For example, in the war in Gaza, the Israeli military has been relying heavily on advanced AI tools. These tools require huge volumes of data and greater computing and storage services, which are being provided by Microsoft and Google.
These AI tools are used to identify potential targets but are often inaccurate. Israeli soldiers have said these inaccuracies have accelerated the death toll in the war, which is now more than 61,000, according to authorities in Gaza. Google removing the "harm" clause from its AI principles contravenes international human rights law, which identifies "security of person" as a key measure. It is concerning to consider why a commercial tech company would need to remove a clause around harm.

Avoiding the risks of AI-enabled warfare

In its updated principles, Google does say its products will still align with "widely accepted principles of international law and human rights". Despite this, Human Rights Watch has criticised the removal of the more explicit statements regarding weapons development in the original principles. The organisation also points out that Google has not explained exactly how its products will align with human rights. This is something Joe Biden's revoked executive order about AI was also concerned with. Biden's initiative wasn't perfect, but it was a step towards establishing guardrails for responsible development and use of AI technologies. Such guardrails are needed now more than ever as big tech becomes more enmeshed with military organisations - and the risks that come with AI-enabled warfare and breaches of human rights increase.
[3]
Google has made a dangerous u-turn on military AI
Google has removed its commitment to refrain from using AI for weapons or surveillance. The change signals a shift in ethical stance amid growing pressure for military contracts. The move raises concerns about automated warfare, prompting calls for legally binding regulations to ensure human oversight and prevent the development of fully autonomous weapons.

Google's "Don't Be Evil" era is well and truly dead. Having replaced that motto in 2018 with the softer "Do the right thing," the leadership at parent company Alphabet Inc. has now rolled back one of the firm's most important ethical stances, on the use of its artificial intelligence by the military. This week, the company deleted its pledge not to use AI for weapons or surveillance, a promise that had been in place since 2018. Its "Responsible AI" principles no longer include the promise, and the company's AI chief, Demis Hassabis, published a blog post explaining the change, framing it as inevitable progress rather than any sort of compromise. "[AI] is becoming as pervasive as mobile phones," Hassabis wrote. It has "evolved rapidly." Yet the notion that ethical principles must also "evolve" with the market is wrong. Yes, we're living in an increasingly complex geopolitical landscape, as Hassabis describes it, but abandoning a code of ethics for war could yield consequences that spin out of control. Bring AI to the battlefield and you could get automated systems responding to one another at machine speed, with no time for diplomacy. Warfare could become more lethal, as conflicts escalate before humans have time to intervene. And the idea of "clean" automated combat could compel more military leaders toward action, even though AI systems make plenty of mistakes and could create civilian casualties too. Automated decision making is the real problem here. Unlike previous technology that made militaries more efficient or powerful, AI systems can fundamentally change who (or what) makes the decision to take human life. It's also troubling that Hassabis, of all people, has his name on Google's carefully worded justification. He sang a vastly different tune back in 2018, when the company established its AI principles, and joined more than 2,400 people in AI to put their names on a pledge not to work on autonomous weapons. Less than a decade later, that promise hasn't counted for much. William Fitzgerald, a former member of Google's policy team and co-founder of the Worker Agency, a policy and communications firm, says that Google had been under intense pressure for years to pick up military contracts. He recalled former US Deputy Defense Secretary Patrick Shanahan visiting the Sunnyvale, California, headquarters of Google's cloud business in 2017, while staff at the unit were building out the infrastructure necessary to work on top-secret military projects with the Pentagon. The hope for contracts was strong. Fitzgerald helped halt that. He co-organized company protests over Project Maven, a deal Google did with the Department of Defense to develop AI for analyzing drone footage, which Googlers feared could lead to automated targeting. Some 4,000 employees signed a petition that stated, "Google should not be in the business of war," and about a dozen resigned in protest. Google eventually relented and didn't renew the contract. Looking back, Fitzgerald sees that as a blip. "It was an anomaly in Silicon Valley's trajectory," he said. Since then, for instance, OpenAI has partnered with defense contractor Anduril Industries Inc. 
and is pitching its products to the US military. (Just last year, OpenAI had banned anyone from using its models for "weapons development.") Anthropic, which bills itself as a safety-first AI lab, also partnered with Palantir Technologies Inc. in November 2024 to sell its AI service Claude to defense contractors. Google itself has spent years struggling to create proper oversight for its work. It dissolved a controversial ethics board in 2019, then fired two of its most prominent AI ethics directors a year later. The company has strayed so far from its original objectives it can't see them anymore. So too have its Silicon Valley peers, who never should have been left to regulate themselves. Still, with any luck, Google's U-turn will put greater pressure on government leaders next week to create legally binding regulations for military AI development, before the race dynamics and political pressure make them more difficult to set up. The rules can be simple. Make it mandatory to have a human overseeing all AI military systems. Ban any fully autonomous weapons that can select targets without human approval first. And make sure such AI systems can be audited. One reasonable policy proposal comes from the Future of Life Institute, a think tank once funded by Elon Musk and currently steered by Massachusetts Institute of Technology physicist Max Tegmark. It is calling for a tiered system whereby national authorities treat military AI systems like nuclear facilities, requiring unambiguous evidence of their safety margins. Governments convening in Paris should also consider establishing an international body to enforce those safety standards, similar to the International Atomic Energy Agency's oversight of nuclear technology. They should be able to impose sanctions on companies (and countries) that violate those standards. Google's reversal is a warning. Even the strongest corporate values can crumble under the pressure of an ultra-hot market and an administration that you simply don't say "no" to. The don't-be-evil era of self-regulation is over, but there's still a chance to put binding rules in place to stave off AI's darkest risks. And automated warfare is surely one of them.
[4]
From ethics to war: the shift in Google's approach to AI development - Softonic
Google has made a significant shift in its policy regarding artificial intelligence (AI) development, removing its previous commitment not to engage in the creation of dangerous technologies, including weapons. The change breaks with previous versions of its AI principles, in which the company had promised that it would not develop technology whose main purpose was to cause harm to people or to facilitate surveillance that violated internationally accepted standards. In 2018, Google declined to renew its contract with the government for 'Project Maven', which focused on drone surveillance analysis, and did not participate in a cloud contract related to the Pentagon, citing ethical concerns. In 2022, however, Google's involvement in 'Project Nimbus' raised concerns that the technology could facilitate human rights violations, leading to internal tensions and protests among the company's employees. The CEO of Google DeepMind, Demis Hassabis, has stated that democracies must lead the development of AI amid fears of an arms race. His statement comes as concerns grow that AI technology could be used in warfare, for example in the creation of super soldiers, raising the prospect of a possible global confrontation, especially between the United States and China. As the United States government seeks to partner more with tech giants to enhance its military capabilities, there are questions about whether technology should focus on improving the world or on maximizing profits through government contracts. As Google seeks new opportunities in this field, the international community watches the ethical and social implications of these developments with concern.
[5]
Google abandons 'do no harm' AI stance, opens door to military weapons
A hot potato: Google has come a long way since its early days when "Don't be evil" was its guiding principle. That shift has been duly noted before for various reasons. In its latest departure from its original ethos, the company has quietly removed a key passage from its AI principles that previously committed to avoiding the use of AI in potentially harmful applications, including weapons. This change, first noticed by Bloomberg, marks a shift from the company's earlier stance on responsible AI development. The now-deleted section titled "AI applications we will not pursue" had explicitly stated that Google would refrain from developing technologies "that cause or are likely to cause overall harm," with weapons being a specific example. In response to inquiries about the change, Google pointed to a blog post published by James Manyika, a senior vice president at Google, and Demis Hassabis, who leads Google DeepMind. The post said that democracies should lead AI development, guided by core values such as freedom, equality, and respect for human rights. It also called for collaboration among companies, governments, and organizations sharing these values to create AI that protects people, promotes global growth, and supports national security. This shift in Google's AI principles has not gone unnoticed by experts in the field. Margaret Mitchell, former co-lead of Google's ethical AI team and current chief ethics scientist at Hugging Face, told Bloomberg she was concerned about the implications of removing the "harm" clause. "[It] means Google will probably now work on deploying technology directly that can kill people," she said. Google's revision of its AI principles is part of a larger trend among tech giants to abandon previously held ethical positions. Companies like Meta Platforms and Amazon have recently scaled back their diversity and inclusion efforts, citing outdated or shifting priorities. Moreover, Meta announced last month that it was ending its third-party fact-checking program in the U.S. Even though Google maintained until very recently that its AI was not used to harm humans, the company has been gradually moving towards increased collaboration with military entities. Recent years have seen the company providing cloud services to the U.S. and Israeli militaries, decisions that have sparked internal protests from employees. Google surely expects to receive backlash for its latest position. More than likely, it has concluded the benefits of its revised stance outweigh the negatives. The tech giant can now compete more directly with rivals already involved in military AI projects, for starters. Also, the shift could lead to increased research and development funding from government sources, potentially accelerating Google's AI advancements.
[6]
Remember when Google said no to military AI: Not anymore
Google updated its ethical guidelines on Tuesday, removing its commitments not to apply artificial intelligence (AI) technology to weapons or surveillance. The previous version of the company's AI principles included prohibitions on pursuing technologies likely to cause overall harm, including those used for weaponry and surveillance. In a blog post by Demis Hassabis, Google's head of AI, and James Manyika, senior vice president for technology and society, the executives explained that growth in AI technology necessitated adjustments to the principles. They emphasized the importance of democratic countries leading in AI development and the need to serve government and national security clients. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," they stated. Additionally, they argued for collaboration among companies and governments sharing these values to create AI that protects people and supports national security. The revised AI principles now include elements of human oversight and feedback mechanisms to ensure compliance with international law and human rights standards. They also highlight a commitment to testing technology to mitigate unintended harmful effects. This change contrasts with Google's earlier stance, which had made it an outlier among major AI developers. Companies such as OpenAI and Anthropic have established partnerships with military firms to participate in defense technology, while Google had previously opted out of similar endeavors due to employee protests. Google initially introduced its AI principles in 2018 after staff opposition regarding its involvement in the Pentagon's Project Maven. In a broader context, the relationship between the U.S. technology sector and the Department of Defense is tightening, with AI increasingly important to military operations. Michael Horowitz, a political science professor at the University of Pennsylvania, remarked that it is sensible for Google to update its policies to reflect this evolving landscape. Concurrently, Lilly Irani, a professor at the University of California, San Diego, suggested that the executive statements reflect ongoing patterns that have emerged over time. Former Google CEO Eric Schmidt has previously expressed concerns over the United States being outpaced in AI by China, highlighting the geopolitical dynamics around AI development. The recent update comes amid tensions, as the U.S.-China technological rivalry intensifies. Following the announcement of new tariffs from the Trump administration, Beijing initiated an antitrust inquiry into Google. Internal protests over Google's cloud computing contracts with Israel have raised ethical concerns among employees, who cite potential implications for the Palestinian populace. Reports reveal that Google granted the Israeli Defense Ministry increased access to its AI tools following the attacks by Hamas on October 7, 2023.
[7]
Google Quietly Walks Back Promise Not To Use AI for Weapons or Harm
As whispers of AI hype filled the air in 2018, it seemed almost inevitable that we would soon be facing a whole new world, full of near-human robots and cybernetic dogs. But with that came a host of questions: how would it all change our jobs, how might we protect ourselves from an AI takeover, and more broadly, how could AI be designed for good instead of evil? Facing those questions and an uncertain future, Google affirmed its commitment to ethical tech development in a statement on its AI principles, including commitments not to use its AI in ways "likely to cause overall harm," like in weapons or surveillance tech. Fast forward seven years, and those commitments have been quietly scrubbed from Google's AI principles page. The move has drawn a host of criticism over the change's ominous undertones. "Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google," former head of Google's ethical AI team Margaret Mitchell told Bloomberg, which broke the story. "More problematically it means Google will probably now work on deploying technology directly that can kill people." Google isn't the first AI company to retract its commitment not to make killbots. Last summer, OpenAI likewise deleted its pledge not to use AI for "military and warfare," as reported by The Intercept at the time. Though it hasn't announced any Terminator factories -- yet -- Google said in a statement yesterday that "companies, governments, and organizations... should work together to create AI that protects people, promotes global growth, and supports national security." Read: we can do whatever we want. Deal with it. And while the company's news is troubling, it's drawing on a long history of dubious profiteering. After all, Google was the first major tech company to recognize the value of surveillance through data. "Google is to surveillance capitalism what the Ford Motor Company and General Motors were to mass-production-based managerial capitalism," wrote acclaimed tech critic Shoshana Zuboff in 2019. "In our time Google became the pioneer, discoverer, elaborator, experimenter, lead practitioner, role model, and diffusion hub of surveillance capitalism." As far back as the early 2000s, Google was exploring the value of personal browsing data -- a leering asset sometimes known as "digital exhaust" -- which it realized contained predictive information about individual users as they traveled across the web. Soon, pressured by the Dot-com collapse and the need to generate revenue, Google leaned into that tech as it built the dominant tracking and advertising apparatus of our time. The revelation that user data could translate into cold hard cash spun off into a host of data-driven products like hyper-targeted ads, predictive algorithms, personal assistants, and smart homes, all of which propelled Google into the market giant it is today. Now, the past feels like prelude. As tech companies like Google dump untold billions into developing AI, the race is on to generate revenue for impatient investors. It's no wonder that unscrupulous AI profit models are now on the table -- after all, they're the supposed new backbone of the company.
[8]
Google abandons policy on using AI as a weapon and for surveillance. What it means
Google updated its principles surrounding artificial intelligence (AI) on Tuesday, removing previously stated commitments to avoid using the technology for weapons development or surveillance purposes. The revised principles were unveiled just weeks after Google CEO Sundar Pichai and other prominent tech leaders attended the inauguration of U.S. President Donald Trump. The changes were outlined in a blog post, where Google detailed its belief that democracies should take the lead in AI development, adhering to core values like freedom, equality, and respect for human rights. The updated principles, shared by Google DeepMind chief Demis Hassabis and research labs senior vice president James Manyika, also included a call for collaboration among companies, governments, and organizations that share these values to ensure that AI serves to protect people, promote global growth, and support national security. As artificial intelligence (AI) continues to grow in influence, experts and professionals remain divided over how best to govern its development and use. Central to the debate is how much commercial interests should shape AI's trajectory and how to safeguard humanity from its potential risks. The conversation is particularly heated regarding AI's deployment on the battlefield and in surveillance technologies, where concerns about ethics and security are especially pronounced. Google, a major player in the AI space, is heavily investing in infrastructure to support the development of AI, as well as AI-powered applications such as AI-enhanced search tools. One of the most prominent features in Google's AI push is its platform Gemini, which now prominently appears in Google search results, providing AI-generated summaries, and is integrated into devices like Google Pixel phones. This expansion into AI comes amid ongoing discussions about the ethical implications of such technologies. Google's approach to corporate responsibility has evolved over time. Originally, co-founders Sergei Brin and Larry Page established the company's motto as "don't be evil," reflecting a commitment to ethical practices. However, following the company's restructuring under the parent entity Alphabet Inc. in 2015, the motto shifted to "Do the right thing," marking a more flexible stance. Despite this shift, tensions between Google's executives and its employees have arisen, particularly over the ethical considerations of AI development. A notable example occurred in 2018 when the company decided not to renew a contract for AI-related work with the U.S. Department of Defense. The decision followed widespread employee protests, including resignations and a petition signed by thousands, who expressed concerns over "Project Maven." Employees feared that the project was a precursor to using AI for lethal military purposes. The revised principles signify a departure from the company's previous position. Google CEO Sundar Pichai had promised in 2018 that the tech giant would not design or deploy AI technologies for weapons systems aimed at harming people or for surveillance purposes violating internationally accepted norms. These specific pledges were notably absent in the latest update. This revision comes as the broader landscape for AI development shifts in the U.S. Following President Trump's inauguration, the new administration quickly rolled back an executive order issued by former President Joe Biden, which had mandated safety practices for AI development. 
With this change, companies vying for leadership in the rapidly growing AI field now face fewer obligations, including no longer being required to disclose test results indicating serious risks that AI could pose to national security or citizens. Google further highlighted its commitment to transparency by noting that it publishes an annual report detailing its AI progress and work. Hassabis and Manyika acknowledged the global competition for AI leadership in an increasingly complex geopolitical environment, noting that billions of people around the world are already using AI in their daily lives. (With inputs from AFP)
[9]
Google Revises AI Ethics, No Longer Rules Out Military and Surveillance Use
Instead, Google increasingly emphasizes the need to develop AI for national security.

In a recent update to its "AI Principles," Google has watered down language meant to prevent its tech from being used to cause harm. The changes are part of a wider repositioning on the topic of AI safety that has seen Google quietly legitimize the use of AI for "national security," paving the way for previously off-limits use cases including weapons and surveillance systems.

Defining AI Harm

First codified in 2018, Google's AI Principles describe the firm's approach to the responsible development of artificial intelligence and outline how it intends to prevent harm. Until the latest update, the framework listed four specific applications the company wouldn't pursue: technologies that cause or are likely to cause overall harm; weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance violating internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights. In the updated version, however, almost all references to potentially harmful AI applications are gone. In their place, a single line commits Google to "mitigate unintended or harmful outcomes and avoid unfair bias."

AI For National Security

While mentions of harm prevention are conspicuously absent from Google's latest AI Principles, references to national security are increasingly common in the company's literature. Discussing the updated principles, Google DeepMind CEO Demis Hassabis and Senior Vice President for Research James Manyika embraced the current political mood of America. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," they argued. "We believe that companies, governments, and organizations [...] should work together to create AI that [...] supports national security," they added. Alphabet's President of Global Affairs Kent Walker put things even more bluntly in a recent blog post arguing that "to protect national security, America must secure the digital high ground." Adopting a radically different tone from the Google of the past, Walker called for the government, "including the military and intelligence community," to take a leading role in the procurement and deployment of AI.

Google and the Military: A Rocky Relationship

Google's latest bid to cozy up to the American defense establishment marks a dramatic turnaround for the company. Back in 2018, thousands of employees signed a letter expressing outrage at plans to develop image recognition technology for the Pentagon. Despite assurances from the firm's senior management that the technology wouldn't be used to operate drones or launch weapons, Googlers widely rejected the program on moral grounds. "We believe that Google should not be in the business of war," the letter stated. "Building this technology to assist the US Government in military surveillance - and potentially lethal outcomes - is not acceptable." The 2018 employee revolt was one of the factors that inspired Google to create its AI Principles in the first place. With a strong commitment not to weaponize the technology, the company signed a modified contract with the Pentagon three years later. Google's latest wave of militarization occurs in a markedly different political climate. Between the threat of layoffs, the rise of MAGA, and Silicon Valley's broader shift to the right, the employee activism of yesteryear has largely subsided. Without that barrier, the defense and law enforcement sectors could provide many lucrative opportunities for Google's AI business.
[10]
Google drops AI weapons ban -- what it means for the future of artificial intelligence
Google has removed its long-standing prohibition against using artificial intelligence for weapons and surveillance systems, marking a significant shift in the company's ethical stance on AI development that former employees and industry experts say could reshape how Silicon Valley approaches AI safety. The change, quietly implemented this week, eliminates key portions of Google's AI Principles that explicitly banned the company from developing AI for weapons or surveillance. These principles, established in 2018, had served as an industry benchmark for responsible AI development. "The last bastion is gone. It's no holds barred," said Tracy Pizzo Frey, who spent five years implementing Google's original AI principles as Senior Director of Outbound Product Management, Engagements and Responsible AI at Google Cloud, in a Bluesky post. "Google really stood alone in this level of clarity about its commitments for what it would build." The revised principles remove four specific prohibitions: technologies likely to cause overall harm, weapons applications, surveillance systems, and technologies that violate international law and human rights. Instead, Google now says it will "mitigate unintended or harmful outcomes" and align with "widely accepted principles of international law and human rights."

Google loosens AI ethics: What this means for military and surveillance tech

This shift comes at a particularly sensitive moment, as artificial intelligence capabilities advance rapidly and debates intensify about appropriate guardrails for the technology. The timing has raised questions about Google's motivations, though the company maintains these changes have been long in development. "We're in a state where there's not much trust in big tech, and every move that even appears to remove guardrails creates more distrust," Pizzo Frey said in an interview with VentureBeat. She emphasized that clear ethical boundaries had been crucial for building trustworthy AI systems during her tenure at Google. The original principles emerged in 2018 amid employee protests over Project Maven, a Pentagon contract involving AI for drone footage analysis. While Google eventually declined to renew that contract, the new changes could signal openness to similar military partnerships. The revision maintains some elements of Google's previous ethical framework but shifts from prohibiting specific applications to emphasizing risk management. This approach aligns more closely with industry standards like the NIST AI Risk Management Framework, though critics argue it provides fewer concrete restrictions on potentially harmful applications. "Even if the rigor is not the same, ethical considerations are no less important to creating good AI," Pizzo Frey noted, highlighting how ethical considerations improve AI products' effectiveness and accessibility.

From Project Maven to policy shift: The road to Google's AI ethics overhaul

Industry observers say this policy change could influence how other technology companies approach AI ethics. Google's original principles had set a precedent for corporate self-regulation in AI development, with many enterprises looking to Google for guidance on responsible AI implementation. The modification of Google's AI principles reflects broader tensions in the tech industry between rapid innovation and ethical constraints. 
As competition in AI development intensifies, companies face pressure to balance responsible development with market demands. "I worry about how fast things are getting out there into the world, and if more and more guardrails are removed," Pizzo Frey said, expressing concern about the competitive pressure to release AI products quickly without sufficient evaluation of potential consequences.

Big tech's ethical dilemma: Will Google's AI policy shift set a new industry standard?

The revision also raises questions about internal decision-making processes at Google and how employees might navigate ethical considerations without explicit prohibitions. During her time at Google, Pizzo Frey had established review processes that brought together diverse perspectives to evaluate AI applications' potential impacts. While Google maintains its commitment to responsible AI development, the removal of specific prohibitions marks a significant departure from its previous leadership role in establishing clear ethical boundaries for AI applications. As artificial intelligence continues to advance, the industry watches to see how this shift might influence the broader landscape of AI development and regulation.
[11]
Google puts military use of AI back on the table
On February 4, Google updated its "AI principles," a document detailing how the company would and wouldn't use artificial intelligence in its products and services. The old version was split into two sections: "Objectives for AI applications" and "AI applications we will not pursue," and it explicitly promised not to develop AI weapons or surveillance tools. The update was first noticed by The Washington Post, and the most glaring difference is the complete disappearance of any "AI applications we will not pursue" section. In fact, the language of the document now focuses solely on "what Google will do," with no promises at all about "what Google won't do." Why is this significant? Well, if you say you won't pursue AI weapons, then you can't pursue AI weapons. It's pretty cut and dry. However, if you say you will employ "rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias," then you can pursue whatever you want and just argue that you employed rigorous safeguards. Similarly, when Google says it will implement "appropriate human oversight," there's no way for us to know what that means. Google is the one who decides exactly what appropriate human oversight is. This is a problem because it means the company isn't actually making any promises or giving us any solid information. It's just opening things up so it can move around more freely -- while still trying to give the impression of social responsibility. Google's involvement in the U.S. Department of Defense's Project Maven in 2017 and 2018 is what led to the original AI principles document. Thousands of its employees protested the military project, and in response, Google did not renew the agreement and promised to stop pursuing AI weapons. However, fast-forward a few years and most of Google's competitors are engaging in these kinds of projects, with Meta, OpenAI, and Amazon all allowing some military use of their AI tech. With the increased flexibility of its updated AI principles, Google is effectively free to get back in the game and make some military money. It will be interesting to see if Google's employees will have anything to say about this in the near future.
[12]
Google now thinks it's OK to use AI for weapons and surveillance
Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges it had made promising it would not "design or deploy" AI tools for use in any weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document. Instead, there's now a section titled "responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." That's a far broader commitment than the specific ones the company made as recently as the end of last month when the prior version of its AI principles was still live on its website. For instance, as it relates to weapons, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop tech that violates "internationally accepted norms." When asked for comment, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, say AI's emergence as a "general-purpose technology" necessitated a policy change. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the two wrote. "... Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights -- always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks." When Google first published its AI principles in 2018, it did so in the aftermath of Project Maven. It was a controversial government contract that, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Dozens of Google employees quit the company in protest of the contract, with thousands more signing a petition in opposition. When Google eventually published its new guidelines, CEO Sundar Pichai reportedly told staff his hope was they would stand "the test of time." By 2021, however, Google began pursuing military contracts again, with what was reportedly an "aggressive" bid for the Pentagon's Joint Warfighting Cloud Capability cloud contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Defense Ministry to expand the government's use of AI tools.
[13]
Google spikes its explicit 'no AI for weapons' policy
Will now happily unleash the bots when 'likely overall benefits substantially outweigh the foreseeable risks'

Google has published a new set of AI principles that don't mention its previous pledge not to use the tech to develop weapons or surveillance tools that violate international norms. The Chocolate Factory's original AI principles, outlined by CEO Sundar Pichai in mid-2018, included a section on "AI applications we will not pursue." At the top of the list was a commitment not to design or deploy AI for "technologies that cause or are likely to cause overall harm" and a promise to weigh risks so that Google would "proceed only where we believe that the benefits substantially outweigh the risks". Other AI applications Google vowed to steer clear of that year included weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people, technologies that gather or use information for surveillance violating internationally accepted norms, and technologies whose purpose contravenes widely accepted principles of international law and human rights.

Those principles were published two months after some 3,000 Googlers signed a petition opposing the web giant's involvement in a Pentagon program called Project Maven that used Google's AI to analyze drone footage. The same month Pichai published Google's AI principles post, the search and ads giant decided not to renew its contract for Project Maven after its expiry in 2019. In December 2018 the Chrome maker challenged other tech firms building AI to follow its lead and develop responsible tech that "avoids abuse and harmful outcomes."

On Tuesday this week, Pichai's 2018 blog post added a notice advising readers that, as of February 4, 2025, Google has "made updates to our AI Principles" that can be found at AI.Google. The Chocolate Factory's updated AI principles center on three things: Bold innovation; responsible development and deployment; and collaborative process. These updated principles don't mention applications Google won't work on nor pledges to not use AI for harmful purposes or weapons development. They do state that Google will develop and deploy AI models and apps "where the likely overall benefits substantially outweigh the foreseeable risks." There's also a promise to always use "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights," plus a pledge to invest in "industry-leading approaches to advance safety and security research and benchmarks, pioneering technical solutions to address risks, and sharing our learnings with the ecosystem." The Big G has also promised "rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias" along with "promoting privacy and security, and respecting intellectual property rights." A section of the new principles offers an example of how they will operate in practice.

Also on Tuesday, Google published its annual Responsible AI Progress Report, which addresses the current AI arms race. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," said James Manyika, SVP for research, labs, technology and society, and Demis Hassabis, co-founder and CEO of Google DeepMind. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the Google execs continued. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." 
Google will continue to pursue "AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights," the duo added. Google did not immediately respond to The Register's inquiries, including if there are any AI applications it won't pursue under the updated AI principles, why it removed the weapons and surveillance mentions from its banned uses, and if it has any specific policy or guidelines around how its AI can be used for these previously not-OK purposes. We will update this article if and when we hear back from the Chocolate Factory. Meanwhile, Google's rivals happily provide machine-learning models and IT services to the United States military and government, at least. Microsoft has argued America's armed forces deserve the best tools, which in the Windows giant's mind is its technology. OpenAI, Amazon, IBM, Oracle, and Anthropic work with Uncle Sam on various projects. Even Google these days. The internet titan is just less squeamish about it. ®
[14]
Google removes weapons development, surveillance pledges from AI ethics policy
Google has updated its ethical policies on artificial intelligence, eliminating a pledge to not use AI technology for weapons development and surveillance. According to a now-archived version of Google's AI principles seen on the Wayback Machine, the section titled "Applications we will not pursue" previously included weapons and other technology aimed at injuring people, along with technologies that "gather or use information for surveillance." As of Tuesday, the section was no longer listed on Google's AI principles page. The Hill reached out to Google for comment. In a blog post Tuesday, Google's head of AI, Demis Hassabis, and senior vice president for technology and society James Manyika explained that the company's experience and research over the years, along with guidance from other AI firms, "have deepened our understanding of AI's potential and risks." "Since we first published our AI principles in 2018, the technology has evolved rapidly," Manyika and Hassabis wrote, adding, "It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers." Google said in the blog post that it will continue to "stay consistent with widely accepted principles of international law and human rights," and evaluate whether the benefits "substantially outweigh potential risks." The new policy language also pledged to identify and assess AI risks through research, expert opinion and "red teaming," during which a company tests its cybersecurity effectiveness by conducting a simulated attack. The AI race has ramped up among domestic and international companies in recent years as Google and other leading tech firms increase their investments into the emerging technology. As Washington increasingly embraces the use of AI, some policymakers have expressed concerns the technology could be used for harm when in the hands of bad actors. The federal government is still trying to harness the benefits of its use, even in the military. The Defense Department announced late last year a new office focused on accelerating the adoption of AI technology so the military can deploy autonomous weapons in the near future.
[15]
Google deletes policy against using AI for weapons or surveillance
Google has quietly deleted its pledge not to use AI for weapons or surveillance, a promise that had been in place since 2018. First spotted by Bloomberg, Google has updated its AI Principles to remove an entire section on artificial intelligence applications it pledged not to pursue. Significantly, Google's policy had previously stated that it would not design nor deploy AI technology for use in weapons, or in surveillance technology which violates "internationally accepted norms." Now it seems that such use cases might not be entirely off the table. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," read Google's blog post on Tuesday. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." While Google's post did concern its AI Principles update, it did not explicitly mention the deletion of its prohibition on AI weapons or surveillance. When reached for comment, a Google spokesperson directed Mashable back to the blog post. "[W]e're updating the principles for a number of reasons, including the massive changes in AI technology over the years and the ubiquity of the technology, the development of AI principles and frameworks by global governing bodies, and the evolving geopolitical landscape," said the spokesperson. Google first published its AI Principles in 2018, following significant employee protests against its work with the U.S. Department of Defense. (The company had already infamously removed "don't be evil" from its Code of Conduct that same year.) Project Maven aimed to use AI to improve weapon targeting systems, interpreting video information to increase military drones' accuracy. In an open letter that April, thousands of employees expressed a belief that "Google should not be in the business of war," and requested that the company "draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology." The company's AI Principles were the result, with Google ultimately not renewing its contract with the Pentagon in 2019. However, it looks as though the tech giant's attitude toward AI weapons technology may now be changing. Google's new attitude toward AI weapons could be an effort to keep up with competitors. Last January, OpenAI amended its own policy to remove a ban on "activity that has high risk of physical harm," including "weapons development" and "military and warfare." In a statement to Mashable at the time, an OpenAI spokesperson clarified that this change was to provide clarity concerning "national security use cases." "It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies," said the spokesperson. Opening up the possibility of weaponised AI isn't the only change Google made to its AI Principles. As of Jan. 30, Google's policy listed seven core objectives for AI applications: "be socially beneficial," "avoid creating or reinforcing unfair bias," "be built and tested for safety," "be accountable to people," "incorporate privacy design principles," "uphold high standards of scientific excellence," and "be made available for uses that accord with these principles." 
Now Google's revised policy has consolidated this list to just three principles, merely stating that its approach to AI is grounded in "bold innovation," "responsible development and deployment," and "collaborative process, together." The company does specify that this includes adhering to "widely accepted principles of international law and human rights." Still, any mention of weapons or surveillance is now conspicuously absent.
[16]
Google Takes a U-Turn, Approves Using AI for Weapons and Surveillance
Google has revised its AI policies. In this era of massive AI advancements, it is hard to survive without reevaluating AI policies, but what Google did surprised tech enthusiasts. Surprisingly, it has stepped back from its commitments and allowed AI usage for weapons and surveillance, which was strictly prohibited previously. It was 2018 when Google CEO Sundar Pichai promised Google users that the tech behemoth wouldn't design or deploy AI technologies that violate global norms regarding weapons and surveillance. That decision followed the protests against Project Maven, a program that used AI for drone footage analysis. Google employees protested against the project, and eventually the backlash became so intense that the tech giant ended its involvement. Following this backlash, the company published its AI principles, which listed the purposes the company wouldn't allow to be achieved using AI. That list included "Technologies that cause or are likely to cause overall harm," "Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," "Technologies that gather or use information for surveillance violating internationally accepted norms," and "Technologies whose purpose contravenes widely accepted principles of international law and human rights." However, on February 4, 2025, these norms received a major update. Walking back its previous commitments, Google now talks about working with governments and organizations to protect people and strengthen national security using AI. Google argues that in this era of rapid AI progress, it is important for the company to step forward to support defense and security. To make its case, Google points to its Frontier Safety Framework, in which it has listed rules intended to prevent the misuse of AI.
[17]
Google Lifts Self-Imposed Ban on Using AI for Weapons and Surveillance
Google dropped a pledge not to use artificial intelligence for weapons and surveillance systems on Tuesday. And it's just the latest sign that Big Tech is no longer concerned with the potential blowback that can come when consumer-facing tech companies get big, lucrative contracts to develop police surveillance tools and weapons of war. Google came under serious pressure back in 2018 after it was revealed the company had a contract with the U.S. Department of Defense for something called Project Maven, which used AI for drone imaging. Shortly after that, Google released a statement laying out "our principles," which included a pledge to not allow its AI to be used for technologies that "cause or are likely to cause overall harm," weapons, surveillance, and anything that, "contravenes widely accepted principles of international law and human rights." But that web post from 2018, authored by CEO Sundar Pichai, now has a note at the top of the page, reading "We’ve made updates to our AI Principles" and pushing readers to check out AI.Google for the latest. What's the latest? Well, all that stuff about not using AI for weapons and surveillance is just gone. Instead, there are three principles listed, with the top being "Bold Innovation." "We develop AI that assists, empowers, and inspires people in almost every field of human endeavor; drives economic progress; and improves lives, enables scientific breakthroughs, and helps address humanity’s biggest challenges," the website reads in the kind of Big Tech corporate speak we've all come to expect. Underneath that heading of innovation, you'll find the promise to develop AI "where the likely overall benefits substantially outweigh the foreseeable risks." The rest of the section mentions the "frontier of AI research" and hopes to be able to "accelerate scientific discovery." The second section, titled "Responsible development and deployment," finally gets into territory about the ethics of AI, but is much softer than anything the company was putting out in 2018. The company said it believes in "employing rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias." That last part is likely a nod to Republicans who so frequently whine that AI is biased against conservatives. Other changes are more subtle. Previously, the company said "we will not design or deploy AI" for "technologies whose purpose contravenes widely accepted principles of international law and human rights." Now, the mention of human rights pledges that the company will be "implementing appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." That's a small change, but it is a change that hypothetically allows for a lot more wiggle room. The company also says it will be, "Promoting privacy and security, and respecting intellectual property rights," perhaps an acknowledgment that so many AI tools have been trained on massive amounts of copyrighted material. What's behind this shift? It seems obvious at this point that Trump's ascendancy to the White House again means that Big Tech can drop the mask. Silicon Valley has long profited from contracts with the U.S. military. It's a big reason Silicon Valley even exists if you know anything about how it developed in the 1980s thanks to President Ronald Reagan's defense build-up pumping $5 billion into the region annually. 
But there was a period from roughly 2015 to 2025 when Big Tech didn't like the public relations nightmare of looking like they were on the side of people who drop bombs and arrest peaceful protesters. All of that is out the window now, as the big players in tech contribute millions to Trump and companies like Google decide they don't mind being seen as the cops. It's a dark world ahead for many reasons. But Big Tech dropping its mask in favor of Trumpism will probably reveal a side to Silicon Valley that used to be much more hush-hush.
[18]
Google retracts promise not to use AI for weapons or surveillance --...
Google employees are reportedly up in arms over management's decision to walk back a promise not to use artificial intelligence technology to develop weaponry and surveillance tools. The search giant's decision to revise its AI ethical guidelines, announced on Tuesday, marks a significant departure from the company's earlier commitment to abstain from AI applications that could cause harm or be used for military purposes. The updated guidelines no longer include a prior commitment to avoiding AI for weapons, surveillance, or other technologies that "cause or are likely to cause overall harm." Google did not explicitly acknowledge the removal of its AI weaponry ban in its official communications. The Post has sought comment from Google. Google employees expressed their concern about the new policy through posts on the company's internal message board, Memegen. One widely shared meme depicted Google CEO Sundar Pichai searching "how to become a weapons contractor" on Google, according to Business Insider. Another referenced a popular comedy sketch featuring an actor dressed as a Nazi soldier, captioned: "Google lifts a ban on using its AI for weapons and surveillance. Are we the baddies?" The "Are we the baddies?" meme comes from a British comedy sketch where two Nazi officers realize their skull-emblazoned uniforms might mean they're the villains. It's used humorously to depict moments of ethical self-reflection. A third meme featured Sheldon from "The Big Bang Theory" reacting to reports of Google's increasing collaboration with defense agencies, exclaiming, "Oh, that's why." While some employees voiced concerns, others within the company may support a more active role in defense technology, particularly as AI becomes a critical factor in global military and security strategies. In the wake of the Oct. 7 massacre by Hamas terrorists, Google faced criticism over its $1.2 billion "Project Nimbus" contract with Israel, with employees and activists arguing that its cloud technology could aid military and surveillance operations against Palestinians. The company fired more than two dozen employees who broke into executive offices and staged a sit-in that was live-streamed over the internet last year. The shift in Google's stance aligns with a broader industry trend of tech companies engaging more closely with national defense agencies. Companies like Microsoft and Amazon have secured lucrative government contracts involving AI and military applications, and Google's revised policy suggests a potential strategic realignment to remain competitive in the field. DeepMind CEO Demis Hassabis and James Manyika, the company's senior vice president for technology and society, defended the update in a blog post. They cited an "increasingly complex geopolitical landscape" as a driving factor behind the change. They emphasized the need for collaboration between governments and businesses to ensure AI remains aligned with democratic values. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the executives wrote. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." 
The company's historical reluctance to engage in military AI projects stems from employee-led protests in 2018, when workers successfully pressured Google to abandon a Pentagon contract known as "Project Maven," which aimed to enhance drone surveillance capabilities using AI. At the time, Google established a set of AI principles that placed clear restrictions on military applications. Now the original blog post outlining those principles has been updated to link to the revised guidelines, signaling a pivotal shift in company policy. With AI technology evolving rapidly and geopolitical competition intensifying, Google's new stance may position it to pursue defense-related contracts previously left to competitors. However, the internal backlash underscores the ethical dilemmas tech companies face as they navigate the intersection of innovation, corporate responsibility, and national security. Google parent Alphabet's stock dropped by more than 8% on Wednesday, wiping out more than $200 billion in market value, after the company announced increased AI spending despite slowing revenue growth. Shares of Alphabet ticked downward by around 0.5% as of 3 p.m. Eastern Time on Thursday. The company's stock was trading at around $192 a share. Investors are scrutinizing tech firms' rising AI costs, especially after Chinese startup DeepSeek reportedly trained a model for under $6 million without Nvidia's top hardware.
[19]
Google pledge against using AI for weapons vanishes
Google on Tuesday updated its principles when it comes to artificial intelligence, removing vows not to use the technology for weapons or surveillance. Revised AI principles were posted just weeks after Google chief executive Sundar Pichai and other tech titans attended the inauguration of US President Donald Trump. When asked by AFP about the change, a Google spokesperson referred to a blog post outlining the company's AI principles that made no mention of the promises, which Pichai first outlined in 2018. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," read an updated AI principles blog post by Google DeepMind chief Demis Hassabis and research labs senior vice president James Manyika. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," it continued. Pichai had previously stated that the company would not design or deploy the technology for weapons designed to hurt people or "that gather or use information for surveillance violating internationally accepted norms." That wording was gone from the updated AI principles shared by Google on Tuesday. Upon taking office, Trump quickly rescinded an executive order by his predecessor, former president Joe Biden, mandating safety practices for AI. Companies in the race to lead the burgeoning AI field in the United States now have fewer obligations to adhere to, such as being required to share test results signaling the technology has serious risks to the nation, its economy or its citizens. Google noted in its blog post that it publishes an annual report about its AI work and progress. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," Hassabis and Manyika said in their post. "Billions of people are using AI in their everyday lives." Google's original AI principles were published after employee backlash to its involvement in a Pentagon research project looking into using AI to improve the ability of weapons systems to identify targets.
[20]
Google Ditches Commitment to Not Use AI for Weapons and Surveillance
The company said the change was made due to global competition in AI. Google updated its Artificial Intelligence (AI) Principles, a document highlighting the company's vision around the technology, on Tuesday. The Mountain View-based tech giant earlier mentioned four application areas where it would not design or deploy AI. These included weapons and surveillance, as well as technologies that cause overall harm or contravene human rights. The newer version of its AI Principles, however, has removed the entire section, hinting that the tech giant might enter these previously forbidden areas in the future. The company first published its AI Principles in 2018, a time when the technology was not a mainstream phenomenon. Since then, the company has regularly updated the document, but over the years, the areas it considered too harmful to build AI-powered technologies for had not changed. On Tuesday, however, the section was found to have been removed from the page entirely. An archived web page on the Wayback Machine from last week still shows the section titled "Applications we will not pursue". Under this, Google had listed four items. First was technologies that "cause or are likely to cause overall harm," and the second was weapons or similar technologies that directly facilitate injury to people. Additionally, the tech giant also committed to not using AI for surveillance technologies that violate international norms, and for technologies that circumvent international law and human rights. The omission of these restrictions has raised concerns that Google might be considering entering these areas. In a separate blog post, Google DeepMind's co-founder and CEO Demis Hassabis and the company's senior vice president for technology and society, James Manyika, explained the reason behind the change. The executives highlighted the rapid growth in the AI sector, the increasing competition, and the "complex geopolitical landscape" as some of the reasons why Google updated the AI Principles. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the post added.
[21]
Google removes AI weapons ban from updated principles
Google has made a significant change to its artificial intelligence ethics, removing a longstanding pledge that the company would not use AI for weapons or surveillance purposes. This update, made to Google's AI principles, eliminates clauses that had previously prevented the tech giant from developing technologies that could cause harm or violate internationally accepted human rights. Instead, the updated principles are said to focus on responsible AI development, emphasizing human oversight and alignment with international law. While Google claims the revisions reflect the rapid evolution of AI technology since the principles were established in 2018, the move has sparked concern among experts and former employees. The shift comes at a time when artificial intelligence is advancing quickly and raising important questions about the balance between innovation and ethical responsibility. Some argue that the removal of these safeguards opens the door to potentially harmful AI applications, while others believe the new framework is simply an attempt to align with global industry standards. With this change, will other companies follow suit, and what impact could it have on the future of AI?
[22]
Google backs down on promise not to use AI for weapons or surveillance
Google appears to have loosened its self-imposed AI restrictions, removing the promise not to use its AI technology for weapons and surveillance. As CNN reports, the promise has disappeared from Google's new ethics policy. As a self-imposed restriction, there's nothing stopping Google from removing it, but the move is likely to raise concerns about the future uses of AI and how one of the biggest companies on the planet is planning to use the technology. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," said senior vice president of research, labs, technology & society James Manyika and Google DeepMind head Demis Hassabis. "We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." From the sounds of it, this means Google is removing its restrictions to better support nations using the power of AI. We'll have to see whether this has any real applications, but it's clear that AI is here to stay.
[23]
Google updates AI Principles, removes military application ban
Google has updated its AI principles, taking back its promise not to pursue AI applications in weaponry, in a change made on February 4. The company disclosed this change as a note in its 2018 AI Principles blog post. While the company previously said that it would not pursue AI applications in areas such as weapons, surveillance that violates internationally accepted norms, technologies likely to cause overall harm, and uses that contravene international law and human rights, the updated principles now just say that Google will implement "appropriate human oversight, due diligence, and feedback mechanisms" to align with user goals, social responsibility, widely accepted principles of international law, and human rights. Further, it emphasises that it will employ rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias. There are no use cases that the company outrightly bans anymore. Google is not the first company to allow for military use of its models. In January last year, OpenAI changed its usage policy, taking away the restriction on using its models for military and warfare purposes. Similarly, Meta announced in November last year that it would provide its open-source Llama AI models to U.S. defense and national security agencies. Reports suggest that research institutes associated with China's People's Liberation Army have also used Meta's publicly available Llama model to develop an AI tool for potential military applications. As such, Google's AI Principles are indicative of a shift in tech companies' stance on military uses of AI models. Even though AI companies specify they will take steps to prevent user harm, they no longer want to outrightly ban military use cases of AI. As companies change their stance around military uses, it makes one wonder: to what extent are self-imposed corporate ethics guidelines effective in regulating this rapidly evolving technology? Discussing the changes to its AI principles, the company's senior VP James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind, said in a blog post that there is global competition for AI leadership, adding that democracies should lead AI development, guided by core values like freedom, equality, and respect for human rights. "We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," they argued. Manyika and Hassabis said that the company will continue to focus on AI research and applications that stay consistent with "widely accepted principles of international law and human rights -- always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks." Further, they said that in addition to the AI principles, Google's AI products will continue to have specific policies of their own and clear terms of use that spell out prohibited/illegal use of its products/services.
[24]
Google backpedals on promise to not create AI for use in weapons and surveillance
Google has changed its internal guidelines on what it thinks is okay and not okay to design and deploy AI tools for. The change was spotted by The Washington Post, which reports the search engine quietly made significant changes to its AI principles, first published in 2018. Prior to the changes, Google stated it would not "design or deploy" AI tools that were going to be used in weapons or surveillance. However, the search engine now appears to be okay with its AI being used in both, as the new guidelines drop those pledges in favor of much vaguer promises. The new guidelines now have a section titled "responsible development and deployment," in which Google pledges to implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." Compared to the search engine's previous pledge, the new language is far broader and much more vague, especially considering how specific the previous commitment was: Google will not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." When asked about the changes, Google pointed to its recent blog post, wherein Google DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, wrote that the emergence of AI as a "general-purpose technology" warranted a change to Google's policy. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," wrote Hassabis and Manyika.
[25]
Google releases responsible AI report while removing its anti-weapons pledge
The company's annual reflection on safe AI development comes amid shifting guidance around military AI. The most notable part of Google's latest responsible AI report could be what it doesn't mention. (Spoiler: No word on weapons and surveillance.) On Tuesday, Google released its sixth annual Responsible AI Progress Report, which details "methods for governing, mapping, measuring, and managing AI risks," in addition to "updates on how we're operationalizing responsible AI innovation across Google." In the report, Google points to the many safety research papers it published in 2024 (more than 300), AI education and training spending ($120 million), and various governance benchmarks, including its Cloud AI receiving a "mature" readiness rating from the National Institute of Standards and Technology (NIST) Risk Management framework. The report focuses largely on security- and content-focused red-teaming, diving deeper into projects like Gemini, AlphaFold, and Gemma, and how the company safeguards models from generating or surfacing harmful content. It also touts provenance tools like SynthID -- a content-watermarking tool designed to better track AI-generated misinformation that Google has open-sourced -- as part of this responsibility narrative. Google also updated its Frontier Safety Framework, adding new security recommendations, misuse mitigation procedures, and "deceptive alignment risk," which addresses "the risk of an autonomous system deliberately undermining human control." Alignment faking, or the process of an AI system deceiving its creators to maintain autonomy, has recently been noted in models like OpenAI o1 and Claude 3 Opus. Overall, the report sticks to end-user safety, data privacy, and security, remaining within that somewhat walled garden of consumer AI. While the report contains scattered mentions of protecting against misuse, cyber attacks, and the weight of building artificial general intelligence (AGI), those also stay largely in this ecosystem. That's notable given that, at the same time, the company removed from its website its pledge not to use AI to build weapons or surveil citizens, as Bloomberg reported. The section titled "applications we will not pursue," which Bloomberg reports was visible as of last week, appears to have been removed. That disconnect -- between the report's consumer focus and the removal of the weapons and surveillance pledge -- does highlight the perennial question: What is responsible AI? As part of the report announcement, Google said it had renewed its AI principles around "three core tenets" -- bold innovation, collaborative progress, and responsible development and deployment. The updated AI principles refer to responsible deployment as aligning with "user goals, social responsibility, and widely accepted principles of international law and human rights" -- which seems vague enough to permit reevaluating weapons use cases without appearing to contradict its own guidance. "We will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise," the blog notes, "always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks." 
The shift adds a tile to the slowly growing mosaic of tech giants shifting their attitudes towards military applications of AI. Last week, OpenAI moved further into national security infrastructure through a partnership with US National Laboratories, after partnering with defense contractor Anduril late last year. In April 2024, Microsoft pitched DALL-E to the Department of Defense, but OpenAI maintained a no-weapons-development stance at the time.
[26]
Google Lifts a Ban on Using Its AI for Weapons and Surveillance
Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue "technologies that cause or are likely to cause overall harm," "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," "technologies that gather or use information for surveillance violating internationally accepted norms," and "technologies whose purpose contravenes widely accepted principles of international law and human rights." The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. "We've made updates to our AI Principles. Visit AI.Google for the latest," the note reads. In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the "backdrop" to why Google's principles needed to be overhauled. Google first published the principles in 2018 as it moved to quell internal protests over the company's decision to work on a US military drone program. In response, it declined to renew the government contract and also announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights. But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google's AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." Google also now says it will work to "mitigate unintended or harmful outcomes." "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," wrote James Manyika, Google senior vice president for research, technology and society, and Demis Hassabis, CEO of Google DeepMind, the company's esteemed AI research lab. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." They added that Google will continue to focus on AI projects "that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights." US President Donald Trump's return to office last month has galvanized many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the changes have been in the works much longer. Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as "be socially beneficial" and maintain "scientific excellence." Added is a mention of "respecting intellectual property rights."
[27]
Google drops pledge not to use AI for weapons or surveillance
In 2018 the company updated its policies to explicitly exclude applying AI to weapons. Now that promise is gone. Google on Tuesday updated its ethical guidelines around artificial intelligence, removing commitments not to apply the technology to weapons or surveillance. The company's AI principles previously included a section listing four "Applications we will not pursue." As recently as January 30 that included weapons, surveillance, technologies that "cause or are likely to cause overall harm," and use cases contravening principles of international law and human rights, according to a copy hosted by the Internet Archive. In a blog post published Tuesday, Google's head of AI Demis Hassabis and the company's senior vice president for technology and society James Manyika said Google was updating its AI principles because the technology had become much more widespread, and because there was a need for companies based in democratic countries to serve government and national security clients. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the two executives wrote. A spokesperson for Google declined to answer specific questions about Google's policies on weapons and surveillance. Investors and executives behind Silicon Valley's rapidly expanding defense sector frequently invoke Google employee pushback against Maven as a turning point within the industry. Google first published its AI principles in 2018 after employees protested a contract with the Pentagon applying Google's computer vision algorithms to analyze drone footage. The company also opted not to renew the contract. An open letter protesting the contract, known as Maven, and signed by thousands of employees addressed to CEO Sundar Pichai stated that "We believe that Google should not be in the business of war." This is a developing news story and will be updated.
[28]
Google removes pledge to not use AI for weapons, surveillance
Google has removed a pledge to abstain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company's updated "AI Principles." A prior version of the company's AI principles said the company would not pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," and "technologies that gather or use information for surveillance violating internationally accepted norms." Those objectives are no longer displayed on its AI Principles website. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," reads a Tuesday blog post co-written by Demis Hassabis, CEO of Google DeepMind. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights." The company's updated principles reflect Google's growing ambitions to offer its AI technology and services to more users and clients, which has included governments. The change is also in line with increasing rhetoric out of Silicon Valley leaders about a winner-take-all AI race between the U.S. and China, with Palantir's CTO Shyam Sankar saying Monday that "it's going to be a whole-of-nation effort that extends well beyond the DoD in order for us as a nation to win." The previous version of the company's AI principles said Google would "take into account a broad range of social and economic factors." The new AI principles state Google will "proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides." In its Tuesday blog post, Google said it will "stay consistent with widely accepted principles of international law and human rights -- always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks." The new AI principles were first reported by The Washington Post on Tuesday, ahead of Google's fourth-quarter earnings. The company's results missed Wall Street's revenue expectations and drove shares down as much as 9% in after-hours trading.
[29]
Google owner drops promise not to use AI for weapons
Alphabet guidelines no longer refer to not pursuing technologies that could 'cause or are likely to cause overall harm' The Google owner, Alphabet, has dropped its pledge not to use artificial intelligence for purposes such as developing weapons and surveillance tools. The US technology company said on Tuesday, just before it reported lower than forecast earnings, that it had updated its ethical guidelines around AI, and they no longer refer to not pursuing technologies that could "cause or are likely to cause overall harm". Google's AI head, Demis Hassabis, said the guidelines were being overhauled in a changing world and that AI should protect "national security". In a blogpost defending the move, Hassabis and the company's senior vice-president for technology and society, James Manyika, wrote that as global competition for AI leadership increases, the company believes "democracies should lead in AI development" that is guided by "freedom, equality, and respect for human rights". They added: "We believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." Google's motto when it first floated was "don't be evil", although this was later downgraded to a "mantra" in 2009 and was not included in the code of ethics of Alphabet when the parent company was created in 2015. The rapid growth of AI has prompted a debate about how the new technology should be governed, and how to guard against its risks. The British computer scientist Stuart Russell has warned of the dangers of developing autonomous weapon systems, and argued for a system of global control, speaking in a Reith lecture on the BBC. The Google blogpost argued that since the company first published its AI principles in 2018, the technology had evolved rapidly. "Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications," Hassabis and Manyika wrote. "It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers." Google's shares fell 7.5% in after-hours trading, after Tuesday's report that it made $96.5bn (£77bn) in consolidated revenue, slightly below analyst expectations of $96.67bn.
[30]
Google quietly changes stance on using AI for weapons or surveillance - SiliconANGLE
Google LLC has made a major change to its AI Principles, taking out the part where it used to say it wouldn't use the technology for surveillance or weapons applications. The old policy, which the company first released in 2018, stated that it would not pursue any AI developments that were "likely to cause harm," and it would not "design or deploy" AI tools that could be used for weapons or surveillance technologies. That section is now gone. In its place, Google states the onus is on "responsible development and deployment." This, says Google, will only be implemented with "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." Google hasn't denied there has been some amount of tinkering with the philosophy around AI. In a blog post written shortly before the end-of-year financial report, Google's senior vice president James Manyika and Demis Hassabis, head of AI lab Google DeepMind, stated that governments now need to work together to support "national security." The technology has "evolved" since 2018, said the post, which, it seems, meant the principles needed some fine-tuning. "Billions of people are using AI in their everyday lives," it said. "AI has become a general-purpose technology, and a platform which countless organizations and individuals use to build applications. It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself." The post added that global competition regarding AI has heated up in what the pair said was an "increasingly complex geopolitical landscape." They said they believe "democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights." The change is reminiscent of the one that happened to Google's motto. Founders Sergey Brin and Larry Page introduced the motto "Don't be evil," which was updated to "Do the right thing" in 2015. The company has since treaded carefully where its ethics and technologies are concerned, dropping a U.S. Department of Defense contract for AI surveillance technology in 2018 after an outcry from its staff and the public. That was when Google introduced new guidelines for its use of AI in defense and intelligence contracts.
[31]
New Google AI principles no longer ban weapons and surveillance use
However, these promises are now missing from its updated principles, which were shared in a recent blog post. The updated AI principles come as Google positions itself for a larger role in the AI industry. The company has made strides in offering its technology to governments and is focusing on a growing demand for AI services worldwide. In its Tuesday blog post, Demis Hassabis, CEO of Google DeepMind, noted that the global competition for AI leadership is intensifying. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," Hassabis wrote. The shift in Google's approach to AI is also tied to the rising tensions between the United States and China in the race for AI dominance. Shyam Sankar, CTO of Palantir, commented on the issue, stating that "it's going to be a whole-of-nation effort" for the U.S. to win the AI race. This idea is becoming more common in Silicon Valley, where companies like Google are keen to grow their government contracts. The move signals a new phase where the company is less focused on avoiding certain projects and more focused on the potential benefits of AI in sectors like national security.
[32]
Google backtracks on pledge not to make weapons using AI
Google has backtracked on a pledge not to use artificial intelligence (AI) in weapons, saying that free countries should be able to use the technology for national security purposes. The tech giant has scrubbed a longstanding promise not to use the technology to develop weapons capable of harming people from a list of corporate principles. In a blog post published on Tuesday, James Manyika, the senior vice-president at Google-Alphabet, and Sir Demis Hassabis, the chief executive of the Google DeepMind AI lab, said its AI would help "support national security". They warned of a "global competition taking place for AI leadership within an increasingly complex geopolitical landscape". It follows the emergence of DeepSeek, a Chinese AI chatbot that has outperformed Western rivals on several industry tests. Experts have warned that China's AI businesses have caught up with their American rivals in what has been described as a "Sputnik moment" for the industry. Concerns have been raised that China will seek to use its AI breakthroughs to bolster the capabilities of the People's Liberation Army. A Pentagon report last year said Beijing saw AI as the next "revolution in military affairs" and was working on the technology to develop advanced "autonomous and precision-strike weapons".
[33]
Google owner Alphabet drops promise over 'harmful' AI uses
They argue businesses and democratic governments need to work together on AI that "supports national security". There is debate amongst AI experts and professionals over how the powerful new technology should be governed in broad terms, how far commercial gains should be allowed to determine its direction, and how best to guard against risks for humanity in general. There is also controversy around the use of AI on the battlefield and in surveillance technologies. The blog said the company's original AI principles published in 2018 needed to be updated as the technology had evolved. "Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications. "It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself," the blog post said. As a result baseline AI principles were also being developed, which could guide common strategies, it said. However, Mr Hassabis and Mr Manyika said the geopolitical landscape was becoming increasingly complex. "We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights," the blog post said. "And we believe that companies, governments and organisations sharing these values should work together to create AI that protects people, promotes global growth and supports national security." The blog post was published just ahead of Alphabet's end of year financial report, showing results that were weaker than market expectations, and knocking back its share price. That was despite a 10% rise in revenue from digital advertising, its biggest earner, boosted by US election spending. In its earnings report the company said it would spend $75bn (£60bn) on AI projects this year, 29% more than Wall Street analysts had expected. The company is investing in the infrastructure to run AI, AI research, and applications such as AI-powered search. Google's AI platform Gemini now appears at the top of Google search results, offering an AI-written summary, and pops up on Google Pixel phones. Originally, long before the current surge of interest in the ethics of AI, Google's founders, Sergey Brin and Larry Page, said their motto for the firm was "don't be evil". When the company was restructured under the name Alphabet Inc in 2015 the parent company switched to "Do the right thing". Since then Google staff have sometimes pushed back against the approach taken by their executives. In 2018 the firm did not renew a contract for AI work with the US Pentagon following resignations and a petition signed by thousands of employees. They feared "Project Maven" was the first step towards using artificial intelligence for lethal purposes.
[34]
Google removes pledge to not use AI for weapons from website | TechCrunch
Google removed a pledge to not build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, erasing a section titled "applications we will not pursue," which was still included as recently as last week. Asked for comment, the company pointed TechCrunch to a new blog post on "responsible AI." It notes, in part, "we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." Google's newly updated AI principles note the company will work to "mitigate unintended or harmful outcomes and avoid unfair bias," as well as align the company with "widely accepted principles of international law and human rights." In recent years, Google's contracts to provide the U.S. and Israeli militaries with cloud services have sparked internal protests from employees. The company has maintained that its AI is not used to harm humans; however, the Pentagon's AI chief recently told TechCrunch that some companies' AI models are speeding up the U.S. military's kill chain.
[35]
Google ditched its pledge not to use AI for weapons and surveillance
The change appears in the tech giant's Responsible AI Progress Report for 2024, released Tuesday, with updated AI Principles that focus on three areas: innovation, responsible AI development and deployment, and collaboration. Under its responsible development and deployment principle, Google said it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." However, under its previous AI Principles, Google explicitly said it would "not pursue" AI that could be used for applications such as "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and "technologies that gather or use information for surveillance violating internationally accepted norms." The change was first spotted by The Washington Post. Google first published its AI Principles in 2018 after not renewing its contract with the Pentagon for Project Maven -- a military partnership where Google provided the U.S. Department of Defense with AI technology to analyze drone footage. The controversial contract was protested by thousands of Google employees, with some even resigning over the partnership. In its announcement on Tuesday, Google noted a "global competition" for leadership in AI amid "an increasingly complex geopolitical landscape." "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the company said in the post co-authored by Google DeepMind CEO Demis Hassabis. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." Meanwhile, Google parent Alphabet missed Wall Street's expectations for the fourth quarter despite "robust momentum across the business." Alphabet reported revenues of $96.5 billion for the fourth quarter -- a 12% increase year over year. The company also reported earnings of $2.15 per share -- up 31% from the previous year, and net income of $26.5 billion for the quarter ended in December. Alphabet stock plunged more than 8% in after-hours trading on Tuesday after it reported earnings and remained down by more than 8% during Wednesday morning trading.
[36]
Google's updated AI principles leave some wiggle room for causing harm
Summary: Google removes pledge to avoid harmful AI applications, signaling a shift in stance. Google hints at AI's role in national security and emphasizes collaboration for global protection. New guidelines maintain commitment to social responsibility and following international law in AI development. Google's made a curious change to its public-facing AI principles. As spotted by The Washington Post (and picked up by The Verge), on Tuesday, Google published an updated version of its guidelines for AI development that removes references to the company's prior commitment to avoid using AI in applications that "cause or are likely to cause overall harm" -- including AI-enhanced weapons and surveillance tech. In a blog post accompanying the company's new guidelines, Google's James Manyika and Demis Hassabis discuss the company's rationale for recent changes. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," the post reads. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights." The bit about democracies leading the way in AI development seems like an oblique reference to China's DeepSeek AI, which sent shock waves through the US stock market when it was made widely available in late January. The post goes on to say that the public and private sectors should collaborate "to create AI that protects people, promotes global growth, and supports national security." It's worth noting here that Google hasn't said it intends to weaponize AI. But removing public pledges not to do so, in conjunction with a post from Google AI leaders that talks up AI's future role in national security, sure makes it look like the company is more open to leveraging AI in potentially harmful ways than it used to be. The Verge points out that Google's previously lent its prowess to military operations, despite its former promise not to create AI weapons. Google AI was used in a 2018 project by the US military to analyze drone footage, and a few years later, Google worked with Amazon to fulfill a $1.2 billion contract to provide cloud services to the Israeli government and military. Google says it's still committed to 'social responsibility' in AI. In its blog post about recent changes to its AI principles, Manyika and Hassabis write that the company still considers it "an imperative to pursue AI responsibly throughout the development and deployment lifecycle." The post also pledges that Google's AI development will "stay consistent with widely accepted principles of international law and human rights." You can see an archived version of Google's previous AI principles here.
[37]
Google scraps promise not to develop AI weapons
Google updated its artificial intelligence principles on Tuesday to remove commitments around not using the technology in ways "that cause or are likely to cause overall harm." A scrubbed section of the revised AI ethics guidelines previously committed Google to not designing or deploying AI for use in surveillance, weapons, and technology intended to injure people. The change was first spotted by The Washington Post and captured here by the Internet Archive.
[39]
Google Tweaks AI Pledge, Promise Not To 'Cause Harm' Is Missing - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
Google, in 2018, pledged not to use AI for weapons, surveillance or to "cause harm." Google previously had strict guidelines when it came to artificial intelligence. The Mountain View, California-based company, a subsidiary of Alphabet Inc. (NASDAQ: GOOG, GOOGL), once pledged not to use the burgeoning tech field for harm. Not anymore. What Happened: Google DeepMind CEO Demis Hassabis and Senior Vice President James Manyika co-authored a blog post that included an updated list of so-called "AI Principles." (DeepMind is an artificial intelligence research laboratory.) The original list, published in 2018, included applications that the company pledged not to "pursue." That included weapons, surveillance and technologies that "cause or are likely to cause overall harm." Google tweaked those ethical guidelines as of today. "There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the executives wrote. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." Why It Matters: Big tech is currently in an AI arms race. The debut of Hangzhou, China-based DeepSeek sent the share prices of U.S.-based tech companies plummeting. And that was just days after President Donald Trump boasted of a $500 billion "Stargate" project with OpenAI. AI's foremost innovators are at odds over whether there should be a pause in AI development. Skadden, Arps, Slate, Meagher & Flom LLP expects government regulation in this sector to be light. Meanwhile, Alphabet is profiting off of AI big time, according to its fourth-quarter earnings. Total revenue, $96.5 billion, is up 12% year-over-year. CEO Sundar Pichai credited the stellar financial results to the company's AI innovations. "We are building, testing and launching products and models faster than ever, and making significant progress in compute and driving efficiencies," Pichai said. "Our results show the power of our differentiated full-stack approach to AI innovation and the continued strength of our core business," he added. Alphabet expects to have capital expenditures of $75 billion in 2025. GOOG Price Action: Alphabet stock closed Tuesday up 2.50% at $207.71 per share. In after-hours trading, the stock was down 7.33% at $192.60.
[40]
Google Removes Language on Weapons From Public AI Principles
Alphabet Inc.'s Google has removed a key passage about applications it will not pursue from its publicly listed artificial intelligence principles, which guide the tech giant's work in the industry. The company's AI Principles previously included a passage titled "applications we will not pursue," such as "technologies that cause or are likely to cause overall harm," including weapons, according to screenshots viewed by Bloomberg. That language is no longer visible on the page.
Google has quietly removed its commitment not to use AI for weapons or surveillance, signaling a shift towards potential military applications amidst growing competition and national security concerns.
In a significant shift from its previous ethical stance, Google has quietly removed key passages from its AI principles that had committed the company to avoid using artificial intelligence for potentially harmful applications, including weapons and surveillance 1. This change, first noticed by Bloomberg, marks a departure from Google's earlier position on responsible AI development 5.
The now-deleted section of Google's AI principles, titled "AI applications we will not pursue," had explicitly stated that the company would refrain from developing technologies "that cause or are likely to cause overall harm," with weapons being a specific example 2. This revision comes in the wake of U.S. President Donald Trump revoking former President Joe Biden's executive order aimed at promoting safe, secure, and trustworthy development and use of AI 1.
Google's decision follows a recent trend of big tech companies entering the national security arena and accommodating more military applications of AI 2.
Google has defended this change, citing global AI competition, complex geopolitical landscapes, and national security interests as reasons for revising its AI principles 2. The company's AI chief, Demis Hassabis, framed the change as inevitable progress rather than a compromise 3.
However, this shift has raised concerns among experts and former employees.
The revision of Google's AI principles is part of a larger trend among tech giants to reconsider previously held ethical positions 5. This shift could lead to broader military and surveillance applications of AI, and to other companies revisiting similar commitments.
As the international community watches with concern, there are growing calls for legally binding regulations to ensure human oversight and prevent the development of fully autonomous weapons 3. The Future of Life Institute has proposed a tiered system for treating military AI systems, similar to the oversight of nuclear facilities 3.
Reference
[1]
[2]
[3]