Google AI Pentagon deal triggers employee revolt as 600+ workers demand classified military AI ban

Reviewed by Nidhi Govil

Over 600 Google employees, including senior staff from DeepMind, signed a letter to CEO Sundar Pichai demanding the company refuse Pentagon deals for classified military AI use. The protest came just hours before reports emerged that Google had signed an agreement allowing the Department of Defense to use its AI models for any lawful government purpose, without the ethical restrictions that led to Anthropic's blacklisting.

Google Employees Challenge Pentagon AI Partnership

More than 600 Google employees have signed a letter to CEO Sundar Pichai urging him to refuse Pentagon use of Google's AI in classified military settings [1][2]. The letter, organized primarily by staff at DeepMind, counts more than 20 principals, directors, and vice presidents among its signatories [2][4]. About two-thirds of signatories agreed to be named publicly, while roughly a third requested anonymity for fear of retaliation [2].

Source: The Hill

The letter states that "the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads. Otherwise, such uses may occur without our knowledge or the power to stop them" [1]. Employees expressed deep concern about autonomous weapons and mass surveillance, warning that AI systems can centralize power and make critical mistakes [2].

Pentagon Deal Reported Hours After Protest

The timing is particularly striking: just hours after the letter was sent to Sundar Pichai on Monday, The Information reported that Google had signed a classified AI deal with the Department of Defense [3][5].

Source: FT

The agreement allows the Pentagon to use Google AI models for "any lawful government purpose," placing Google alongside OpenAI and xAI in supplying AI for classified use [3].

The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google [3]. Google's agreement requires the company to adjust AI safety settings and filters at the US government's request [3]. While the contract includes language noting the AI system "should not be used for domestic mass surveillance or autonomous weapons without appropriate human oversight and control," it also states the agreement "does not confer any right to control or veto lawful Government operational decision-making" [3].

Echoes of Project Maven and Shifting Red Lines

This workforce protest marks a return to tensions that first erupted in 2018 over Project Maven, when Google employees forced the company to limit its defense work after learning it had signed on to use AI to detect and analyze objects in drone video feeds [2][4]. Google did not renew that contract and pledged not to work on AI for weapons or surveillance [4].

However, the company quietly dropped that stance last year in an update to its AI principles, deleting language that promised not to pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" [4]. DeepMind co-founder Demis Hassabis explained the decision by saying the world has changed and US tech companies have a duty to help defend the country [4].

The Anthropic Precedent and Ethical Implications of AI

The protest closely follows a legal battle between the Pentagon and Anthropic over military AI applications [1][2]. Anthropic refused to loosen guardrails around how the US military can use its AI models and was subsequently designated a "supply chain risk" by the Pentagon [1][5]. The company has challenged that designation in court with support from across the tech industry [1].

Sofia Liguori, an AI research engineer at Google DeepMind in the UK, said she signed the letter because Google has failed to set concrete red lines for how its AI may be used in classified workloads [2]. She believes it would be impossible for the company to monitor and limit how its AI tools are actually used on air-gapped classified systems isolated from the public internet [2]. "Agentic AI is particularly concerning because of the level of independence it can get to. It's like giving away a very powerful tool at the same time as giving up on any kind of control on its usage," Liguori stated [2].

Stakes for Google's Gemini AI and Future Defense Collaborations

The letter specifically references recent reports that Google and the Pentagon have discussed a deal to deploy Gemini AI in classified settings [1]. Microsoft already has deals to provide AI services in classified environments, and OpenAI announced a renegotiated agreement with the Pentagon in February [1]. OpenAI faced backlash from its researchers after striking its deal, with CEO Sam Altman later apologizing and calling his actions "opportunistic and sloppy" [4].

Source: ET

The employees' letter concludes with a stark warning about reputational damage: "Making the wrong call right now would cause irreparable damage to Google's reputation, business, and role in the world. We know from our own history that our leaders can make the right choices, for ourselves and for the world, when the stakes are high" [4]. The contrast between employee demands and the reported Pentagon agreement sets up a direct test: whether workforce pressure can still shape military AI policy as it did during Project Maven, or whether competitive pressure and national security arguments will prevail in deciding how classified networks deploy frontier AI models from major labs.

© 2026 TheOutpost.AI All rights reserved