15 Sources
[1]
Google employees ask Sundar Pichai to say no to classified military AI use
Over 600 Google employees signed a letter to CEO Sundar Pichai demanding that Google block the Pentagon from using its AI models for classified purposes, reports the Washington Post. Its organizers claim many of the signers work in Google's DeepMind AI lab, and include more than 20 principals, directors, and vice presidents. According to the Post, the letter says that "The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads. Otherwise, such uses may occur without our knowledge or the power to stop them." Anthropic is currently in a legal battle with the Pentagon over being designated a "supply chain risk," after refusing to loosen guardrails around how the US military can use its AI models, with support from across the tech industry, including employees at Google. The letter specifically references a recent report by The Information that said Google and the Pentagon are discussing a deal for deploying its Gemini AI in classified settings. Microsoft already has deals to provide AI services in classified environments, and OpenAI announced a renegotiated agreement with the Pentagon in February.
[2]
Google Staff Urge Pichai to Refuse Classified Military AI Work
Hundreds of AI researchers at Alphabet Inc.'s Google have signed onto a letter urging Chief Executive Officer Sundar Pichai to refuse to make the company's artificial intelligence systems available for classified workloads for US defense missions, according to organizers of the effort. "We are Google employees who are deeply concerned about ongoing negotiations between Google and the US Department of Defense," reads the letter, which was provided to Bloomberg News. "As people working on AI, we know that these systems can centralize power and that they do make mistakes." Organizers of the letter said it has garnered more than 580 signatures, and about two thirds of signatories had agreed to be named while roughly a third requested anonymity. They said it would be sent to Pichai on Monday. Bloomberg spoke with three employees involved in organizing the letter, all of whom requested anonymity for fear of retaliation. The organizers shared some of those names for verification purposes, but Bloomberg wasn't able to vet the entire list. A spokesperson for Google didn't immediately respond to a request for comment. The protest letter closely follows a legal imbroglio between the Pentagon and Anthropic PBC over the use of AI for military applications. The Pentagon is seeking to eject Anthropic and its Claude AI tool from US defense supply chains and is casting around for new tech giant AI partners. The workforce protest marks a new effort by workers in Silicon Valley to curtail the use of AI, and the risks associated with such tools, in classified national security settings. The Pentagon is seeking to pour billions of dollars into expanding military usage of AI and developing autonomous weapons. Google employees were among the first to sound the alarm about the risks of AI warfare in 2018 and force the company to limit its defense work. But the company's ties to the US defense industry have been re-established in recent years, and it has watered down its own AI red lines. "We want to see AI benefit humanity, not to see it being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance but extends beyond," the letter states. "Currently, the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads. Otherwise, such uses may occur without our knowledge or the power to stop them." Sofia Liguori, an AI research engineer at Google DeepMind in the UK, said she signed the letter because she believes Google has failed to discuss with workers any concrete red lines about usage of its AI on classified or other networks.
In addition, she believes it would be impossible for the company to monitor and limit how its AI tools are actually used on "air-gapped" classified systems -- isolated from the public internet or other unsecured networks. Liguori, a trained theoretical physicist from near Milan, said the main response to worker concerns about the US military's use of Google AI has been to encourage the workforce to trust company leadership to sign good contracts. "But it's all left very broad," she said. "Agentic AI is particularly concerning because of the level of independence it can get to. It's like giving away a very powerful tool at the same time as giving up on any kind of control on its usage." Organizers said signatories of the letter include more than 20 directors, senior directors and vice presidents, in addition to a number of senior employees at Google DeepMind, the company's AI research laboratory that seeks to keep Google at the cutting edge of the AI race while unlocking applications that could benefit humanity. A protest in 2018 by Google workers over the company's work with the US military marked a previous high point of tension between the Pentagon and Silicon Valley. The employees said they were appalled to learn that Google had signed on to work on what they termed "the business of war" under Project Maven, a Pentagon effort to use AI to detect and analyze objects on drone video feeds. In the face of protests and resignations, the company ultimately introduced new AI principles and decided against renewing its contract for Project Maven. But last year, Google removed a passage from its artificial intelligence principles that pledged to avoid using the technology in potentially harmful applications, such as weapons. The organizers of Monday's letter said in a statement that "Maven is not over." "Workers are going to continue organizing against the weaponization of Google's AI technology until the company draws clear, enforceable lines," they said. In recent years, Google has strengthened its ties with the Pentagon. In March, for instance, the company made available its Gemini AI agents for the Pentagon's three million-strong workforce at the unclassified level, after previously making available its Gemini chatbot in December. Emil Michael, the under secretary of defense for research and engineering, told Bloomberg in March that the Pentagon would start with unclassified usage of Google's Gemini agents and "then we'll get to classified and top secret." He added that talks with Google over using the company's AI agents on the classified cloud were already underway. In April, the Information, a publication focused on the technology industry, reported that negotiations are underway between Google and the Pentagon for "all lawful uses" of the company's AI tools. That description falls short of red lines cited by leadership at rival Anthropic, which worried that all lawful use could feasibly include using AI on fully autonomous weapons systems and for domestic mass surveillance. The Pentagon strongly contested such characterizations and argued commercial companies shouldn't be able to dictate usage policies during wartime or preparations for war.
[3]
Google signs classified AI deal with Pentagon, The Information reports
April 28 (Reuters) - Alphabet's Google (GOOGL.O) joined a growing list of technology firms to sign a deal with the U.S. Department of Defense to use its artificial intelligence models for classified work, The Information reported on Tuesday, citing a person familiar with the matter. The agreement allows the Pentagon to use Google's AI for "any lawful government purpose", the report added, putting it alongside OpenAI and Elon Musk's xAI, which also have deals to supply AI models for classified use. Classified networks are used to handle a wide range of sensitive work, including mission planning and weapons targeting. The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google. The Pentagon is seeking to preserve all flexibility in defense and not be limited by warnings from the technology's creators against powering weapons with unreliable AI. Google's agreement requires it to help in adjusting the company's AI safety settings and filters at the government's request. The contract includes language noting "the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control", according to the report, but also adds that the "Agreement does not confer any right to control or veto lawful Government operational decision-making". Reuters could not verify the report. Alphabet and the U.S. Department of Defense, which has now been renamed the Department of War by President Donald Trump, did not immediately respond to requests for comment. A spokesperson for Google Public Sector, the unit that handles U.S. government business, told The Information that the new agreement is an amendment to its existing contract. Reuters had earlier reported that the Pentagon had been pushing top AI companies such as OpenAI and Anthropic to make their tools available on classified networks without the standard restrictions they apply to users. Reporting by Chandni Shah in Bengaluru; Editing by Sherry Jacob-Phillips and Rashmi Aich
[4]
Google staff urge chief executive to block US military AI use
More than 560 Google employees have signed an open letter to chief executive Sundar Pichai urging him to refuse to let the US government use its AI technology for classified military operations. "We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways," read the letter, which was sent to Pichai on Monday. "This includes lethal autonomous weapons and mass surveillance, but extends beyond." "The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads," it continued. "Otherwise, such uses may occur without our knowledge or the power to stop them." Big tech companies are under pressure to take a stance on military and intelligence use of their AI products after a clash between the Pentagon and AI start-up Anthropic. Anthropic's chief executive Dario Amodei refused to give the government unfettered access to its models and insisted on guardrails to prevent them being used for lethal autonomous weapons and mass domestic surveillance. In response, Anthropic was designated a supply-chain risk and President Donald Trump ordered that all government departments stop using its Claude chatbot. Anthropic has challenged the designation in court. Alphabet staff are responding to reports that Google is close to agreeing terms with the Department of Defense that will allow its Gemini model to be used in classified operations without the formal safeguards that Anthropic demanded. "This isn't just about the military, AI-powered mass surveillance is a direct threat to American civil liberties," said one person involved in the campaign, who asked to remain anonymous. "This is not low-risk and theoretical; we are already in these fights. We see AI being used to support authoritarianism in China." The letter to Pichai was co-ordinated by staff at DeepMind, Google's AI lab, said two of the people involved. Two-fifths of signatories work in the AI division, with a similar share in the Cloud unit and the rest across Alphabet. More than 18 senior staff -- including principals, directors and vice-presidents -- have signed, they added. About two-thirds chose to be named, with the rest deciding to remain anonymous. DeepMind's chief scientist Jeff Dean has been the most vocal executive on the issue so far. In February, he posted on X that "Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression." He added that he still backed a 2018 commitment to ban lethal autonomous weapons. Google has faced previous protests against its military ties. In 2018, several staff quit and thousands signed a petition against Project Maven, which used AI to improve drone strikes. Google did not renew the contract and pledged not to work on AI for weapons or surveillance. However, last year it quietly dropped that stance in an update of its AI Principles, deleting language that promised not to pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people". Co-founder Demis Hassabis explained the decision by saying that the world has changed since Google acquired DeepMind in 2014. Multiple frontier models are now widely available and US tech companies have a duty to help the country defend itself. "We take a lot of inspiration from what happened before, with the anti-Maven protest, and the fact that many senior leaders share these views," said a second person involved in the letter. 
"There is almost total consensus against the programme in DeepMind." OpenAI faced a backlash from its researchers after striking its own deal with the government soon after the Anthropic ban. Chief executive Sam Altman later apologised, calling his actions "opportunistic and sloppy". "Making the wrong call right now would cause irreparable damage to Google's reputation, business, and role in the world," the letter concludes. "We know from our own history that our leaders can make the right choices, for ourselves and for the world, when the stakes are high."
[5]
Google signs classified AI deal with the Pentagon
The Information reported the deal on Tuesday. The agreement allows the DoD to use Google's AI models without the restrictions that led Anthropic to be blacklisted in February. Google becomes the latest in a line of AI companies, alongside OpenAI and xAI, to supply classified AI capability to the US military. Google has signed a classified AI deal with the US Department of Defense allowing the Pentagon to use Google's AI models for "any lawful government purpose," The Information reported on Tuesday, citing a person familiar with the matter. The deal was reported hours after more than 560 Google employees published an open letter to CEO Sundar Pichai on Monday, urging him to refuse exactly this kind of classified military AI arrangement. Google had not publicly confirmed or commented on the agreement at the time of this article's publication. The agreement, as characterised by The Information, is structured without the ethical restrictions that Anthropic included in its Pentagon contract, the restrictions that led to Anthropic being designated a national security supply chain risk and blacklisted by the Trump administration in February 2026. Where Anthropic refused to remove contractual prohibitions on mass domestic surveillance and fully autonomous weapons without human oversight, Google's deal is described as permitting "any lawful government purpose" without such carve-outs. That framing aligns Google's deal with the unrestricted model preferred by the Trump administration, rather than the amended model that OpenAI negotiated, which included red lines on domestic surveillance while remaining within the Pentagon's contracting framework. The Pentagon has now signed classified AI deals with four of the largest AI companies in the United States: OpenAI, xAI, Google, and, until its blacklisting, Anthropic. The sequencing is notable. Anthropic was removed from the supplier pool for maintaining ethical restrictions; OpenAI renegotiated to stay in while preserving some restrictions; xAI signed without apparent restrictions; and now Google has signed in language that appears to give the broadest discretion of all to the Pentagon. The result is a classified AI vendor pool from which Anthropic is excluded, and in which the three remaining suppliers have varying but significant latitude to provide AI capability for military applications. The timing relative to Monday's employee letter is the most striking element. The 560 employees who signed the letter to Pichai on Monday morning were, by Tuesday morning, employees of a company that had signed the deal they were asking Pichai to refuse. That creates a direct and uncomfortable contrast that Pichai will be asked to address in town halls, in press conferences, and in the Musk v. Altman trial courtroom if the question of Google's AI ethics posture becomes relevant to testimony. Google has never confirmed the specific terms of its Pentagon AI engagements, and the "any lawful government purpose" framing comes from a single anonymous source reported by The Information. The employee letter and the Pentagon deal together define the fault line that every major AI company is now navigating. On one side: the US government's demand for unrestricted AI capability for classified military use. On the other: the published AI ethics principles that companies adopted, partly in response to the 2018 Project Maven controversy, that commit them to avoiding AI weapons without human oversight. Anthropic chose its principles and was blacklisted. 
OpenAI and Google appear to have chosen the contracts. Whether that choice is temporary, commercially reversible, or permanent will depend on how the political environment evolves, and on whether the 560 signatories of Monday's letter, and those who might join them, can change the calculus internally.
[6]
Google Employees Say They Do Not Want to Fill the Gap Left by Anthropic
With Anthropic and the US Department of Defense still in the process of mending their relationship, there remains a vacuum for other companies to pick up the slack and offer AI services for dubious purposes. Google employees are making it clear to their employer that they have no interest in filling that gap. More than 600 Googlers, including members of the company's DeepMind lab, signed a letter to CEO Sundar Pichai demanding that he refuse to allow Google's artificial intelligence tools to be used by the Pentagon for classified work. The signees -- which, according to The Verge, included principals, directors, and vice presidents within the company -- noted that their work on AI has made them acutely aware of the potential dangers of using the technology without guardrails. Given that the Pentagon demanded Anthropic ignore its red lines for AI use and allow the agency to do anything it wants, including performing domestic surveillance and using fully autonomous weapons, it's a reasonable concern for Googlers to raise. "As people working on AI, we know that these systems can centralize power and that they do make mistakes," the employees wrote in the letter. "We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses." To that end, they explicitly mentioned the very same clauses that created issues for Anthropic as their primary objections. "We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance but extends beyond," the letter reads. The employees argued, "The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads. Otherwise, such uses may occur without our knowledge or the power to stop them." Google has yet to publicly acknowledge the letter, but it seems the company has been playing footsie with the military. A report from The Information earlier this month suggested that Google was exploring a deal with the Pentagon, similar to the one OpenAI entered into when it offered up its services following the DoD's spat with Anthropic. The fact that OpenAI and Google have decided, as corporations, to do business with the Pentagon, which has demanded fealty from any AI partners so they can use the models for "all lawful uses," is pretty cowardly. Especially considering both firms filed amicus briefs in support of Anthropic in its lawsuit against the government after it was labeled a "supply chain risk." They're basically letting Anthropic take the hit of actually fighting the fight against the Pentagon while raking in lucrative contracts with the agency. They're also doing so in direct opposition to their respective employees, as OpenAI workers also petitioned leadership not to do business with the military without specific restrictions in place.
[7]
580+ Google employees including DeepMind researchers urge Pichai to refuse classified Pentagon AI deal
More than 580 Google employees, including over 20 directors, senior directors, and vice presidents, have signed a letter urging CEO Sundar Pichai to refuse classified military AI work for the Pentagon, according to Bloomberg. The letter, which includes senior researchers at Google DeepMind, was sent to Pichai on Monday. "We are Google employees who are deeply concerned about ongoing negotiations between Google and the US Department of Defense," it reads. "As people working on AI, we know that these systems can centralize power and that they do make mistakes." The signatories want Google to reject all classified workloads, arguing that on air-gapped classified networks, isolated from the public internet, the company would have no ability to monitor or limit how its AI tools are actually used. "Currently, the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads," the letter states. "Otherwise, such uses may occur without our knowledge or the power to stop them." Google employees have fought this fight before. In 2018, roughly 4,000 workers signed an internal petition and at least 12 resigned over Project Maven, a Pentagon programme that used AI to detect and analyse objects in drone video feeds. The protest forced Google to introduce AI principles pledging not to pursue weapons or surveillance technology, and to let the Maven contract expire in March 2019. Palantir took it over. The Maven contract was worth a few million dollars. Palantir's Maven investment has since grown to $13 billion. The 2018 victory was real, but it was also the last time Google's workforce successfully constrained the company's defence ambitions. In the years since, Google has systematically rebuilt every bridge the protest burned. In December 2022, Google won a share of the Pentagon's $9 billion Joint Warfighting Cloud Capability contract alongside Amazon, Microsoft, and Oracle. In February 2025, Google removed the passage from its AI principles that pledged to avoid using the technology in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and to avoid "technologies that gather or use information for surveillance violating internationally accepted norms." A blog post co-authored by Demis Hassabis, CEO of Google DeepMind, cited "a global competition taking place for AI leadership" as justification. Human Rights Watch and Amnesty International both condemned the reversal. In December 2025, the Pentagon launched GenAI.mil, a platform powered by Google's Gemini chatbot, available to all defence personnel. Defence Secretary Pete Hegseth said "the future of American warfare is here, and it's spelled AI." In March 2026, Google deployed Gemini AI agents to the Pentagon's three-million-strong workforce at the unclassified level, with eight pre-built agents for tasks including summarising meeting notes, building budgets, and checking actions against defence strategy. The classified deal is the next step. Emil Michael, the under secretary of defence for research and engineering, told Bloomberg in March that the Pentagon would "start with unclassified because that's where most of the users are, and then we'll get to classified and top secret." He confirmed that talks with Google over using Gemini agents on classified cloud infrastructure were already underway. 
In April, The Information reported that negotiations are progressing toward "all lawful uses" of Google's AI tools, a phrase that falls short of the red lines Anthropic established before being designated a supply-chain risk by the Pentagon for refusing to remove restrictions on autonomous weapons and domestic mass surveillance. The Pentagon strongly contested Anthropic's characterisation and argued that commercial companies should not be able to dictate usage policies during wartime or preparations for war. OpenAI signed its own Pentagon deal hours after the Anthropic blacklisting, with three stated red lines: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions. But the enforcement of those red lines on classified networks is the question that Google's employees are raising. On an air-gapped system, the AI operates in a network that is, by design, disconnected from Google's infrastructure. Google cannot see what queries are being run, what outputs are being generated, or what decisions are being made with those outputs. The "trust us" assurance from Pentagon leadership is the only mechanism preventing uses that would violate any red line the company might negotiate. Sofia Liguori, an AI research engineer at Google DeepMind in the UK who signed the letter, told Bloomberg that the main response to worker concerns has been to encourage the workforce to trust company leadership to sign good contracts. "But it's all left very broad," she said. "Agentic AI is particularly concerning because of the level of independence it can get to. It's like giving away a very powerful tool at the same time as giving up on any kind of control on its usage." The Pentagon's AI budget tells the story of what the classified deal would fund. The fiscal 2026 defence budget included $13.4 billion dedicated to AI and autonomy. The fiscal 2027 request, submitted in April, asks for $54.6 billion for the Defence Autonomous Warfare Group, a 24,000% increase over the prior year, within a total defence budget of $1.5 trillion that represents a 42% year-over-year increase. The Pentagon is already testing humanoid robot soldiers with Foundation Future Industries and has formalised Palantir's Maven as a core military system with multi-year funding. The scale of military AI investment has moved from the experimental phase that characterised Project Maven in 2018 to an industrial buildout that treats AI as a foundational capability of the American military. The classified workloads that Google's employees are objecting to would sit at the centre of that buildout. The letter's organisers said "Maven is not over. Workers are going to continue organizing against the weaponization of Google's AI technology until the company draws clear, enforceable lines." The framing is significant. In 2018, the fight was about one contract for one programme. In 2026, the fight is about whether Google's entire AI stack, Gemini, DeepMind's research, the TPU chips that power inference, becomes military infrastructure on classified networks where no one outside the Pentagon can see what it does. The paradox of the administration blacklisting Anthropic while urging banks to adopt its AI illustrates the political environment: companies that resist unconstrained military use face designation as supply-chain risks, while companies that comply receive contracts worth billions. 
Google's employees are asking Pichai to refuse a deal that the Pentagon has made clear it will punish refusal of, at a moment when the company has spent three years rebuilding its defence credentials precisely to win that deal. The 580 signatures are notable for their seniority. Twenty directors, senior directors, and vice presidents have signed, along with senior DeepMind researchers. Two-thirds of signatories agreed to be named; a third requested anonymity for fear of retaliation. An earlier cross-company letter in February, signed by approximately 800 Google employees and 100 OpenAI employees, expressed support for Anthropic's stance against unrestricted military AI use. Over 100 DeepMind employees separately signed an internal letter demanding that no DeepMind research or models be used for weapons development or autonomous targeting. Google's chief scientist Jeff Dean wrote on X that "mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression." The internal dissent is not marginal. It extends into the technical leadership that builds the systems the Pentagon wants to deploy. But the gap between internal dissent and corporate decision-making has widened since 2018. In 2018, 4,000 signatures and a dozen resignations were enough to kill a contract worth a few million dollars. In 2026, 580 signatures face a classified AI market worth tens of billions, a Pentagon that has shown it will retaliate against companies that refuse its terms, a company that has already removed its own red lines, and a CEO who approved the Gemini deployment to three million Pentagon personnel without, according to the letter's organisers, discussing concrete usage restrictions with the workforce that built it. Trump has signalled openness to a Pentagon deal with Anthropic if the company drops its restrictions, suggesting that the administration views compliance as the eventual destination for every AI company, regardless of where it starts. Google's employees are asking their company to be the exception. The company's trajectory over the past three years suggests it intends to be the rule.
[8]
Google Employees Demand CEO Block Military AI Contracts in Open Letter - Decrypt
The letter demands transparency on existing Pentagon contracts and creation of an ethics board with employee representation. More than 580 Google employees have signed an open letter demanding CEO Sundar Pichai block the Pentagon from using the search giant's artificial intelligence technology for military applications, escalating Silicon Valley's internal battles over defense work. The letter calls for an immediate moratorium on deploying Google's AI for military purposes. "We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways," the employees wrote in the document. The signatories, who include more than 18 senior staff ranging from principals and directors to vice presidents, specifically express concerns about the company's cloud computing services and machine learning tools potentially powering lethal autonomous weapons systems. They demand greater transparency around existing Pentagon contracts and want Google to establish a permanent ethics board with employee representation to review future military partnerships. Employees pointed to Google's 2018 withdrawal from Project Maven -- a Pentagon program that used AI to analyze drone footage -- as precedent for the company declining military work on ethical grounds. The Google employees' action follows a high-profile clash between the Pentagon and Anthropic, which saw the Department of Defense drop the AI startup two months ago after the company declined to remove contractual restrictions on domestic surveillance and fully autonomous weapons use. The Defense Secretary subsequently designated Anthropic a supply chain risk, prompting the White House to order federal agencies to phase out its tools. A federal judge temporarily blocked the ban in March. The pushback comes as the Pentagon increasingly views AI as central to future military operations. Chairman of the Joint Chiefs of Staff General Dan Caine has described autonomous weapons as a "key and essential part of everything we do" going forward. Despite the U.S. government's dispute with Anthropic, the NSA has reportedly been granted access to Mythos Preview, an AI model that Anthropic has restricted to a small cadre of researchers and cybersecurity organizations. President Donald Trump recently suggested tensions may be easing, describing Anthropic as "shaping up."
[9]
Hundreds of Google workers urge CEO to refuse classified AI work with Pentagon
Hundreds of Google employees are urging CEO Sundar Pichai to refuse to make the company's artificial intelligence tools available to the Pentagon in classified settings. In an open letter to the chief executive, Google workers assigned to AI systems voiced concerns about the tech giant's ongoing negotiations with the U.S. Department of Defense, saying the technology is not appropriate for "classified workloads." "We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses," the letter read. "Therefore, we ask you to refuse to make our AI systems available for classified workloads." Neither Google nor the Pentagon immediately responded to CBS News' request for comment on the letter. They fear that if made available for military applications, Google's AI systems could be used in "inhumane or extremely harmful ways," the employees wrote. The letter cited lethal autonomous weapons and mass surveillance as examples of potentially harmful applications of AI. "Making the wrong call right now would cause irreparable damage to Google's reputation, business and role in the world," the letter added. Google is negotiating a possible deal with the Defense Department to deploy its AI in classified work, according to reporting from The Information. OpenAI earlier this year struck an agreement with the Pentagon, which agreed not to use OpenAI technology for mass domestic surveillance or to direct autonomous weapons systems.
[10]
Hundreds of Google employees sign letter urging CEO to reject US military AI use - SiliconANGLE
Around 600 employees at Google LLC have signed a letter urging Chief Executive Sundar Pichai not to make the company's artificial intelligence tools available to the Pentagon in classified settings. The open letter, signed mostly by staff working with Google's AI systems, airs concern over the "ongoing negotiations" between Google and the Department of Defense. "As people working on AI, we know that these systems can centralize power and that they do make mistakes," the letter said. "We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses." This comes after recent reports stating that Google is currently in negotiations with the DOD relating to its Gemini AI models being used in classified settings. The reports suggest that if the deal goes through, the DOD will be able to use the AI systems for all lawful purposes. The latter expression was the reason why Anthropic PBC is currently in a legal wrangle with the U.S. government: the company launched lawsuits after its negotiations with the Pentagon over a $200 million contract broke down. Anthropic was opposed to the military using its Claude system "for all lawful purposes." The DOD then designated Anthropic as a "supply chain risk." OpenAI Group PBC subsequently revised a deal it had made with the Pentagon, with the new language reported to deter the use of its AI for mass surveillance. One of the revised clauses read that the AI will not be used for "deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." The Google staffers remain critical of contractual language, writing that the only way to ensure the technology isn't used for such purposes is to "reject any classified workloads." Last year, Google made changes to its AI Principles. Following protests by staff at the company in 2018 over the technology being used for certain military purposes, Google assured its staff it would not pursue AI developments that would be "likely to cause harm," and it would not "design or deploy" AI tools that could be used for weapons or surveillance technologies. That language has now evolved. "Making the wrong call right now would cause irreparable damage to Google's reputation, business and role in the world," the signatories warned in the most recent letter. They went on, "Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we are playing a key role in building."
[11]
Google workers urge CEO to reject classified AI work with Pentagon
Hundreds of employees at Google are pressing the company's CEO to reject any deal with the Pentagon to use the company's artificial intelligence in classified settings, warning of the same risks as Anthropic before it was banned in military work earlier this year. The letter, signed by more than 600 employees at Google DeepMind and Cloud, comes nearly two months after Anthropic was dropped from the Department of Defense after requesting guardrails around AI used for domestic mass surveillance or autonomous weapons. The letter, sent Monday to Google CEO Sundar Pichai, argues Google does not have a way at this point to guarantee the company's tools would not risk unmonitored harm. "As people working on AI, we know that these systems can centralize power and that they do make mistakes," employees wrote in the letter, a copy of which was shared with The Hill. "We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses." The Information reported earlier this month Google is negotiating an agreement with the Pentagon to deploy its Gemini AI models in classified settings. The agreement reportedly would allow the Pentagon to use Google's AI for all lawful purposes. The parties also discussed language to prevent its AI from being used for mass surveillance or autonomous weapons without human control, The Information reported, but signatories on Monday's letter argued enforcing these provisions in practice is not possible. "The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads," the letter said. Google currently has a contract with the Pentagon to use its AI models on non-classified workloads through its genAI.mil platforms. The employees warned any approval of Google's AI in classified work could cause "irreparable harm to Google's reputation, business and role in the world." "A lot of it comes down to what technical safeguards companies can put in place; but the DoD specifically prohibits any controls," one of the letter's organizers said in a press release. "If leadership is truly serious about preventing downstream harms, they must reject classified workloads entirely for now." The Hill reached out to the Pentagon and Google for comment. The issue of the Pentagon's use of AI was thrown into the spotlight earlier this year after the defense agency labelled Anthropic a supply chain risk when it would not agree for its models to be used for any lawful purpose. Anthropic has sued the Trump administration over the designation, which is usually reserved for foreign adversaries. Hundreds of Google and OpenAI employees signed a letter in support of Anthropic at the time. Within hours of the designation late February, OpenAI, the maker of ChatGPT, struck a deal with the DOD. The move quickly drew backlash, and CEO Sam Altman later said the company asked for additions to the contract regarding domestic surveillance, admitting the deal "looked opportunistic and sloppy."
[12]
Google signs classified AI deal with Pentagon: The Information
Google has inked a deal with the U.S. Department of Defense, allowing the Pentagon to utilise its AI models for classified government work. This agreement, alongside similar ones with OpenAI and xAI, permits 'any lawful government purpose.' While Google's contract includes safeguards against domestic mass surveillance and autonomous weapons, it doesn't grant the company veto power over military decisions. Alphabet's Google joined a growing list of technology firms to sign a deal with the U.S. Department of Defense to use its artificial intelligence models for classified work, The Information reported on Tuesday, citing a person familiar with the matter. The agreement allows the Pentagon to use Google's AI for "any lawful government purpose", the report added, putting it alongside OpenAI and Elon Musk's xAI, which also have deals to supply AI models for classified use. Classified networks are used to handle a wide range of sensitive work, including mission planning and weapons targeting. The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google. Reuters had earlier reported that the Pentagon had been pushing top AI companies such as OpenAI and Anthropic to make their tools available on classified networks without the standard restrictions they apply to users. Safety and oversight Google's agreement requires it to help in adjusting the company's AI safety settings and filters at the government's request, according to The Information report. The contract includes language stating, "the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control." However, the agreement also says it does not give Google the right to control or veto lawful government operational decision-making, the report added. Reuters could not verify the report. The U.S. Department of Defense, which has now been renamed the Department of War by President Donald Trump, did not immediately respond to requests for comment. Google said it supports government agencies across both classified and non-classified projects. A spokesperson for the company said that the company remains committed to the consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight. "We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security," a spokesperson for Google told Reuters. The Pentagon has said it has no interest in using AI to conduct mass surveillance of Americans or to develop weapons that operate without human involvement, but wants 'any lawful use' of AI to be allowed. Anthropic faced fallout with the Pentagon earlier in the year after the startup refused to remove guardrails against using its AI for autonomous weapons or domestic surveillance, and the department designated the Claude-maker a supply-chain risk.
[13]
Google inks deal allowing Pentagon to use AI models for classified work
Alphabet's Google has joined a growing list of technology firms to sign a deal with the U.S. Department of Defense, allowing its artificial intelligence models to be used for classified work, The Information reported Tuesday, citing a person familiar with the matter. The agreement allows the Pentagon to use Google's AI for "any lawful government purpose", the report added, putting it alongside OpenAI and Elon Musk's xAI, which also have deals to supply AI models for classified use. Classified networks are used to handle a wide range of sensitive work, including mission planning and weapons targeting.
[14]
Google signs Pentagon deal to provide AI models for classified government work
Alphabet's Google joined a growing list of technology firms to sign a deal with the US Department of Defense to use its artificial intelligence models for classified work, The Information reported on Tuesday, citing a person familiar with the matter. The agreement allows the Pentagon to use Google's AI for "any lawful government purpose", the report added, putting it alongside OpenAI and Elon Musk's xAI, which also have deals to supply AI models for classified use. Classified networks are used to handle a wide range of sensitive work, including mission planning and weapons targeting. The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google. The Pentagon is seeking to preserve all flexibility in defense and not be limited by warnings from the technology's creators against powering weapons with unreliable AI. Google's agreement requires it to help in adjusting the company's AI safety settings and filters at the government's request. Google Pentagon AI contract restricts military use The contract includes language noting "the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control", according to the report, but also adds that the "Agreement does not confer any right to control or veto lawful Government operational decision-making". Reuters could not verify the report. Alphabet and the US Department of Defense, which has now been renamed the Department of War by President Donald Trump, did not immediately respond to requests for comment. A spokesperson for Google Public Sector, the unit that handles US government business, told The Information that the new agreement is an amendment to its existing contract. Reuters had earlier reported that the Pentagon had been pushing top AI companies such as OpenAI and Anthropic to make their tools available on classified networks without the standard restrictions they apply to users.
[15]
Hundreds of Google employees urge CEO not to sign deal with Pentagon in open letter
Hundreds of Google employees signed an open letter on Monday urging CEO Sundar Pichai not to allow the Pentagon to use the company's artificial intelligence systems for classified workloads. The letter, which was signed by more than 18 senior staff members and hundreds of other employees, stated that the signatories feel their "proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses." Concerns were expressed about potential lethal autonomous weapons and mass surveillance uses, but "extend beyond" that, according to the letter. "Currently, the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads," it continued. In addition to concerns over unethical use, the signatories expressed concern about the platform's accuracy, noting that AI systems do make mistakes. Potential deal between Pentagon and Google The letter follows a report published in The Information on negotiations between the US Department of Defense and Google over a possible deal. The report stated that the agreement would allow the Pentagon to use Google's AI for all lawful uses. These reported negotiations come two months after the Department of Defense called Anthropic a supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries. Defense Secretary Pete Hegseth's unprecedented move to bar Anthropic from certain military contracts followed the company's refusal to allow the military to use its AI chatbot, Claude, for US surveillance or autonomous weapons. US President Donald Trump also urged all government departments to stop using Anthropic systems. Reuters contributed to this report.
Over 600 Google employees, including senior staff from DeepMind, signed a letter to CEO Sundar Pichai demanding the company refuse Pentagon deals for classified military AI use. The protest came just hours before reports emerged that Google had signed an agreement allowing the Department of Defense to use its AI models for any lawful government purpose, without the ethical restrictions that led to Anthropic's blacklisting.
More than 600 Google employees have signed a letter to CEO Sundar Pichai urging him to block the Pentagon from using Google AI for classified military work [1][2]. The letter, organized primarily by staff at DeepMind, includes more than 20 principals, directors, and vice presidents [2][4]. About two-thirds of signatories agreed to be named publicly, while roughly a third requested anonymity for fear of retaliation [2].
The letter states that "the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads. Otherwise, such uses may occur without our knowledge or the power to stop them" [1]. Employees expressed deep concern about autonomous weapons and mass surveillance, warning that AI systems can centralize power and make critical mistakes [2].

The timing proves particularly striking. Just hours after the letter was sent to Sundar Pichai on Monday, The Information reported that Google had signed a classified AI deal with the Department of Defense [3][5].
The agreement allows the Pentagon to use Google AI models for "any lawful government purpose," placing it alongside OpenAI and xAI in supplying AI for classified use [3]. The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google [3]. Google's agreement requires the company to adjust AI safety settings and filters at the US government's request [3]. While the contract includes language noting the AI system "should not be used for domestic mass surveillance or autonomous weapons without appropriate human oversight and control," it also states the agreement "does not confer any right to control or veto lawful Government operational decision-making" [3].

This workforce protest marks a return to tensions that first erupted in 2018 over Project Maven, when Google employees forced the company to limit its defense work after learning the tech industry giant had signed on to use AI to detect and analyze objects on drone video feeds [2][4]. Google did not renew that contract and pledged not to work on AI for weapons or surveillance [4]. However, the company quietly dropped that stance last year in an update of its AI principles, deleting language that promised not to pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" [4]. DeepMind co-founder Demis Hassabis explained the decision by saying the world has changed and US tech companies have a duty to help defend the country [4].

The protest closely follows a legal battle between the Pentagon and Anthropic over military AI applications [1][2]. Anthropic refused to loosen guardrails around how the US military can use its AI models and was subsequently designated a "supply chain risk" by the Pentagon [1][5]. The company has challenged this designation in court with support from across the tech industry [1].

Sofia Liguori, an AI research engineer at Google DeepMind in the UK, said she signed the letter because Google has failed to discuss concrete red lines about usage of its AI on classified workloads [2]. She believes it would be impossible for the company to monitor and limit how its AI tools are actually used on air-gapped classified systems isolated from the public internet [2]. "Agentic AI is particularly concerning because of the level of independence it can get to. It's like giving away a very powerful tool at the same time as giving up on any kind of control on its usage," Liguori stated [2].

The letter specifically references recent reports about Google and the Pentagon discussing a deal for deploying Gemini AI in classified settings [1]. Microsoft already has deals to provide AI services in classified environments, and OpenAI announced a renegotiated agreement with the Pentagon in February [1]. OpenAI faced backlash from its researchers after striking its deal, with CEO Sam Altman later apologizing and calling his actions "opportunistic and sloppy" [4].
The Google employees' letter concludes with a stark warning about reputational damage: "Making the wrong call right now would cause irreparable damage to Google's reputation, business, and role in the world. We know from our own history that our leaders can make the right choices, for ourselves and for the world, when the stakes are high" [4]. The contrast between employee demands and reported Pentagon agreements creates a direct challenge that will test whether workforce pressure can still influence military AI policy as it did during Project Maven, or whether competitive pressure and national security arguments will prevail in shaping how classified networks deploy frontier AI models from major labs.