9 Sources
[1]
Google DeepMind Workers Vote to Unionize Over Military AI Deals
Employees at Google DeepMind in London have voted to unionize as part of a bid to block the AI lab from providing its technology to the US and Israeli militaries. In a letter addressed to Google's managing director for the UK and Ireland, Debbie Weinstein, the workers asked the company to recognize the Communication Workers Union and Unite the Union as joint representatives for DeepMind employees. "Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with," John Chadfield, national officer for technology at the CWU, tells WIRED. "Through the process of unionization, workers are collectively in a much stronger place to put [demands] to an increasingly deaf management." The push to unionize began in February 2025, when Google's parent company Alphabet removed a pledge not to use AI for purposes like weapons development and surveillance from its ethics guidelines, according to a DeepMind employee, who asked to remain anonymous for fear of retaliation. "A lot of people here bought into the Google DeepMind tagline 'to build AI responsibly to benefit humanity,'" the DeepMind employee told WIRED. "The direction of travel is to further militarization of the AI models we're building here." Increasingly, that concern is reflected across the industry. In late February, staff at DeepMind and OpenAI signed an open letter in support of Anthropic, after the US Department of Defense sought to designate the lab a supply chain risk over its refusal to allow its AI to be used in autonomous weapons or for mass surveillance of US citizens. Last week, The New York Times reported that Google had entered into a deal allowing the Pentagon to use its AI for "any lawful government purpose." 
(On Friday, the US Department of Defense confirmed that it had reached deals with seven leading AI companies -- including Google, SpaceX, OpenAI, and Microsoft -- to use their models on classified networks.) Roughly 600 US-based Google employees reportedly signed a letter protesting the deal. "We think the [any lawful purpose] clause is vague enough to be effectively meaningless," the DeepMind employee says. Google did not respond immediately to a request for comment. The company has previously defended its deals with government organizations. "We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security," Jenn Crider, a Google spokeswoman, told The New York Times last week. "We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight." In 2021, Google employees in the US formed the Alphabet Workers Union. The union is not recognized by Alphabet for collective bargaining purposes, but has previously succeeded in negotiating agreements on behalf of Google contractors. The DeepMind employee tells WIRED that if the staff succeeds in unionizing in the UK, they will likely demand that Google pull out of its long-standing contract with the Israeli military, seek greater transparency over how its AI products will be used, and secure some form of assurance relating to layoffs made possible by automation. If Google does not engage, the letter states, the employees will ask an arbitration committee to compel the company to recognize the unions. Since the turn of the year, both Anthropic and OpenAI have announced large-scale expansions of their operations in London. CWU hopes the unionization effort at DeepMind will spur workers at those labs into similar action. "These conversations are happening," claims Chadfield. 
"The workers at other frontier labs have seen what Google DeepMind workers have done. They've come to us asking for help as well."
[2]
Google DeepMind workers are unionizing over AI military contracts
Staffers at Google DeepMind's headquarters have voted to unionize in an effort to prevent the AI firm's technology from being used by Israel and the US military. In a letter to Google management on Tuesday, employees requested that the Communication Workers Union (CWU) and Unite the Union be recognized as joint representatives, with 98 percent of CWU members at DeepMind voting in support of the move. "We don't want our AI models complicit in violations of international law, but they already are aiding Israel's genocide of Palestinians," an unnamed DeepMind employee said in a statement shared by the CWU. "Even if our work is only used for administrative purposes, as leadership has repeatedly told us, it is still helping make genocide cheaper, faster, and more efficient. That must end immediately, as must harm to Iranians and human lives anywhere."
[3]
Google's Pentagon AI deal reportedly drove the DeepMind team to unionize - Engadget
Google's UK-based DeepMind workers have voted to unionize. They're sending the company's management a letter asking it to recognize the Communication Workers Union and Unite the Union as their representatives, according to The Guardian. The workers voted back in April, driven by reports that the company was close to reaching a deal with the US Defense Department. They reportedly cited the US government's "capricious Iran war" and its feud with Anthropic as evidence that the department was "not a responsible partner." The Pentagon announced last week that it had signed deals with several leading AI companies, including Google. Those agreements allow the Defense Department to use their AI technologies for "any lawful use." According to a report from The Information, the deals don't allow the government to use the technologies in domestic mass surveillance or autonomous weapons projects "without appropriate human oversight and control." However, Google and the other companies don't have "any right to control or veto" the government's decisions on how and where to use their AI tech. Google Gemini and its other AI technologies, which were part of the agreement, were developed by the company's unified AI team, which included the DeepMind workers. Some of the workers who spoke to The Guardian also expressed concerns that the technology they developed was being used to help the Israel Defense Forces (IDF). Google reportedly worked with the IDF to expand its access to the company's AI tools, and it entered into a $1.2 billion cloud computing contract with the Israeli government back in 2021. According to The Guardian, the workers wanted to organize to put pressure on Google so that it would commit not to develop technology "whose primary purpose is to cause harm or injury to people." They also want the company to establish an independent ethics oversight body and to give workers the right to refuse to contribute to specific projects on moral grounds.
[4]
Google DeepMind workers vote to unionise after classified Pentagon AI deal overrides eight years of ethics pledges
In 2018, four thousand Google employees signed a petition against Project Maven, a Pentagon contract that used the company's AI to analyse drone surveillance footage. Google did not renew the contract. It published a set of AI principles pledging not to develop weapons or surveillance technology that violates international norms. It built an AI ethics team. The episode was treated as proof that tech workers could shape the moral boundaries of the companies they worked for. Eight years later, Google has signed a classified AI deal with the Pentagon for "any lawful governmental purpose," removed its weapons pledge from its published principles, and fired the leaders of the ethics team it created in response to Project Maven. The researchers who built the AI now being offered to the military have responded in the only way the company has left them: they have voted to unionise. Workers at Google DeepMind's UK offices voted in April to join the Communication Workers Union and Unite the Union, with 98 per cent of ballots cast in favour. They sent a letter to management this week requesting formal recognition of the unions as their official representatives. If recognised, they would become the first frontier AI laboratory in the world to have unionised workers. The vote was not primarily about pay, benefits, or working conditions. It was about the Pentagon. More than 580 Google employees, including 20 directors and vice presidents and senior DeepMind researchers, had already signed a letter urging CEO Sundar Pichai to refuse the classified military AI deal. Over 100 DeepMind employees separately signed an internal letter demanding that no DeepMind research or models be used for weapons development or autonomous targeting. The company signed the deal anyway. 
The union demands are specific: an end to the use of Google AI by the Israeli military and the US military, the restoration of the company's scrapped commitment not to build AI weapons or surveillance tools, the creation of an independent ethics oversight body, and the individual right for researchers to refuse to contribute to projects on moral grounds. These are not typical union demands. They are governance demands, imposed from below because the governance structures that were supposed to exist from above (the AI principles, the ethics board, the internal review processes) were dismantled or overridden when they conflicted with revenue. Google signed the classified Pentagon deal for "any lawful purpose" while simultaneously withdrawing from a $100 million drone swarm competition after an internal ethics review, a contradiction that researchers described as incoherent. The classified deal gives the Pentagon access to Google's AI models on air-gapped networks where Google cannot monitor what queries are run, what outputs are generated, or what decisions are made. DeepMind research scientist Alex Turner criticised the agreement publicly, posting that Google "can't veto usage" and is relying on "aspirational language with no legal restrictions." The contract includes advisory guardrails discouraging mass surveillance and autonomous weapons without human oversight, but the government can request adjustments to safety settings, and on a classified network there is no independent verification that any guardrail is honoured. Google's deal is reportedly more permissive than OpenAI's, which retains "full discretion" over its safety mechanisms. Only Anthropic refused to grant the Pentagon unrestricted access, insisting that its models not be used for autonomous weapons or mass domestic surveillance. 
The Pentagon designated Anthropic a supply-chain risk in response, ordered the military to stop using its products, and signed deals with seven other companies, including Google, that agreed to the terms Anthropic rejected. The message to AI researchers is plain: the company that maintained ethical limits was punished, and the companies that removed theirs were rewarded. The 2018 Project Maven protest succeeded because Google's business did not depend on military contracts. The company could afford to walk away from a few million dollars in Pentagon revenue without material impact on its advertising-driven business model. In 2026, the classified AI market is worth tens of billions of dollars, the Pentagon has demonstrated that it will retaliate against companies that refuse to cooperate, and Google's competitors have already signed equivalent deals. The structural conditions that made worker leverage possible in 2018 no longer exist. Google removed its explicit pledge not to develop AI for weapons from its published principles in February 2025, a quiet edit that eliminated the internal standard employees had used to challenge military projects. The firing of Timnit Gebru and Margaret Mitchell, the co-leads of Google's Ethical AI team, in 2020 and 2021 was the first signal that internal dissent on AI ethics would not be tolerated. The firing of 28 employees who protested Project Nimbus, the $1.2 billion contract providing cloud computing and AI services to the Israeli government, in 2024 was the second. By 2026, the pattern is clear: Google will pursue military and government AI contracts regardless of internal objection, and employees who object publicly will be removed. The union vote is the workers' response to that pattern. If individual protest results in dismissal, collective bargaining is the remaining mechanism for exerting influence over how the technology they build is used. 
Meta and Microsoft have collectively cut 23,000 jobs while increasing AI capital expenditure by tens of billions, converting human payroll into GPU infrastructure. The restructuring of Big Tech around AI is eliminating roles across customer support, content moderation, quality assurance, and engineering while concentrating investment in the researchers and engineers who build the models. DeepMind's workers are among the most valuable employees in the AI industry, and their decision to unionise reflects an awareness that their leverage is temporary: as models become more capable, the number of researchers needed to advance the frontier may shrink, and the window for workers to shape how their work is used narrows with every generation of model that requires fewer humans to build. Chinese courts have ruled that replacing workers with AI is not legal grounds for dismissal, establishing a precedent that the most aggressive AI-deploying economy in the world has limits on how the technology can be used to eliminate human roles. The ruling illustrates a global divergence: governments are beginning to define the boundaries of AI's impact on workers, but the boundaries differ by jurisdiction, and the ethical use of AI in military applications has no comparable legal framework in any country. DeepMind's union is operating in a gap between employment law, which protects the right to organise, and defence procurement, where governments have broad discretion over which AI capabilities they acquire and how they use them. The practical impact of a DeepMind union depends on whether Google recognises it voluntarily. UK law provides a statutory recognition process if the employer refuses, but that process can take months and requires demonstrating majority support within a defined bargaining unit. 
Even with recognition, the union's ability to influence military contracts is limited: collective bargaining in the UK covers pay, hours, and working conditions, not corporate strategy or government procurement decisions. The union's leverage is reputational and retention-based. If enough senior researchers leave, or credibly threaten to leave, over military AI contracts, the cost to Google's research capabilities could exceed the revenue from the contracts themselves. But that calculation depends on whether the researchers are irreplaceable, and in a market where every AI lab is hiring, the answer is less clear than it was in 2018. What the DeepMind union represents is something larger than a labour dispute. It is the first organised attempt by the people who build frontier AI to claim a formal role in deciding how that AI is used. The question the union raises is whether the researchers who create the most powerful technology in the world have any right to constrain its application, or whether that right belongs entirely to the companies that employ them and the governments that buy from them. In 2018, Google's workers won that argument without a union. In 2026, they have concluded that they cannot win it without one. Whether they are right will depend not on the outcome of a recognition ballot but on whether the company values the researchers who build its AI more than it values the military contracts their AI enables. The union is a bet that it does. The Pentagon deal is evidence that it does not.
[5]
DeepMind Workers Vote to Unionize Following Google’s Pentagon AI Deal
Workers are standing up to big tech's push to partner with the war machine. Staff at Google's AI research lab DeepMind have voted to unionize due to growing concerns surrounding the company's ties to the U.S. military and the use of its AI products in warfare. Workers at DeepMind's London headquarters sent a letter to management on Tuesday asking the company to recognize the Communication Workers Union and Unite the Union as their joint representatives. Citing unnamed DeepMind employees, The Guardian reports that the vote took place last month amid reports that Google was preparing to sign a deal with the U.S. Department of Defense allowing the Pentagon to use the company's AI models in classified settings. Google ultimately signed the agreement last week despite employee backlash. The day before news of the Google-Pentagon AI deal broke, more than 600 Google employees, including directors and vice presidents, signed a letter to CEO Sundar Pichai urging the company not to allow its AI systems to be used in classified military settings. "We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance but extends beyond," the letter read. One DeepMind worker told The Guardian that the U.S. war in Iran and the Trump administration's feud with Anthropic suggested that the Pentagon is "not a responsible partner." "I have joined the union due to concerns about AI being used to empower authoritarianism, whether through military or surveillance applications, both foreign and domestic," the worker told The Guardian. "By unionizing, we are taking the traditional route for workers to organize and have a say." The tech branch of the Communication Workers Union also posted a list of demands from DeepMind staff, including calls for stronger AI principles. 
Primarily, the workers want Google to commit to not developing weapons or AI systems meant to harm people or surveillance technology that could violate human rights. The workers are also calling for stronger whistleblower protections and the right for employees to abstain from work that violates their ethical or moral beliefs. In the letter to management, the unions warned that if Google refuses to voluntarily recognize them, they will ask the U.K.'s Central Arbitration Committee to intervene and potentially force the company to negotiate. Google did not immediately respond to a request for comment from Gizmodo. However, the company previously defended its deal with the Pentagon. "We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security. We support government agencies across both classified and non-classified projects, applying our expertise to areas like logistics, cybersecurity, diplomatic translation, fleet maintenance, and the defense of critical infrastructure," a Google spokesperson told Gizmodo in an emailed statement at the time. "We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight." All of this comes as Silicon Valley's biggest tech companies are increasingly agreeing to let their AI models be used in classified U.S. military work. Last week, Microsoft, Nvidia, Amazon Web Services, and startup Reflection AI were announced as the latest companies to sign agreements with the Pentagon related to classified AI work. They joined Google, OpenAI, and SpaceX, bringing the total number of major AI companies participating in classified military projects to seven.
[6]
Google DeepMind workers in UK vote to unionize amid deal with US military
Exclusive: Worker pointed to Iran war and Pentagon's Anthropic feud as indications the department is 'not a responsible partner' Workers developing Google's artificial intelligence products in the UK have voted to unionize, in part out of concerns about a deal between the company and the US military that was announced last week. In a letter slated to go to management on Tuesday and shared exclusively with the Guardian, workers at Google DeepMind, the company's AI research laboratory, requested recognition of the Communication Workers Union and Unite the Union as joint representatives of the lab's UK-based staff. DeepMind's UK workers voted to unionize in April. One of the workers said they were particularly driven by reports that Google was close to reaching a deal with the defense department and pointed to the US's "capricious Iran war" and the Trump administration's feud with Anthropic as indications that the department is "not a responsible partner". The deal was ultimately announced on Friday. "I have joined the union due to concerns about AI being used to empower authoritarianism, whether through military or surveillance applications, both foreign and domestic," added the worker, who requested anonymity because of fear of retaliation. "By unionizing, we are taking the traditional route for workers to organize and have a say." Another worker, who also requested anonymity, said that many at the company had struggled with what they had come to view as their complicity in Israel's war in Gaza. The company provided the Israeli military with increased access to its AI tools from the early days of the war in Gaza, the Washington Post reported last year, and in 2021, it signed, along with Amazon, a $1.2bn cloud-computing contract with the Israeli government. "Our technology helped the IDF," said the second UK worker, referring to Israel's military. "I want AI to benefit humanity, not to facilitate a genocide." Google did not respond to a request for comment. 
Concerns among Google workers and investors have been mounting for years but have particularly escalated after the company last year dropped a pledge not to develop militarized AI. That development was a driving motivation for the union effort among Google DeepMind workers in the UK, two of them said. While small groups of Google employees have unionized in the US before, the UK workers are the first in a "frontier" AI lab to seek union recognition, they said. Google DeepMind is headquartered in London but has about a dozen offices across North America and Europe. At least 1,000 workers will be represented if the company recognizes the union, according to union officials. On Friday, the Pentagon confirmed it had reached agreements with seven leading AI companies, Google among them. Others included SpaceX, OpenAI, Nvidia, Reflection, Microsoft and Amazon Web Services. Anthropic, whose technology is in wide use by the US military but which has sparred with the Pentagon over future contracts, was notably absent from the group. "These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters' ability to maintain decision superiority across all domains of warfare," defense department officials said in a statement. The Trump administration has pushed AI companies to make their tools available on classified networks without the standard restrictions they apply to users. Google's contract with the Pentagon reportedly includes language stating: "The parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control." But that language is non-binding, and the agreement also says Google has no right to control or veto "lawful" government operational decision-making. 
Workers who voted to join the union said they did so to raise pressure on Google to meet demands already made by other employees at the company, including that it commit not to develop technology "whose primary purpose is to cause harm or injury to people", establish an independent ethics oversight body, and grant workers the individual right to refuse to contribute to projects on moral grounds. Should the company refuse, they said, they are considering protests and "research strikes", during which staff abstain from work expected to significantly improve core products such as Gemini, Google's AI bot, while avoiding detection by continuing to perform less significant updates. Workers across Google have been increasingly vocal about their opposition to militarized applications of their technology. Last week, amid reports of the pending deal, more than 600 Google employees signed an open letter to CEO Sundar Pichai demanding the company not make its AI systems available for classified use. "We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways," they wrote. "Making the wrong call right now would cause irreparable damage to Google's reputation, business, and role in the world." Tech workers have increasingly challenged management over the use of the technology they have helped develop. In 2024, Google fired 50 workers who had protested against Project Nimbus, the 2021 contract with the Israeli government. At Microsoft, which the Guardian revealed supplied Israel with cloud storage used in the mass surveillance of Palestinians, workers occupied a company campus with signs reading "No Labor for Genocide". (The company terminated the Israeli military's access to that technology after the Guardian's reporting.) Investors have also raised concerns. 
A coalition of shareholders who own about $2.2bn of Alphabet's shares wrote a letter to Google's parent company last week demanding a meeting and greater transparency about Google Cloud and AI deployments in "high-risk" contexts. They cited concerns about the company providing services to US immigration authorities, as well as Project Nimbus, and raised questions about "the effectiveness of policy guardrails, internal escalation processes, and Board oversight of AI deployments in conflict-affected or security-sensitive environments". In 2018, Google also dealt with widespread employee protests over a military contract known as Project Maven, in which the company agreed to build AI products for the Pentagon's analysis of drone footage. In response to the backlash, the company did not renew the contract in 2019 and published a set of principles for its work on AI that included the pledge, now dropped, not to design AI for weapons. Palantir took over Project Maven, which continues today.
[7]
Google DeepMind workers in the U.K. vote to unionize over military AI contracts amid internal backlash over its Pentagon deal | Fortune
Google's UK-based DeepMind workers have launched a bid to form what would be the world's first union at a frontier AI lab. The move follows a controversial deal Google inked with the Pentagon, sparking a wave of internal backlash over the company's military contracts. Last week, Google agreed to let the U.S. Department of Defense use its Gemini AI models inside classified military networks for "any lawful purpose," a deal critics say could open the door to autonomous weapons and mass surveillance of American citizens with few enforceable limits. Google is not the only leading AI lab to sign such a deal -- OpenAI, xAI, Nvidia, Microsoft, and Amazon have all agreed to similar contracts. Only Anthropic has refused, resulting in the Pentagon ordering the military and all defense contractors to stop using its products and labeling it a "supply chain risk," a designation Anthropic is challenging in court. Within Google, the deal has kicked off internal protests, with more than 600 Google employees signing an open letter opposing the deal, and several employees criticizing the agreement in the press and on social media. Now, employees are seeking to force an end to Google AI being used by the U.S. Department of Defense as well as the Israeli military, according to a statement from the Communication Workers Union, which is representing the DeepMind workers. The employees are also requesting the reinstatement of a previous company commitment -- originally published following employee uproar over Project Maven in 2018 but quietly removed from Google's public website in February 2025 -- not to develop AI for weapons or surveillance that violates internationally accepted norms. In addition, the workers are asking for an independent ethics oversight body and the individual right to refuse to contribute to projects on moral grounds. 
The union attempt is part of a wider campaign against Google's military contracts that, according to the Communication Workers Union (CWU), includes in-person protests and research strikes, in which employees would abstain from work on core products such as the Gemini AI assistant. One Google DeepMind employee with knowledge of the union bid, who asked for anonymity to speak freely about their employer, told Fortune: "Hopefully this will help employees help the DeepMind and Google leadership grow a spine when it comes to standing up to what they have preached and publicly endorsed as our values and principles for the last two decades." Workers below VP level backed the union vote by 98%, according to the CWU, and have asked Google to recognize the CWU and Unite the Union as their official representatives. The unionization bid would cover at least 1,000 staff tied to DeepMind's London office. Representatives for Google did not respond to Fortune's request for comment by press time. "By exercising their rights to collectivise, they are in a strong position to demand their employer stop circling the ethical drain of military-industrial contracts, echoing the sentiment of many working people in the UK and elsewhere," John Chadfield, CWU national officer for tech workers, said. The move is an attempt to claw back some of the leverage Google employees have enjoyed in the past. In 2018, thousands of employees signed a petition and several resigned over Project Maven, eventually forcing Google to abandon the contract. But that leverage has since eroded, according to former and current employees who previously spoke to Fortune. Cost-cutting, AI spending, and layoffs across the tech sector have weakened bargaining power, they said. "One of the things we can look at through unionization is restoring that leverage," another DeepMind researcher told Fortune. 
"If we can manage to get a seat at the table, whether that's in the ethics review, the AI review, deployments, or even on the Alphabet board, that's where we could restore leverage." "In general, I don't think that leverage has ever been very direct; it's always been pointing out the problem, and making the cost to continue these controversial projects high enough that they are not worth it," they added. The workers' letter gave Google management 10 working days to voluntarily recognise the CWU and Unite -- or to agree to mediated negotiations -- before a formal legal process is launched to compel recognition.
[8]
Google DeepMind UK staff move to unionise over military AI use - The Economic Times
Google DeepMind workers in the UK have sought official union recognition. They are concerned about the company's artificial intelligence technology being used by the US and Israeli militaries. Employees are demanding an end to such applications and the restoration of a commitment against creating AI weapons or surveillance tools.
UK employees at Google's AI lab DeepMind on Tuesday requested official recognition of two unions, as concern grows over the company's technology being used by the US and Israeli militaries. Workers requested recognition of the Communication Workers Union and Unite in a letter to management shared by the CWU on Tuesday. "The unionising DeepMind workers are seeking an end to use of Google AI by Israel and the US military," the CWU said in a statement. The move came after the Pentagon last week announced agreements with Google and six other AI companies to deploy their technology on classified military networks. Prior to the announcement, more than 600 Google employees urged the company not to sign the deal because of concerns the tools could be leveraged by the military to cause harm. Demands from the unionising UK employees include "restoring a scrapped commitment not to make AI weapons or surveillance tools". The letter also requests "the creation of an independent ethics oversight body and the individual right to refuse to contribute to projects on moral grounds". The letter gives management 10 working days to voluntarily recognise the unions, or organisers will begin formal legal proceedings. "Google staff worry how the technology will be used given the deal could reportedly open the door to autonomous weapons and mass surveillance of Americans," the CWU said. In 2018, an employee movement successfully pushed Google to abandon Project Maven, a Pentagon program to integrate AI into drone operations. 
But in recent years Google has embarked on a strategy shift, steadily rebuilding its military business and competing with rivals Amazon Web Services and Microsoft for defence cloud contracts.
[9]
Google DeepMind UK staff move to unionise over military AI use
(Alliance News) - UK employees at Alphabet Inc company Google's AI lab DeepMind on Tuesday requested official recognition of two unions, as concern grows over the company's technology being used by the US and Israeli militaries. Workers requested recognition of the Communication Workers Union and Unite in a letter to management shared by the CWU on Tuesday. "The unionising DeepMind workers are seeking an end to use of Google AI by Israel and the US military," the CWU said in a statement. The move came after the Pentagon last week announced agreements with Google and six other AI companies to deploy their technology on classified military networks. Prior to the announcement, more than 600 Google employees urged the company not to sign the deal because of concerns the tools could be leveraged by the military to cause harm. Demands from the unionising UK employees include "restoring a scrapped commitment not to make AI weapons or surveillance tools". It also requests "the creation of an independent ethics oversight body and the individual right to refuse to contribute to projects on moral grounds". The letter gives management 10 working days to voluntarily recognise the unions, or organisers would take formal legal proceedings. "Google staff worry how the technology will be used given the deal could reportedly open the door to autonomous weapons and mass surveillance of Americans," the CWU said. A Google DeepMind spokesperson confirmed to AFP that it had recently received a letter from the CWU and Unite "requesting recognition" for UK employees. "At this stage in the process, there has been no vote to unionise. We have always valued constructive dialogue with employees and we'll remain focused on creating a positive and successful workplace," they added. In 2018, an employee movement successfully pushed Google to abandon Project Maven, a Pentagon program to integrate AI into drone operations.
But in recent years Google has embarked on a strategy shift, steadily rebuilding its military business and competing with rivals Amazon.com Inc's Amazon Web Services and Microsoft Corp for defence cloud contracts.
Employees at Google DeepMind's London headquarters voted to unionize with 98% support, driven by concerns over the company's classified Pentagon AI deal and contracts with Israeli and US military forces. The workers are demanding stronger ethical standards on AI, an independent oversight body, and the right to refuse involvement in projects on moral grounds.
Employees at Google DeepMind in London have voted to unionize, marking a significant moment in the AI industry's ongoing struggle over ethical boundaries and military partnerships. The workers sent a letter to Google's UK and Ireland managing director Debbie Weinstein, requesting recognition of the Communication Workers Union and Unite the Union as joint representatives for DeepMind staff [1]. The vote, which took place in April 2025, saw 98 percent of CWU members at DeepMind cast ballots in favor of unionization [2]. This effort represents the first attempt anywhere to unionize workers at a frontier AI laboratory, driven not by traditional labor concerns like wages or benefits but by deep-seated objections to how their research is being deployed [4].
The push for unionization began in February 2025, when Google's parent company Alphabet removed a pledge not to use AI for purposes like weapons development and surveillance from its ethics guidelines [1]. The situation intensified when reports emerged that Google was close to reaching a classified Pentagon AI deal with the US Department of Defense. Last week, the Pentagon confirmed it had signed deals with seven leading AI companies, including Google, SpaceX, OpenAI, and Microsoft, allowing it to use their AI models on classified networks for "any lawful government purpose" [1]. More than 600 Google employees, including 20 directors and vice presidents, signed a letter protesting the deal before it was finalized [4]. A further 100 DeepMind employees separately signed an internal letter demanding that no DeepMind research or models be used for weapons development or autonomous targeting [4].
The union demands extend beyond the Pentagon deal. Workers are calling for Google to end contracts that allow its technology to be used by the Israeli and US militaries, restore its commitments against building AI weapons or surveillance technology, and establish an independent ethics oversight body [4]. They also want the right to refuse involvement in projects on moral grounds, along with stronger whistleblower protections [5]. "We don't want our AI models complicit in violations of international law, but they already are aiding Israel's genocide of Palestinians," an unnamed DeepMind employee stated [2]. Google has maintained a $1.2 billion cloud computing contract with the Israeli government since 2021 and has reportedly expanded the Israel Defense Forces' access to its AI tools [3].
The current situation stands in stark contrast to the Project Maven protest of 2018, when 4,000 Google employees successfully pressured the company to abandon a Pentagon contract that used AI to analyze drone surveillance footage [4]. Following that episode, Google published AI principles and built an ethics team. The company has since fired the leaders of that ethics team, however, and removed explicit pledges against weapons development from its published principles [4]. "Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with," explained John Chadfield, national officer for technology at the CWU [1]. The classified deal gives the Pentagon access to Google's AI models on air-gapped networks where the company cannot monitor queries, outputs, or decisions [4].

The unionization effort reflects broader tensions across the AI industry. In late February, staff at DeepMind and OpenAI signed an open letter supporting Anthropic after the US Department of Defense designated the lab a supply chain risk over its refusal to allow its AI to be used in autonomous weapons or for mass surveillance [1]. Anthropic has maintained its ethical limits, insisting its models not be used for autonomous weapons or mass domestic surveillance, while Google's deal is reportedly more permissive than OpenAI's, which retains "full discretion" over safety mechanisms [4]. The Pentagon retaliated against Anthropic by ordering the military to stop using its products and signing deals with seven other companies that agreed to terms Anthropic had rejected [4]. Google defended its position, stating that it remains "committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight" [5]. Workers note, however, that the contracts include only advisory guardrails, and that on classified networks there is no independent verification that any guardrail is honored [4]. The CWU hopes the unionization effort will inspire similar action at other frontier labs; both Anthropic and OpenAI have announced large-scale expansions in London since the start of the year [1].