8 Sources
[1]
Microsoft says its Azure and AI tech hasn't harmed people in Gaza
Microsoft says it has found no evidence that the Israeli military has used its Azure and AI technology to harm Palestinian civilians or anyone else in Gaza. The software maker says it "conducted an internal review and engaged an external firm" to perform additional fact-finding after some Microsoft employees repeatedly called on the company to cut its contracts with the Israeli government. Microsoft says that its relationship with the Israel Ministry of Defense (IMOD) is "structured as a standard commercial relationship," and that it has "found no evidence that Microsoft's Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD has failed to comply with our terms of service or our AI Code of Conduct." Microsoft's AI Code of Conduct requires that customers use human oversight and access controls to ensure cloud and AI services don't inflict harm "in any way that is prohibited by law."
[2]
Microsoft: Our Tech Isn't Being Used to Hurt Civilians in Gaza
Microsoft claims it has found "no evidence" that its AI technologies or its cloud computing service Microsoft Azure have been used to target or harm civilians during the ongoing conflict in Gaza. In an official statement, the tech giant says it conducted an internal review into the issue and engaged an external firm (which it did not name) to undertake additional fact-finding. Microsoft says the review process included interviewing dozens of employees and assessing military documents. Microsoft did confirm that it provides Israel's Ministry of Defense (IMOD) with software, professional services, Azure cloud services, and Azure AI services such as language translation, as well as cybersecurity support, but denied these technologies were being used to target civilians. However, Microsoft pointed out that it "does not have visibility into how customers use our software on their own servers or other devices," and that it does not have "visibility into the IMOD's government cloud operations," which use other providers. "By definition, our reviews do not cover these situations," said a Microsoft spokesperson. The statement is unlikely to silence Microsoft's harshest critics on the issue. In May, the company axed two employees who disrupted its 50th-anniversary event to protest the use of its tech by Israel. Meanwhile, investigations by outlets like The Associated Press have alleged that commercially available AI models produced by Microsoft and OpenAI were used to select bombing targets in Gaza and Lebanon. The report noted that the Israeli military's usage of Microsoft and OpenAI artificial intelligence in March 2024 was nearly 200 times higher than before the Oct. 7 attack, citing internal company information shared with AP. Hossam Nasr, an organizer of No Azure for Apartheid, criticized the validity of Microsoft's statement in an interview with GeekWire earlier this week, saying it was "filled with both lies and contradictions." 
Nasr, a former Microsoft employee, said the company claims "that their technology is not being used to harm people in Gaza," but highlighted its admission that "they don't have insight into how their technologies are being used." Microsoft isn't the only Big Tech firm contending with allegations from staff that it is supporting harm to civilians. In 2024, Google axed 28 employees who participated in an office sit-in protest against the search giant's role in Project Nimbus, a $1.2 billion cloud contract between Google, Amazon, and Israel's government and military.
[3]
Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza
WASHINGTON (AP) -- Microsoft acknowledged Thursday that it sold advanced artificial intelligence and cloud computing services to the Israeli military during the war in Gaza and aided in efforts to locate and rescue Israeli hostages. But the company also said it has found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza. The unsigned blog post on Microsoft's corporate website appears to be the company's first public acknowledgement of its deep involvement in the war, which started after Hamas killed about 1,200 people in Israel and has led to the deaths of tens of thousands in Gaza. It comes nearly three months after an investigation by The Associated Press revealed previously unreported details about the American tech giant's close partnership with the Israeli Ministry of Defense, with military use of commercial AI products skyrocketing by nearly 200 times after the deadly Oct. 7, 2023, Hamas attack. The AP reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel's in-house AI-enabled targeting systems and vice versa. The partnership reflects a growing drive by tech companies to sell their artificial intelligence products to militaries for a wide range of uses, including in Israel, Ukraine and the United States. However, human rights groups have raised concerns that AI systems, which can be flawed and prone to errors, are being used to help make decisions about who or what to target, resulting in the deaths of innocent people. Microsoft said Thursday that employee concerns and media reports had prompted the company to launch an internal review and hire an external firm to undertake "additional fact-finding." The statement did not identify the outside firm or provide a copy of its report. 
The statement also did not directly address several questions about precisely how the Israeli military is using its technologies, and the company declined Friday to comment further. Microsoft declined to answer written questions from The AP about how its AI models helped translate, sort and analyze intelligence used by the military to select targets for airstrikes. The company's statement said it had provided the Israeli military with software, professional services, Azure cloud storage and Azure AI services, including language translation, and had worked with the Israeli government to protect its national cyberspace against external threats. Microsoft said it had also provided "special access to our technologies beyond the terms of our commercial agreements" and "limited emergency support" to Israel as part of the effort to help rescue the more than 250 hostages taken by Hamas on Oct. 7. "We provided this help with significant oversight and on a limited basis, including approval of some requests and denial of others," Microsoft said. "We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza." The company did not answer whether it or the outside firm it hired communicated or consulted with the Israeli military as part of its internal probe. It also did not respond to requests for additional details about the special assistance it provided to the Israeli military to recover hostages or the specific steps to safeguard the rights and privacy of Palestinians. In its statement, the company also conceded that it "does not have visibility into how customers use our software on their own servers or other devices." The company added that it could not know how its products might be used through other commercial cloud providers. 
In addition to Microsoft, the Israeli military has extensive contracts for cloud or AI services with Google, Amazon, Palantir and several other major American tech firms. Microsoft said the Israeli military, like any other customer, was bound to follow the company's Acceptable Use Policy and AI Code of Conduct, which prohibit the use of products to inflict harm in any way prohibited by law. In its statement, the company said it had found "no evidence" the Israeli military had violated those terms. Emelia Probasco, a senior fellow at Georgetown University's Center for Security and Emerging Technology, said the statement is noteworthy because few commercial technology companies have so clearly laid out standards for working globally with international governments. "We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict," she said. "It's like a tank manufacturer telling a country you can only use our tanks for these specific reasons. That is a new world." Israel has used its vast trove of intelligence to both target Islamic militants and conduct raids into Gaza seeking to rescue hostages, with civilians often caught in the crossfire. For example, a February 2024 operation that freed two Israeli hostages in Rafah resulted in the deaths of 60 Palestinians. A June 2024 raid in the Nuseirat refugee camp freed four Israeli hostages from Hamas captivity but resulted in the deaths of at least 274 Palestinians. Overall, Israel's invasions and extensive bombing campaigns in Gaza and Lebanon have resulted in the deaths of more than 50,000 people, many of them women and children. No Azure for Apartheid, a group of current and former Microsoft employees, called on Friday for the company to publicly release a full copy of the investigative report.
"It's very clear that their intention with this statement is not to actually address their worker concerns, but rather to make a PR stunt to whitewash their image that has been tarnished by their relationship with the Israeli military," said Hossam Nasr, a former Microsoft worker fired in October after he helped organize an unauthorized vigil at the company's headquarters for Palestinians killed in Gaza. Cindy Cohn, executive director of the Electronic Frontier Foundation, applauded Microsoft Friday for taking a step toward transparency. But she said the statement raised many unanswered questions, including details about how Microsoft's services and AI models were being used by the Israeli military on its own government servers. "I'm glad there's a little bit of transparency here," said Cohn, who has long called on U.S. tech giants to be more open about their military contracts. "But it is hard to square that with what's actually happening on the ground." ___ Burke reported from San Francisco and Mednick from Jerusalem. The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
[4]
Microsoft Says There's 'No Evidence' Its Azure and AI Models Have Harmed People in Gaza
But even in a statement to clear its name, Microsoft acknowledged it can't know the full extent of Israel's use of its technologies. Once again, Big Tech is under scrutiny for its role in Israel's genocide against Palestinians. Recently, Microsoft's sale of artificial intelligence models and cloud computing services to the Israeli military prompted a series of worker-led protests. But now, Microsoft says there's no evidence that its products have been used to harm people in Gaza. At least not as far as Microsoft can examine. On Thursday, Microsoft announced that it conducted internal and external reviews into the Israel Ministry of Defense's use of its products, writing, "We take these concerns seriously." The company went on to add that it "found no evidence to date that Microsoft's Azure and AI technologies have been used to target or harm people in the conflict in Gaza." Microsoft did not clarify which company it contracted with for the external review. Nor did it provide details about the process beyond stating that it included "interviewing dozens of employees and assessing documents." However, the company added that its reviews are limited. It doesn't have visibility into how software is used on private servers or on systems outside of its cloud. Tensions at Microsoft have risen since a February report revealed the extent of its $133 million contract with Israel. According to AP News, Israel's use of Microsoft and OpenAI technology increased nearly 200 times after Palestinian resistance groups in Gaza launched an attack on Israel on Oct. 7, 2023. The military specifically uses Microsoft's cloud platform, Azure, to compile information obtained through mass surveillance, like phone calls or texts, which the system transcribes and translates. Overall, it stores over 13.6 petabytes of data on Microsoft servers, which, AP News explained, is about 350 times more than what's needed to store the entire Library of Congress.
Last year, Microsoft fired two employees for organizing an "unauthorized" vigil in memory of Palestinians killed in Gaza. In February, Microsoft also kicked five employees out of a town hall meeting for protesting its Israeli contracts. Then last month, Ibtihal Aboussad, a software engineer on Microsoft's AI Platform team, interrupted the company's head of AI during a 50th anniversary celebration. "Shame on you," Aboussad said. "You are a war profiteer. Stop using AI for genocide. Stop using AI for genocide in our region. You have blood on your hands. All of Microsoft has blood on its hands. How dare you all celebrate when Microsoft is killing children. Shame on you all." The Verge reported that Aboussad also sent an email to distribution lists containing hundreds of thousands of Microsoft employees. She wrote, "Microsoft cloud and AI enabled the Israeli military to be more lethal and destructive in Gaza than they otherwise could," and urged people to sign the No Azure for Apartheid petition, stating, "We will not write code that kills." The company's post comes only a week ahead of a Seattle conference where No Azure for Apartheid intends to protest. In its blog, Microsoft also said that the Israeli military is bound to its conditions of use, which "require customers to implement responsible AI practices" and "prohibits" using its technologies "in any manner that inflicts harm on individuals or organizations or affects individuals in any way that is prohibited by law." That assurance falls flat when considering Israel's track record. Last year, a group of independent human rights experts said that "Israel has openly defied international law time and again, inflicting maximum suffering on civilians in the occupied Palestinian territory and beyond." 
That includes murder, torture, sexual violence, forced displacement, bombing vital institutions like hospitals, targeting healthcare workers, journalists, and humanitarian workers, and purposefully destroying food systems as a method of war, per a breakdown by the United Nations' Office of the High Commissioner for Human Rights. Plus, Israel is committing genocide, a crime under international law. The definition put forth in the Genocide Convention includes specific actions "committed with intent to destroy, in whole or in part, a national, ethnical, racial, or religious group." Last month, the Gaza Health Ministry reported that deaths in the region now exceed 50,000. In addition, a Reuters analysis found that Israel completely eliminated at least 1,200 families. Although some argue that Israel's actions don't meet the condition of "intent to destroy," its response to the Oct. 7 attacks killed over 5,000 people in the first week, extreme violence that caused a major shift for many experts. South Africa formally brought genocide charges against Israel last year. Big Tech has provided Israel with support for years, like with Google and Amazon's Project Nimbus. Microsoft can try to downplay its role by saying that its technology wasn't directly used for harm. Even if that is true, its technologies make it possible for the Israeli military to expand its destruction of Palestine and its people.
[5]
Microsoft's Build conference interrupted by renewed protests over its ties with the Israeli military
Protesters claim Microsoft's Azure and AI technologies are being used to support the ongoing assaults against Gaza. Microsoft is facing more pushback over its dealings with the Israeli military, as The Verge reports that the company's Build developer conference has been interrupted twice by protesters. The first incident occurred shortly after the start of CEO Satya Nadella's keynote on May 19, when firmware engineer and No Azure for Apartheid (NOAA) organizer Joe Lopez interrupted Nadella's speech to demand he "show them how Microsoft is killing Palestinians," according to a statement released by NOAA. "How about you show them how Israeli war crimes are powered by Azure?" Almost immediately after, a second protester, a former Google employee, disrupted Nadella's address again to say that "all tech workers should know that big tech is complicit in the Israeli genocide against Palestinians." After the protesters were removed from the hall, Lopez sent an email to Microsoft employees accusing company leadership of lying about the role played by Azure in the Israeli military's ongoing assault on Gaza. "Leadership rejects our claims that Azure technology is being used to target or harm civilians in Gaza," Lopez wrote. "Those of us who have been paying attention know that this is a bold-faced lie. Every byte of data that is stored on the cloud (much of it likely containing data obtained by illegal mass surveillance) can and will be used as justification to level cities and exterminate Palestinians." The second protest occurred on May 20 during an address by Jay Parikh, head of Microsoft's CoreAI. An unidentified Palestinian tech worker interrupted Parikh to say "My people are suffering" and call on Microsoft to "cut ties" with Israel: "No Azure for apartheid! Free, free Palestine!" The group, which also held protests outside the event, later confirmed that it had assisted the unnamed worker's protest.
The protests come less than two months after former Microsoft employees Ibtihal Aboussad and Vaniya Agrawal interrupted the company's 50th anniversary event to protest its work with the Israeli military in Gaza. They also happened less than a week after Microsoft issued a statement, referenced by Lopez in his company-wide email, absolving itself of any culpability in Israel's relentless attacks on Gaza. "Based on our review, including both our internal assessments and external review, we have found no evidence that Microsoft's Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD [Israel's Ministry of Defense] has failed to comply with our terms of service or our AI Code of Conduct," Microsoft said in its statement. It is hard to square the claim that IMOD does not use Azure in a way that harms people with the news from Gaza, where more than 53,000 people are believed to have been killed since Israel's attacks began in October 2023 in response to a Hamas assault on Israel, although some estimates put the death toll much higher. Microsoft gave itself a little wiggle room on the 'we investigated ourselves and found no wrongdoing' front: "It is important to acknowledge that Microsoft does not have visibility into how customers use our software on their own servers or other devices. This is typically the case for on premise software. Nor do we have visibility to the IMOD's government cloud operations, which are supported through contracts with cloud providers other than Microsoft. By definition, our reviews do not cover these situations."
Along with these in-person protests at high-profile Microsoft events, the company has also been targeted by the BDS (Boycott, Divestment, and Sanctions) Movement, which claims Microsoft "knowingly provides Israel with technology, including artificial intelligence (AI), that is deployed to facilitate grave human rights violations, war crimes, crimes against humanity (including apartheid), as well as genocide." "Microsoft provides the Israeli military with Azure cloud and AI services that are crucial in empowering and accelerating Israel's genocidal war on 2.3 million Palestinians in the illegally occupied Gaza Strip," the BDS website states. "Microsoft's extensive ties with Israel's military are revealed in investigations by The Guardian with the Israeli-Palestinian publication +972 Magazine, demonstrating how the Israeli military turned to Microsoft to meet the technological demands of genocide." Microsoft hasn't yet issued any sort of statement on the incidents, but Aboussad and Agrawal, the employees who staged protests against the company's Israeli entanglements in April, were both fired in short order, and I'll be very surprised if we don't see a similar outcome in response to these protests. I've reached out for comment and will update if I receive a reply.
[6]
Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza
WASHINGTON (AP) -- Microsoft acknowledged Thursday that it sold advanced artificial intelligence and cloud computing services to the Israeli military during the war in Gaza and aided in efforts to locate and rescue Israeli hostages. But the company also said it has found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza. The unsigned blog post on Microsoft's corporate website appears to be the company's first public acknowledgement of its deep involvement in the war, which started after Hamas killed about 1,200 people in Israel and has led to the deaths of tens of thousands in Gaza. It comes nearly three months after an investigation by The Associated Press revealed previously unreported details about the American tech giant's close partnership with the Israeli Ministry of Defense, with military use of commercial AI products skyrocketing by nearly 200 times after the deadly Oct. 7, 2023, Hamas attack. The AP reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel's in-house AI-enabled targeting systems and vice versa. The partnership reflects a growing drive by tech companies to sell their artificial intelligence products to militaries for a wide range of uses, including in Israel, Ukraine and the United States. However, human rights groups have raised concerns that AI systems, which can be flawed and prone to errors, are being used to help make decisions about who or what to target, resulting in the deaths of innocent people. Microsoft said Thursday that employee concerns and media reports had prompted the company to launch an internal review and hire an external firm to undertake "additional fact-finding." The statement did not identify the outside firm or provide a copy of its report. 
The statement also did not directly address several questions about precisely how the Israeli military is using its technologies, and the company declined Friday to comment further. Microsoft declined to answer written questions from The AP about how its AI models helped translate, sort and analyze intelligence used by the military to select targets for airstrikes. The company's statement said it had provided the Israeli military with software, professional services, Azure cloud storage and Azure AI services, including language translation, and had worked with the Israeli government to protect its national cyberspace against external threats. Microsoft said it had also provided "special access to our technologies beyond the terms of our commercial agreements" and "limited emergency support" to Israel as part of the effort to help rescue the more than 250 hostages taken by Hamas on Oct. 7. "We provided this help with significant oversight and on a limited basis, including approval of some requests and denial of others," Microsoft said. "We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza." The company did not answer whether it or the outside firm it hired communicated or consulted with the Israeli military as part of its internal probe. It also did not respond to requests for additional details about the special assistance it provided to the Israeli military to recover hostages or the specific steps to safeguard the rights and privacy of Palestinians. In its statement, the company also conceded that it "does not have visibility into how customers use our software on their own servers or other devices." The company added that it could not know how its products might be used through other commercial cloud providers. 
In addition to Microsoft, the Israeli military has extensive contracts for cloud or AI services with Google, Amazon, Palantir and several other major American tech firms. Microsoft said the Israeli military, like any other customer, was bound to follow the company's Acceptable Use Policy and AI Code of Conduct, which prohibit the use of products to inflict harm in any way prohibited by law. In its statement, the company said it had found "no evidence" the Israeli military had violated those terms. Emelia Probasco, a senior fellow for the Center for Security and Emerging Technology at Georgetown University, said the statement is noteworthy because few commercial technology companies have so clearly laid out standards for working globally with international governments. "We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict," she said. "It's like a tank manufacturer telling a country you can only use our tanks for these specific reasons. That is a new world." Israel has used its vast trove of intelligence to both target Islamic militants and conduct raids into Gaza seeking to rescue hostages, with civilians often caught in the crossfire. For example, a February 2024 operation that freed two Israeli hostages in Rafah resulted in the deaths of 60 Palestinians. A June 2024 raid in the Nuseirat refugee camp freed four Israeli hostages from Hamas captivity but resulted in the deaths of at least 274 Palestinians. Overall, Israel's invasions and extensive bombing campaigns in Gaza and Lebanon have resulted in the deaths of more than 50,000 people, many of them women and children. No Azure for Apartheid, a group of current and former Microsoft employees, called on Friday for the company to publicly release a full copy of the investigative report. 
"It's very clear that their intention with this statement is not to actually address their worker concerns, but rather to make a PR stunt to whitewash their image that has been tarnished by their relationship with the Israeli military," said Hossam Nasr, a former Microsoft worker fired in October after he helped organize an unauthorized vigil at the company's headquarters for Palestinians killed in Gaza. Cindy Cohn, executive director of the Electronic Frontier Foundation, applauded Microsoft Friday for taking a step toward transparency. But she said the statement raised many unanswered questions, including details about how Microsoft's services and AI models were being used by the Israeli military on its own government servers. "I'm glad there's a little bit of transparency here," said Cohn, who has long called on U.S. tech giants to be more open about their military contracts. "But it is hard to square that with what's actually happening on the ground." ___ Burke reported from San Francisco and Mednick from Jerusalem. ___ Contact AP's global investigative team at Investigative@ap.org or https://www.ap.org/tips/ ___ The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
[7]
Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza
Microsoft acknowledged Thursday that it sold advanced artificial intelligence and cloud computing services to the Israeli military during the war in Gaza and aided in efforts to locate and rescue Israeli hostages. But the company also said it has found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza. The unsigned blog post on Microsoft's corporate website appears to be the company's first public acknowledgement of its deep involvement in the war, which started after Hamas killed about 1,200 people in Israel and has led to the deaths of tens of thousands in Gaza. It comes nearly three months after an investigation by The Associated Press revealed previously unreported details about the American tech giant's close partnership with the Israeli Ministry of Defense, with military use of commercial AI products skyrocketing by nearly 200 times after the deadly Oct. 7, 2023, Hamas attack. The AP reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel's in-house AI-enabled targeting systems and vice versa. The partnership reflects a growing drive by tech companies to sell their artificial intelligence products to militaries for a wide range of uses, including in Israel, Ukraine and the United States. However, human rights groups have raised concerns that AI systems, which can be flawed and prone to errors, are being used to help make decisions about who or what to target, resulting in the deaths of innocent people. Microsoft said Thursday that employee concerns and media reports had prompted the company to launch an internal review and hire an external firm to undertake "additional fact-finding." The statement did not identify the outside firm or provide a copy of its report. 
The statement also did not directly address several questions about precisely how the Israeli military is using its technologies, and the company declined Friday to comment further. Microsoft declined to answer written questions from The AP about how its AI models helped translate, sort and analyze intelligence used by the military to select targets for airstrikes. The company's statement said it had provided the Israeli military with software, professional services, Azure cloud storage and Azure AI services, including language translation, and had worked with the Israeli government to protect its national cyberspace against external threats. Microsoft said it had also provided "special access to our technologies beyond the terms of our commercial agreements" and "limited emergency support" to Israel as part of the effort to help rescue the more than 250 hostages taken by Hamas on Oct. 7. "We provided this help with significant oversight and on a limited basis, including approval of some requests and denial of others," Microsoft said. "We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza." The company did not answer whether it or the outside firm it hired communicated or consulted with the Israeli military as part of its internal probe. It also did not respond to requests for additional details about the special assistance it provided to the Israeli military to recover hostages or the specific steps to safeguard the rights and privacy of Palestinians. In its statement, the company also conceded that it "does not have visibility into how customers use our software on their own servers or other devices." The company added that it could not know how its products might be used through other commercial cloud providers. 
In addition to Microsoft, the Israeli military has extensive contracts for cloud or AI services with Google, Amazon, Palantir and several other major American tech firms. Microsoft said the Israeli military, like any other customer, was bound to follow the company's Acceptable Use Policy and AI Code of Conduct, which prohibit the use of products to inflict harm in any way prohibited by law. In its statement, the company said it had found "no evidence" the Israeli military had violated those terms. Emelia Probasco, a senior fellow for the Center for Security and Emerging Technology at Georgetown University, said the statement is noteworthy because few commercial technology companies have so clearly laid out standards for working globally with international governments. "We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict," she said. "It's like a tank manufacturer telling a country you can only use our tanks for these specific reasons. That is a new world." Israel has used its vast trove of intelligence to both target Islamic militants and conduct raids into Gaza seeking to rescue hostages, with civilians often caught in the crossfire. For example, a February 2024 operation that freed two Israeli hostages in Rafah resulted in the deaths of 60 Palestinians. A June 2024 raid in the Nuseirat refugee camp freed four Israeli hostages from Hamas captivity but resulted in the deaths of at least 274 Palestinians. Overall, Israel's invasions and extensive bombing campaigns in Gaza and Lebanon have resulted in the deaths of more than 50,000 people, many of them women and children. No Azure for Apartheid, a group of current and former Microsoft employees, called on Friday for the company to publicly release a full copy of the investigative report. 
"It's very clear that their intention with this statement is not to actually address their worker concerns, but rather to make a PR stunt to whitewash their image that has been tarnished by their relationship with the Israeli military," said Hossam Nasr, a former Microsoft worker fired in October after he helped organize an unauthorized vigil at the company's headquarters for Palestinians killed in Gaza.

Cindy Cohn, executive director of the Electronic Frontier Foundation, applauded Microsoft Friday for taking a step toward transparency. But she said the statement raised many unanswered questions, including details about how Microsoft's services and AI models were being used by the Israeli military on its own government servers.

"I'm glad there's a little bit of transparency here," said Cohn, who has long called on U.S. tech giants to be more open about their military contracts. "But it is hard to square that with what's actually happening on the ground."
[8]
Former Microsoft engineer Vaniya Agrawal continues anti-Israel protests by disrupting Build 2025 AI event
Tensions over Microsoft's controversial Israel cloud agreements are boiling over once again, this time inside its own developer conference halls. On day three of the Build 2025 event in Seattle, the company was forced to pause an AI security session as former employees Vaniya Agrawal and Hossam Nasr confronted executives on stage.

The disruption came during a high-profile presentation led by Neta Haiby, Microsoft's Head of AI Security, and Sarah Bird, the company's Head of Responsible AI. As the session progressed, Agrawal and Nasr loudly interrupted, accusing the company of complicity in the war in Gaza and demanding Microsoft sever ties with the Israeli military.

The episode marked the latest in a string of escalating protests targeting Microsoft's ongoing cloud services agreement with Israel, reportedly worth $133 million. This year's Build 2025, intended to showcase Microsoft's innovations in artificial intelligence and cloud computing, has instead become a flashpoint for internal dissent and activism. On May 19, a Microsoft employee interrupted CEO Satya Nadella's keynote address with a protest. A day later, a Palestinian tech professional disrupted an Azure AI session led by Jay Parikh, Executive Vice President of CoreAI, calling for Microsoft to "cut ties" with Israel.

Agrawal's protest, however, carries added weight because of her public resignation and repeated activism. She first made headlines in April when she stormed Microsoft's 50th anniversary celebration at its Redmond headquarters, confronting Bill Gates, Steve Ballmer, and Nadella in person. Her actions led to her dismissal from the company. Agrawal had joined Microsoft in September 2023 after spending over three years at Amazon.

At Microsoft, she worked in the AI division until she submitted a fiery resignation letter denouncing what she called Microsoft's role in enabling "genocide in the Gaza Strip." "Microsoft cloud and AI enable the Israeli military to be more lethal and destructive in Gaza," she wrote in a company-wide email before leaving the firm on April 11.

Microsoft swiftly condemned the disruptions, calling the behavior "hostile, unprovoked, and highly inappropriate." Alongside Agrawal, another employee involved in the April anniversary protest was also terminated. Despite the backlash, Agrawal has remained defiant. Her repeated interventions, including at Build 2025, point to a growing movement within the tech community questioning corporate ties with militaries and demanding greater ethical accountability in AI and cloud computing.

Microsoft's cloud services deal with Israel's Ministry of Defence has drawn increasing scrutiny from activists and employees alike. Critics argue the company's AI and Azure platforms are not only supporting military logistics but also exacerbating human rights violations in the Gaza Strip.

These employee-led protests are part of a broader trend across Silicon Valley, where tech workers are using their platforms to challenge corporate policies tied to global conflicts. As Microsoft continues to invest heavily in AI and expand its cloud footprint, the ethical dimensions of these contracts may become a permanent fixture on its public stage.
Microsoft claims its internal review found no evidence that its Azure and AI technologies have been used to harm civilians in Gaza, even as it faces continued protests over its contracts with the Israeli military.
Microsoft has announced that it has found "no evidence" that its Azure cloud computing and AI technologies have been used to harm civilians in Gaza [1]. The company stated that it conducted an internal review and engaged an external firm to perform additional fact-finding [2]. This review process included interviewing dozens of employees and assessing military documents.

The tech giant confirmed that it provides Israel's Ministry of Defense (IMOD) with software, professional services, Azure cloud services, and Azure AI services such as language translation, as well as cybersecurity support [2]. Microsoft described its relationship with IMOD as "structured as a standard commercial relationship" [1]. The company also acknowledged providing "special access" to its technologies and "limited emergency support" to assist in hostage rescue efforts [3].

Microsoft pointed out that it "does not have visibility into how customers use our software on their own servers or other devices," and that it does not have "visibility into the IMOD's government cloud operations," which use other providers [2]. This admission highlights the limits of Microsoft's ability to fully assess how its technologies are used in military operations.

Investigations by outlets like The Associated Press have alleged that commercially available AI models produced by Microsoft and OpenAI were used to select bombing targets in Gaza and Lebanon. The report noted that the Israeli military's usage of Microsoft and OpenAI artificial intelligence in March 2024 was nearly 200 times higher than before the Oct. 7 attack [2].
Despite Microsoft's statement, the company continues to face protests and criticism from employees and activists. The "No Azure for Apartheid" group, consisting of current and former Microsoft employees, has called for the company to publicly release a full copy of the investigative report [3]. Protests have occurred at Microsoft events, including the recent Build developer conference, where demonstrators interrupted speeches to voice concerns about the company's ties with the Israeli military [5].

Microsoft is not the only tech company facing scrutiny over its contracts with military organizations. Google has also faced employee protests and has terminated staff members who participated in demonstrations against Project Nimbus, a $1.2 billion cloud contract between Google, Amazon, and Israel's government and military [2].

Human rights groups have raised concerns about the use of AI systems in military operations, noting that these technologies can be flawed and prone to errors, potentially leading to civilian casualties [3]. The ongoing conflict in Gaza has resulted in significant loss of life, with over 50,000 people reported killed since October 2023 [4].
As the debate over the role of technology in military operations continues, Microsoft's statement and the ongoing protests highlight the complex ethical considerations surrounding the use of AI and cloud technologies in conflict zones.