Curated by THEOUTPOST
On Sat, 17 May, 12:06 AM UTC
6 Sources
[1]
Microsoft says its Azure and AI tech hasn't harmed people in Gaza
Microsoft says it has found no evidence that the Israeli military has used its Azure and AI technology to harm Palestinian civilians or anyone else in Gaza. The software maker says it has "conducted an internal review and engaged an external firm" to perform additional fact-finding, after some Microsoft employees repeatedly called on the company to cut its contracts with the Israeli government. Microsoft says that its relationship with the Israel Ministry of Defense (IMOD) is "structured as a standard commercial relationship," and that it has "found no evidence that Microsoft's Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD has failed to comply with our terms of service or our AI Code of Conduct." Microsoft's AI Code of Conduct requires that customers use human oversight and access controls to ensure cloud and AI services don't inflict harm "in any way that is prohibited by law."
[2]
Microsoft: Our Tech Isn't Being Used to Hurt Civilians in Gaza
Microsoft claims it has found "no evidence" that its AI technologies or its cloud computing service Microsoft Azure have been used to target or harm civilians during the ongoing conflict in Gaza. In an official statement, the tech giant says it conducted an internal review into the issue and engaged an external firm (which it did not name) to undertake additional fact-finding. Microsoft says the review process included interviewing dozens of employees and assessing military documents. Microsoft did confirm that it provides Israel's Ministry of Defense (IMOD) with software, professional services, Azure cloud services, and Azure AI services such as language translation, as well as cybersecurity support, but denied these technologies were being used to target civilians. However, Microsoft pointed out that it "does not have visibility into how customers use our software on their own servers or other devices," and that it does not have "visibility into the IMOD's government cloud operations," which use other providers. "By definition, our reviews do not cover these situations," said a Microsoft spokesperson. The statement is unlikely to silence Microsoft's harshest critics on the issue. In April, the company axed two employees who disrupted its 50th-anniversary event to protest the use of its tech by Israel. Meanwhile, investigations by outlets like The Associated Press have alleged that commercially available AI models produced by Microsoft and OpenAI were used to select bombing targets in Gaza and Lebanon. The report noted that the Israeli military's usage of Microsoft and OpenAI artificial intelligence in March 2024 was nearly 200 times higher than before the Oct. 7 attack, citing internal company information shared with AP. Hossam Nasr, an organizer of No Azure for Apartheid, criticized the validity of Microsoft's statement in an interview with GeekWire earlier this week, saying it was "filled with both lies and contradictions."
Nasr, a former Microsoft employee, said the company claims "that their technology is not being used to harm people in Gaza," but highlighted its admission that "they don't have insight into how their technologies are being used." Microsoft isn't the only Big Tech firm contending with allegations from staff that it is supporting harm to civilians. In 2024, Google axed 28 employees who participated in an office sit-in protest against the search giant's role in Project Nimbus, a $1.2 billion cloud contract between Google, Amazon, and Israel's government and military.
[3]
Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza
WASHINGTON (AP) -- Microsoft acknowledged Thursday that it sold advanced artificial intelligence and cloud computing services to the Israeli military during the war in Gaza and aided in efforts to locate and rescue Israeli hostages. But the company also said it has found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza. The unsigned blog post on Microsoft's corporate website appears to be the company's first public acknowledgement of its deep involvement in the war, which started after Hamas killed about 1,200 people in Israel and has led to the deaths of tens of thousands in Gaza. It comes nearly three months after an investigation by The Associated Press revealed previously unreported details about the American tech giant's close partnership with the Israeli Ministry of Defense, with military use of commercial AI products skyrocketing by nearly 200 times after the deadly Oct. 7, 2023, Hamas attack. The AP reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel's in-house AI-enabled targeting systems and vice versa. The partnership reflects a growing drive by tech companies to sell their artificial intelligence products to militaries for a wide range of uses, including in Israel, Ukraine and the United States. However, human rights groups have raised concerns that AI systems, which can be flawed and prone to errors, are being used to help make decisions about who or what to target, resulting in the deaths of innocent people. Microsoft said Thursday that employee concerns and media reports had prompted the company to launch an internal review and hire an external firm to undertake "additional fact-finding." The statement did not identify the outside firm or provide a copy of its report. 
The statement also did not directly address several questions about precisely how the Israeli military is using its technologies, and the company declined Friday to comment further. Microsoft declined to answer written questions from The AP about how its AI models helped translate, sort and analyze intelligence used by the military to select targets for airstrikes. The company's statement said it had provided the Israeli military with software, professional services, Azure cloud storage and Azure AI services, including language translation, and had worked with the Israeli government to protect its national cyberspace against external threats. Microsoft said it had also provided "special access to our technologies beyond the terms of our commercial agreements" and "limited emergency support" to Israel as part of the effort to help rescue the more than 250 hostages taken by Hamas on Oct. 7. "We provided this help with significant oversight and on a limited basis, including approval of some requests and denial of others," Microsoft said. "We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza." The company did not answer whether it or the outside firm it hired communicated or consulted with the Israeli military as part of its internal probe. It also did not respond to requests for additional details about the special assistance it provided to the Israeli military to recover hostages or the specific steps to safeguard the rights and privacy of Palestinians. In its statement, the company also conceded that it "does not have visibility into how customers use our software on their own servers or other devices." The company added that it could not know how its products might be used through other commercial cloud providers. 
In addition to Microsoft, the Israeli military has extensive contracts for cloud or AI services with Google, Amazon, Palantir and several other major American tech firms. Microsoft said the Israeli military, like any other customer, was bound to follow the company's Acceptable Use Policy and AI Code of Conduct, which prohibit the use of products to inflict harm in any way prohibited by law. In its statement, the company said it had found "no evidence" the Israeli military had violated those terms. Emelia Probasco, a senior fellow for the Center for Security and Emerging Technology at Georgetown University, said the statement is noteworthy because few commercial technology companies have so clearly laid out standards for working globally with international governments. "We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict," she said. "It's like a tank manufacturer telling a country you can only use our tanks for these specific reasons. That is a new world." Israel has used its vast trove of intelligence to both target Islamic militants and conduct raids into Gaza seeking to rescue hostages, with civilians often caught in the crossfire. For example, a February 2024 operation that freed two Israeli hostages in Rafah resulted in the deaths of 60 Palestinians. A June 2024 raid in the Nuseirat refugee camp freed four Israeli hostages from Hamas captivity but resulted in the deaths of at least 274 Palestinians. Overall, Israel's invasions and extensive bombing campaigns in Gaza and Lebanon have resulted in the deaths of more than 50,000 people, many of them women and children. No Azure for Apartheid, a group of current and former Microsoft employees, called on Friday for the company to publicly release a full copy of the investigative report. 
"It's very clear that their intention with this statement is not to actually address their worker concerns, but rather to make a PR stunt to whitewash their image that has been tarnished by their relationship with the Israeli military," said Hossam Nasr, a former Microsoft worker fired in October after he helped organize an unauthorized vigil at the company's headquarters for Palestinians killed in Gaza. Cindy Cohn, executive director of the Electronic Frontier Foundation, applauded Microsoft Friday for taking a step toward transparency. But she said the statement raised many unanswered questions, including details about how Microsoft's services and AI models were being used by the Israeli military on its own government servers. "I'm glad there's a little bit of transparency here," said Cohn, who has long called on U.S. tech giants to be more open about their military contracts. "But it is hard to square that with what's actually happening on the ground." ___ Burke reported from San Francisco and Mednick from Jerusalem. The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
[4]
Microsoft Says There's 'No Evidence' Its Azure and AI Models Have Harmed People in Gaza
But even in a statement to clear its name, Microsoft acknowledged it can't know the full extent of Israel's use of its technologies. Once again, Big Tech is under scrutiny for its role in Israel's genocide against Palestinians. Recently, Microsoft's sale of artificial intelligence models and cloud computing services to the Israeli military prompted a series of worker-led protests. But now, Microsoft says there's no evidence that its products have been used to harm people in Gaza. At least, not as far as Microsoft can examine. On Thursday, Microsoft announced that it conducted internal and external reviews into the Israel Ministry of Defense's use of its products, writing, "We take these concerns seriously." The company went on to add that it "found no evidence to date that Microsoft's Azure and AI technologies have been used to target or harm people in the conflict in Gaza." Microsoft did not clarify which company it contracted with for the external review. Nor did it provide details about the process beyond stating that it included "interviewing dozens of employees and assessing documents." However, the company added that its reviews are limited: it doesn't have visibility into how its software is used on private servers or on systems outside of its cloud. Tensions at Microsoft have risen since a February report revealed the extent of its $133 million contract with Israel. According to AP News, Israel's use of Microsoft and OpenAI technology increased nearly 200 times after Palestinian resistance groups in Gaza launched an attack on Israel on Oct. 7, 2023. The military specifically uses Microsoft's cloud platform, Azure, to compile information obtained through mass surveillance, like phone calls or texts, which the system transcribes and translates. Overall, it stores over 13.6 petabytes worth of data on Microsoft servers, which, AP News explained, is about 350 times more than what's needed to store the entire Library of Congress.
Last year, Microsoft fired two employees for organizing an "unauthorized" vigil in memory of Palestinians killed in Gaza. In February, Microsoft also kicked five employees out of a town hall meeting for protesting its Israeli contracts. Then last month, Ibtihal Aboussad, a software engineer on Microsoft's AI Platform team, interrupted the company's head of AI during a 50th anniversary celebration. "Shame on you," Aboussad said. "You are a war profiteer. Stop using AI for genocide. Stop using AI for genocide in our region. You have blood on your hands. All of Microsoft has blood on its hands. How dare you all celebrate when Microsoft is killing children. Shame on you all." The Verge reported that Aboussad also sent an email to distribution lists containing hundreds of thousands of Microsoft employees. She wrote, "Microsoft cloud and AI enabled the Israeli military to be more lethal and destructive in Gaza than they otherwise could," and urged people to sign the No Azure for Apartheid petition, stating, "We will not write code that kills." The company's post comes only a week ahead of a Seattle conference where No Azure for Apartheid intends to protest. In its blog, Microsoft also said that the Israeli military is bound to its conditions of use, which "require customers to implement responsible AI practices" and "prohibits" using its technologies "in any manner that inflicts harm on individuals or organizations or affects individuals in any way that is prohibited by law." That assurance falls flat when considering Israel's track record. Last year, a group of independent human rights experts said that "Israel has openly defied international law time and again, inflicting maximum suffering on civilians in the occupied Palestinian territory and beyond." 
That includes murder, torture, sexual violence, forced displacement, bombing vital institutions like hospitals, targeting healthcare workers, journalists, and humanitarian workers, and purposefully destroying food systems as a method of war, per a breakdown by the United Nations Office of the High Commissioner for Human Rights. Plus, Israel is committing genocide, a crime under international law. The definition put forth in the Genocide Convention includes specific actions "committed with intent to destroy, in whole or in part, a national, ethnical, racial, or religious group." Last month, the Gaza Health Ministry reported that deaths in the region now exceed 50,000. In addition, a Reuters analysis found that Israel completely eliminated at least 1,200 families. Although some argue that Israel's actions don't meet the condition of "intent to destroy," its response to the Oct. 7 attacks killed over 5,000 people in the first week, extreme violence that caused a major shift for many experts. South Africa formally brought genocide charges against Israel last year. Big Tech has provided Israel with support for years, as with Google and Amazon's Project Nimbus. Microsoft can try to downplay its role by saying that its technology wasn't directly used for harm. Even if that is true, its technologies make it possible for the Israeli military to expand its destruction of Palestine and its people.
[5]
Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza
[6]
Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza
Microsoft acknowledges providing AI and cloud services to the Israeli military but claims no evidence of harm to Gaza civilians. The company faces scrutiny from employees and human rights groups over its role in the ongoing conflict.
Microsoft has publicly addressed its role in providing artificial intelligence (AI) and cloud computing services to the Israeli military during the ongoing conflict in Gaza. The tech giant claims it has found no evidence that its Azure platform and AI technologies have been used to harm civilians in Gaza [1][2][3].
In response to employee concerns and media reports, Microsoft conducted an internal review and engaged an external firm for additional fact-finding [1][3]. The company stated that it interviewed dozens of employees and assessed military documents as part of this process [2]. However, Microsoft has not disclosed the name of the external firm or provided a copy of the investigative report [3].
Microsoft confirmed that it has provided the Israeli Ministry of Defense (IMOD) with:
- Software and professional services
- Azure cloud services
- Azure AI services, such as language translation
- Cybersecurity support [2][3]
The company also revealed that it offered "special access" to its technologies and "limited emergency support" to aid in the rescue of hostages taken by Hamas on October 7, 2023 [3][5].
Despite its assurances, Microsoft acknowledged significant limitations in its ability to monitor the use of its products:
- It "does not have visibility into how customers use our software on their own servers or other devices" [3]
- It does not have visibility into IMOD's government cloud operations, which run on other providers [2]
These limitations have raised concerns about the extent to which Microsoft can ensure its technologies are not being misused [4].
An Associated Press investigation revealed that the Israeli military's use of Microsoft and OpenAI artificial intelligence increased nearly 200 times following the October 7 attack [2][3]. The Israeli military reportedly uses Azure to transcribe, translate, and process intelligence gathered through mass surveillance [3][5].
The controversy has led to internal dissent at Microsoft:
- Last year, the company fired two employees for organizing an "unauthorized" vigil in memory of Palestinians killed in Gaza [4]
- In February, five employees were removed from a town hall meeting for protesting the company's Israeli contracts [4]
- Two employees who disrupted Microsoft's 50th-anniversary event to protest the use of its technology by the Israeli military were subsequently fired [2][4]
Human rights groups have expressed concerns about the use of AI systems in military operations, citing potential flaws and errors that could lead to civilian casualties [3][5]. The conflict has resulted in the deaths of over 50,000 people in Gaza and Lebanon, many of them women and children [3][5].
Microsoft maintains that the Israeli military, like all customers, is bound by its Acceptable Use Policy and AI Code of Conduct, which prohibit the use of products to inflict harm in ways prohibited by law [3][5]. This stance raises questions about the role of tech companies in regulating the use of their products by governments engaged in conflicts [3][5].
As the controversy continues, a group of current and former Microsoft employees, known as No Azure for Apartheid, has called for the company to publicly release the full investigative report [3][5]. The ongoing debate highlights the complex ethical considerations surrounding the use of AI and cloud technologies in military operations and international conflicts.
© 2025 TheOutpost.AI All rights reserved