3 Sources
[1]
Microsoft Investors Prod Company Over Work With Israeli Military
A group of Microsoft Corp. investors is pressuring the company to assess how effectively it identifies customers who misuse its artificial intelligence tools, a push that follows reports detailing the Israeli military's use of Microsoft software during its war in Gaza. The shareholder resolution, which could be voted on at the company's annual meeting in December, asks the board to commission a public report disclosing more on how Microsoft prevents its products from being used in ways that violate human rights or international humanitarian law.
[2]
Microsoft shareholders demand report into the company's 'human rights due diligence' over allegations of war crime complicity in Gaza
A group of more than 60 Microsoft shareholders has filed a proposal that will be voted on at the company's next Annual General Meeting.

Microsoft is facing new pushback over the Israeli military's use of its AI and cloud products, as a group of more than 60 shareholders has filed a proposal calling on the company to publish a report assessing the effectiveness of its "human rights due diligence [HRDD] processes in preventing, identifying, and addressing customer misuse of Microsoft artificial intelligence and cloud products or services that violates human rights or international humanitarian law."

"Microsoft states it conducts ongoing HRDD across its value chain, in line with its obligations under the UNGPs, but it neither explains its HRDD processes related to customer end use, nor reports on their effectiveness," the resolution states. "Recent allegations of severe customer misuse suggest Microsoft's HRDD may be ineffective.

"In the face of serious allegations of complicity in genocide and other international crimes, Microsoft's HRDD processes appear ineffective. Microsoft recently published a statement responding to these allegations, explaining it conducted an internal review and commissioned a third-party firm to 'undertake additional fact-finding,' and concluding it 'found no evidence to date that Microsoft's Azure and AI technologies have been used to target or harm people in the conflict in Gaza.'

"The statement provides no additional information on the nature of the assessments, the definition of 'harm,' nor the identity of the external firm. Notably, the statement admits a significant gap in Microsoft's HRDD: 'Microsoft does not have visibility into how customers use our software on their own servers or other devices.'"
That statement, issued in May, followed a February 2025 AP report saying Israel's use of Microsoft and OpenAI technology "skyrocketed" following the Hamas attack on Israel in October 2023, in which the group killed nearly 1,200 people and took 251 hostages. While IDF analysts "use AI-enabled systems to help identify targets," according to the report, they also "independently examine them together with high-ranking officers to meet international law."

But as Israel's brutal assault against Gaza has raged on, leaving tens of thousands of innocent Palestinians dead, many have come to believe that concerns for international law have fallen by the wayside. A United Nations commission, for instance, found in October 2024 that "Israel has perpetrated a concerted policy to destroy Gaza's healthcare system as part of a broader assault on Gaza, committing war crimes and the crime against humanity of extermination with relentless and deliberate attacks on medical personnel and facilities."

The ongoing bloodshed, and growing awareness of Microsoft's entanglements with the Israeli military, has drawn criticism and sparked protest, including from the company's own employees. In April, former Microsoft employee Ibtihal Aboussad -- she was fired shortly after for her actions -- interrupted the company's 50th anniversary celebration to demand Microsoft "stop using AI for genocide"; larger protests occurred both inside and outside Microsoft's Build conference in May. Art rock legend Brian Eno, creator of the Windows 95 startup jingle, also spoke out against the company in May, with a pointed statement calling on Microsoft to sever its ties with Israel: "If you knowingly build systems that can enable war crimes, you inevitably become complicit in those crimes."

Today's resolution represents an escalation of those protests. The shareholders involved represent more than $80 million worth of Microsoft shares, which is an awful lot of money but also a tiny slice of Microsoft's total valuation.
But Rewan Haddad, campaign director at consumer watchdog organization Eko, said the number and diversity of co-filers attached to the resolution -- the largest number of co-filers on a single Microsoft shareholder resolution ever, according to the org -- "shows the scale of shareholder frustration with Microsoft."

That's also reflected in the resolution itself, which notes that "Inadequate HRDD exposes Microsoft to material legal, operational, and reputational risks," all of which can negatively impact shareholder value. That's pure business-speak, and certainly relevant for shareholders whose primary concern is money (i.e., most of them), but the Religious of the Sacred Heart of Mary, the lead filer of the resolution, said in a statement that "the moral issue is paramount."

This isn't the first time the Religious of the Sacred Heart of Mary has led shareholder action over Microsoft's treatment of human rights: In 2021 it was the lead filer on a shareholder proposal calling on the company to evaluate "human rights commitments" with regard to "the development of products, contracts, and business relationships with government agencies, including law enforcement, that create a high risk of adverse human rights impacts." Microsoft agreed to do so in October 2021.

The shareholder resolution calling for a report on Microsoft's human rights due diligence processes will be voted on at the company's Annual General Meeting, which will be held later this year. I've reached out to Microsoft for comment and will update if I receive a reply.
[3]
Microsoft investors prod company over work with Israeli military
A group of Microsoft investors is pressuring the company to assess how effectively it identifies customers who misuse its artificial intelligence tools, a push that follows reports detailing the Israeli military's use of Microsoft software during its war in Gaza. The shareholder resolution, which could be voted on at the company's annual meeting in December, asks the board to commission a public report disclosing more on how Microsoft prevents its products from being used in ways that violate human rights or international humanitarian law.

Microsoft employees have been protesting software sales to the Israel Defense Forces and other government entities, contending the company's AI services are enabling the killing of civilians in Israel's war in Gaza. Hamas' Oct. 7, 2023, attacks on Israel killed about 1,200 people. More than 57,000 people have been killed since as Israel retaliated and occupied large swaths of the Palestinian enclave, according to the local health authorities.

In April, Microsoft fired two software engineers who protested sales to the Israeli military during an event celebrating the company's 50th anniversary. Both employees were affiliated with No Azure for Apartheid, a Microsoft group dedicated to forcing their employer to ax the Israeli government as a customer.

The Associated Press, the Guardian and the Israeli news organization +972 Magazine have reported that Israeli military and intelligence customers increased their use of Microsoft products such as the Azure cloud platform and AI models from OpenAI following the outbreak of the war. The military uses AI in part to search for patterns in data gathered through mass surveillance, work that helps identify candidates for future strikes, the AP reported.

"In the face of serious allegations of complicity in genocide and other international crimes, Microsoft's HRDD processes appear ineffective," the resolution says, referring to human rights due diligence.
Delivered to the company earlier this month, the proposal has about 60 co-signers, who collectively hold about $80 million in Microsoft shares, according to Ekō, an advocacy group that helped convene investors, and Investor Advocates for Social Justice. The resolution was brought by the Religious of the Sacred Heart of Mary, an organization of Catholic women.

Such shareholder votes are nonbinding, and they rarely gain majority support among investors, who tend to defer to company management. But the resolution is an indication of the ongoing pressure Microsoft faces over its sales to Israel and the company's response to employees who have protested the matter.

The largest U.S. technology companies have deepened their ties to militaries as their cloud-computing platforms became increasingly important infrastructure for governments and businesses. "These companies are not just technology companies anymore," said Rewan Al-Haddad, a senior campaign director with Ekō, which has pressured other big tech companies on social and governance issues. "They are weapons companies now." Ekō was previously called SumOfUs.

Microsoft declined to comment on the shareholder proposal, pointing to a May statement that outlined an investigation it commissioned into the use of its technology by the Israel Ministry of Defense. "We have found no evidence that Microsoft's Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD has failed to comply with our terms of service or our AI Code of Conduct," the company said in the unsigned blog post. "The work we do everywhere in the world is informed and governed by our Human Rights Commitments. Based on everything we currently know, we believe Microsoft has abided by these Commitments in Israel and Gaza."

The shareholder resolution says that disclosure provided insufficient detail and notes that Microsoft conceded it doesn't know what customers do with its software on their own servers or other devices.
A group of Microsoft investors has filed a shareholder resolution demanding greater transparency on the company's human rights due diligence processes, particularly concerning the use of its artificial intelligence (AI) and cloud products by the Israeli military in the Gaza conflict. The resolution, which could be voted on at Microsoft's annual meeting in December, calls for a public report detailing how the company prevents its products from being used in ways that violate human rights or international humanitarian law [1][2].
Source: The Seattle Times
The shareholder action follows reports that the Israeli military's use of Microsoft's Azure cloud platform and AI models "skyrocketed" after the Hamas attack on Israel in October 2023 [2]. While the Israel Defense Forces (IDF) claim to use AI-enabled systems to identify targets in compliance with international law, concerns have been raised about the potential misuse of these technologies in the ongoing conflict, which has resulted in tens of thousands of Palestinian casualties [2][3].
In response to these allegations, Microsoft conducted an internal review and commissioned a third-party investigation. The company stated that it found "no evidence to date that Microsoft's Azure and AI technologies have been used to target or harm people in the conflict in Gaza" [2]. However, critics argue that this statement lacks transparency and fails to address significant gaps in Microsoft's oversight of how customers use its software [2].
Source: Bloomberg Business
The controversy has sparked protests both inside and outside Microsoft, including from its own employees. In April, the company fired two software engineers who protested sales to the Israeli military during a company event [3]. These actions have intensified the debate over corporate responsibility in the tech sector, with some arguing that major tech companies are increasingly becoming "weapons companies" due to their military contracts [3].
The shareholder resolution, backed by over 60 investors collectively holding about $80 million in Microsoft shares, emphasizes the potential legal, operational, and reputational risks associated with inadequate human rights due diligence [1][3]. The Religious of the Sacred Heart of Mary, the lead filer of the resolution, stressed that while financial concerns are valid, "the moral issue is paramount" [2].
This shareholder action reflects growing concerns about the role of big tech companies in conflicts and human rights issues. It highlights the challenges these companies face in balancing their business interests with ethical considerations, especially as their technologies become increasingly integral to military and government operations worldwide [3].