Curated by THEOUTPOST
On Mon, 30 Dec, 4:05 PM UTC
2 Sources
[1]
AI Meets War: Inside Israel's AI Factory and Its Deployment in Gaza
The system, known internally as Habsora -- Hebrew for "the Gospel" -- became one of the most significant tools in this AI arsenal. Designed to analyze vast datasets, it could pinpoint potential targets in real time, even in the absence of human analysts. However, as the system evolved, questions arose about whether human oversight was sufficient to ensure ethical decision-making.

Following the devastating Hamas attack, Israel launched a military campaign targeting Gaza with relentless airstrikes. Initially, the IDF relied on its meticulously curated database, which included details of Hamas's operational infrastructure, from tunnels to weapons storage sites and command centers. This database, built over years of intelligence gathering, allowed for targeted strikes in the early days of the war. However, as the conflict continued, the IDF found its "target bank" depleting rapidly. The intensity of the military campaign required additional targets at an accelerated pace.

At this juncture, the Habsora system came into full effect. Leveraging advanced machine learning and data analytics, it generated hundreds of new targets within hours. This capability allowed the IDF to sustain the campaign's momentum, even as traditional methods of intelligence gathering struggled to keep pace. Habsora's deployment underscores the growing reliance on AI to augment and, in some cases, replace human decision-making in warfare. The system could rapidly cross-reference data from a range of sources, including surveillance drones, signal intercepts, and ground reports, to identify potential threats. Yet the absence of comprehensive human review has raised alarms about the accuracy and ethical implications of these decisions.
[2]
IDF uses AI in war against Hamas in Gaza - report
The emphasis on technology eroded Unit 8200's "culture of warning," where even low-level analysts could easily brief top commanders about concerns. The IDF used AI to rapidly refill its "target bank," a list of Hamas and Hezbollah terrorists to be killed during military operations, along with details about their whereabouts and routines, according to a report published in The Washington Post on Sunday. Some experts consider the target bank to be the most advanced military AI initiative ever deployed.

One such AI tool is called Habsora -- or "the Gospel" -- which could quickly generate hundreds of additional targets. The Washington Post report discusses previously unreported details of the inner workings of the machine-learning program, along with the secretive, decade-long history of its development. The report also revealed a debate within the IDF's senior echelons about the quality of intelligence gathered by AI, whether the technology's recommendations received sufficient scrutiny, and whether the focus on AI weakened the IDF's intelligence capabilities.

Some critics argue the AI program has been a behind-the-scenes force accelerating the death toll in Gaza. "What's happening in Gaza is a forerunner of a broader shift in how war is being fought," said Steven Feldstein, a senior fellow at the Carnegie Endowment who researches the use of AI in war. "Combine that with the acceleration these systems offer -- as well as the questions of accuracy -- and the end result is a higher death count than was previously imagined in war."

The IDF said claims that its use of AI endangers lives are "off the mark." "The more ability you have to compile pieces of information effectively, the more accurate the process is," the IDF said in a statement to The Washington Post. "If anything, these tools have minimized collateral damage and raised the accuracy of the human-led process."

No autonomous AI

The Gospel and other AI tools do not make decisions autonomously, according to an Israeli intelligence official who spoke with The Washington Post. Reviewing reams of data from intercepted communications, satellite footage, and social networks, the algorithms spit out the coordinates of tunnels, rockets, and other military targets. Recommendations that survive vetting by an intelligence analyst are placed in the "target bank" by a senior officer.

Using the software's image recognition, soldiers could unearth subtle patterns, including minuscule changes in years of satellite footage of Gaza suggesting that Hamas had buried a rocket launcher or dug a new tunnel on agricultural land, compressing a week's worth of work into 30 minutes, a former military leader who worked on the systems told The Washington Post.

In the Israel-Hamas war, estimates of how many civilians might be harmed in a bombing raid are derived through data-mining software, using image recognition tools to analyze drone footage alongside smartphones pinging cell towers to tally the number of civilians in an area, two of the people who spoke to The Washington Post said. The IDF says its assessments of collateral damage adhere to the Law of Armed Conflict, which mandates that nations differentiate between civilians and combatants and take precautions to protect lives.
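The report does not describe how the image-recognition software actually works, but spotting "minuscule changes" across years of imagery is, at bottom, a change-detection problem. The minimal sketch below illustrates the simplest version of that technique, pixel differencing over two aligned grayscale frames. The function name, threshold values, and the assumption of co-registered inputs are all illustrative; an operational system would rely on trained models rather than raw differencing.

```python
import numpy as np

def flag_surface_changes(before: np.ndarray, after: np.ndarray,
                         threshold: float = 0.25,
                         min_changed_pixels: int = 50) -> np.ndarray:
    """Return (row, col) coordinates of pixels that changed markedly
    between two co-registered grayscale frames normalized to [0, 1].

    A crude stand-in for the kind of change detection the report
    describes (e.g., freshly disturbed earth over a buried launcher);
    all parameters here are invented for illustration.
    """
    diff = np.abs(after.astype(float) - before.astype(float))
    mask = diff > threshold              # per-pixel change mask
    if mask.sum() < min_changed_pixels:  # ignore noise-level changes
        return np.empty((0, 2), dtype=int)
    return np.argwhere(mask)
```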
Some proponents of Israel's use of the technology argue that aggressively deploying innovations such as AI is essential for the survival of a small country facing determined and powerful enemies. "Technological superiority is what keeps Israel safe," said Blaise Misztal, vice president for policy at the Jewish Institute for National Security of America, who was briefed by the IDF's intelligence division on its AI capabilities in 2021. "The faster Israel is able to identify enemy capabilities and take them off the battlefield, the shorter a war is going to be, and it will have fewer casualties."

The "human bottleneck"

The technologies, while widely recognized as promising, had limitations. Sometimes the sheer volume of intercepts overwhelmed Unit 8200's analysts. For example, Hamas operatives often used the word "batikh," or watermelon, as code for a bomb, one of the people familiar with the efforts told The Washington Post. An internal audit found that the system wasn't smart enough to tell the difference between a conversation about an actual watermelon and a coded conversation among terrorists; issues with other key slang words and phrases were also found. "If you pick up a thousand conversations a day, do I really want to hear about every watermelon in Gaza?" the person told The Washington Post.

The military invested in new cloud technologies that could process algorithms quickly in preparation for an anticipated conflict with Hezbollah on Israel's northern border.

Lavender, an algorithmic program developed in 2020, pored over data to produce lists of potential Hamas and Islamic Jihad terrorists, giving each person a score that estimated the likelihood they were a member, three people familiar with the systems told The Washington Post. Factors that could raise a person's score included being in a WhatsApp group with a known militant, frequently switching addresses and phone numbers, or being named in Hamas files, the people said. Estimates from the various algorithms fed into the umbrella system, Gospel, which could be queried by intelligence analysts.
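The sources name the signals that could raise a Lavender score but say nothing about how they are combined. A common baseline for this kind of entity scoring is a weighted sum of binary features, sketched below. The weights and field names are invented for illustration and imply nothing about the actual model.

```python
from dataclasses import dataclass

# Illustrative weights only -- the reporting names the signals
# (WhatsApp ties, address/phone churn, appearing in seized files)
# but does not say how Lavender actually combines them.
WEIGHTS = {
    "whatsapp_group_with_known_militant": 0.4,
    "frequent_address_or_phone_changes": 0.25,
    "named_in_seized_files": 0.35,
}

@dataclass
class PersonFeatures:
    whatsapp_group_with_known_militant: bool
    frequent_address_or_phone_changes: bool
    named_in_seized_files: bool

def membership_score(p: PersonFeatures) -> float:
    """Return a 0..1 score as a weighted sum of binary risk signals."""
    return sum(w for name, w in WEIGHTS.items() if getattr(p, name))
```

A linear heuristic like this is transparent but brittle, which is precisely why, per the report, its recommendations were supposed to be vetted by analysts before entering the target bank.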
Israel's military has deployed advanced AI systems, including 'Habsora', in the Gaza conflict, raising questions about the ethical implications and effectiveness of AI in modern warfare.
In a significant development in modern warfare, Israel has deployed an advanced artificial intelligence system called "Habsora" (Hebrew for "the Gospel") in its military operations in Gaza. This AI-driven tool has become a cornerstone of Israel's military strategy, particularly following the recent Hamas attack [1].
Habsora is designed to analyze vast datasets and identify potential targets in real time, even without human analysts. The system leverages machine learning and data analytics to generate hundreds of new targets within hours, allowing the Israel Defense Forces (IDF) to maintain the momentum of its military campaign [1].
The AI tool can rapidly cross-reference data from various sources, including surveillance drones, signal intercepts, and ground reports. It uses image recognition to detect subtle patterns in satellite footage, potentially uncovering hidden rocket launchers or newly dug tunnels [2].
Following the Hamas attack, Israel launched a military campaign targeting Gaza with airstrikes. Initially, the IDF relied on its existing database of Hamas's operational infrastructure. However, as this "target bank" was depleted, Habsora came into full effect, generating new targets at an accelerated pace [1].
The system's deployment has significantly enhanced the IDF's ability to sustain its campaign, even as traditional intelligence-gathering methods struggled to keep up. Some experts consider this target bank to be the most advanced military AI initiative ever deployed [2].
The use of AI in warfare has raised significant ethical concerns. Critics argue that the AI program has been a behind-the-scenes force accelerating the death toll in Gaza. The absence of comprehensive human review has alarmed observers about the accuracy and ethical implications of AI-driven decisions in warfare [1][2].
However, the IDF maintains that these tools have minimized collateral damage and increased the accuracy of human-led processes. It emphasizes that AI tools like Habsora do not make decisions autonomously and that recommendations are vetted by human intelligence analysts [2].
Proponents of Israel's use of AI in warfare argue that such technological superiority is essential for the survival of a small country facing determined enemies. They contend that faster identification and neutralization of enemy capabilities can lead to shorter wars with fewer casualties [2].
Despite its advanced capabilities, the AI system has limitations. For instance, it sometimes struggles to differentiate between coded language used by militants and everyday conversations. An internal audit revealed issues with key slang words and phrases, highlighting the ongoing need for human oversight and refinement of the technology [2].
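The watermelon failure described in the audit is characteristic of keyword-driven intercept triage: a bare keyword match flags every literal use of a code word. The toy sketch below contrasts that behavior with a crude context filter; the word lists are invented, and a real system would need a trained classifier rather than a stop list.

```python
# Illustrative only: the report says the system confused literal and
# coded uses of "batikh" (watermelon); it does not describe the actual
# pipeline. This toy shows why pure keyword matching over-flags.
CODE_WORDS = {"batikh"}                                     # slang named in the report
FOOD_CONTEXT = {"market", "kilo", "eat", "sweet", "fruit"}  # assumed cue words

def naive_flag(transcript: str) -> bool:
    """Flag any transcript containing a code word -- the over-flagging
    behavior the internal audit criticized."""
    words = set(transcript.lower().split())
    return bool(words & CODE_WORDS)

def context_aware_flag(transcript: str) -> bool:
    """Suppress the flag when ordinary food-related context words appear."""
    words = set(transcript.lower().split())
    return bool(words & CODE_WORDS) and not (words & FOOD_CONTEXT)
```

Here `naive_flag("two kilo batikh from the market")` returns True while the context-aware version returns False, which is roughly the distinction the audit found the deployed system could not draw.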
As AI continues to play an increasingly significant role in modern warfare, the deployment of systems like Habsora raises important questions about the future of conflict, the role of human decision-making in war, and the ethical implications of AI-driven military operations.
Reference
[1] AI Meets War: Inside Israel's AI Factory and Its Deployment in Gaza
[2] IDF uses AI in war against Hamas in Gaza - report, The Jerusalem Post
U.S. tech companies, particularly Microsoft and OpenAI, have provided AI and cloud computing services to Israel's military, significantly enhancing its targeting capabilities in Gaza and Lebanon. This raises questions about the ethical implications of commercial AI use in warfare.
9 Sources
Google employees have been working to provide Israel's military with access to advanced AI technology since the early weeks of the Israel-Gaza war, despite public efforts to distance the company from military operations.
4 Sources
Israeli AI and drone startups are experiencing rapid growth and international recognition due to their crucial role in the ongoing conflict, showcasing the potential of AI-driven military technology.
2 Sources
Ukraine has collected millions of hours of drone footage from the ongoing conflict with Russia, which is being used to train AI models for battlefield decision-making and target identification.
5 Sources
Google has quietly removed its commitment not to use AI for weapons or surveillance, signaling a shift towards potential military applications amidst growing competition and national security concerns.
40 Sources