Israel’s Targeting AI: How Capable is It?
The ‘Habsora’ AI system used by the Israeli military is said to use intelligence data to generate targets for attack, including reports on the likely number of civilian casualties. But the odds of even the Israel Defense Forces using an AI with such a degree of sophistication and autonomy are low.
In 2021, Israel claimed to have used AI in its brief conflict against militant groups in the Gaza Strip, sparking headlines about the first ‘AI War’. Recent reports claim much more intensive usage of AI-powered systems in Israel’s current war on Gaza, although details about these systems remain scarce due to the highly classified nature of Israeli intelligence and the layers of mis- and disinformation surrounding the ongoing war. Despite the dearth of information, there is reason to inject a note of caution into discussions of both the alleged capabilities and the role of Israel’s military AI.
Habsora: AI Targeting?
In 2019, the Israeli government announced the creation of a ‘targeting directorate’ to produce targets for the Israel Defense Forces (IDF), especially the Israeli Air Force (IAF). In previous conflicts, the IAF would run out of targets after just a few weeks of fighting, having struck all the targets it knew of. The targeting directorate was created to mitigate this shortage by preemptively creating a ‘bank’ of militant targets prior to any conflict, thereby ensuring enough targets when hostilities began. The directorate, consisting of hundreds of soldiers and analysts, creates targets by aggregating data from a variety of sources – drone footage, intercepted communications, surveillance data, open source information, and data from monitoring the movements and behaviour of both individuals and large groups.
Both media and IDF sources claim that AI is used by the targeting directorate to process the aggregated data and then generate targets at a much higher speed than human analysts can. The AI system used (dubbed ‘Habsora’ (בשורה), or ‘Gospel’) is said to use the intelligence collected to output actionable targets of militants’ locations for brigade- or division-level targeting. The reporting claims that the target outputs include reports on the likely number of civilian casualties. Media sources claim that the human analyst’s role is confined to confirming the target before it is given to the commander for rejection or approval. The IDF states that the goal is a ‘complete match’ between Habsora’s and a human analyst’s targeting recommendation prior to a strike.
The IDF is one of the most technologically advanced and integrated militaries in the world, yet the odds of even the IDF using an AI with such a degree of sophistication and autonomy are low.
Although ‘creating targets’ might sound like a simple concept, targeting is an immensely complex task, meaning that Habsora would be miles beyond any other tactical/operational system deployed by militaries around the world. The AI would need to be capable of accepting a variety of data formats from numerous sources, weighing the relevance and reliability of each data point, combining this data with existing records, and creating an actionable target profile, while allegedly also estimating civilian collateral damage.
While such a capability may be technically feasible, there is an even lower likelihood of such a system being allowed such a broad remit in a combat environment, because trust in AI, especially military AI, remains lacking. A system like Habsora, with its reported lack of output detailing reasons for target selection, is unlikely to reduce this apprehensiveness.
Even in the less autonomous, but still vague, role for Habsora described by the IDF, no description is given of how the system creates targets, or what happens when the system’s recommendation and the human analyst’s do not align. Commanders would likely desire significant cross-checking by several human sources before confirming a strike, which would occupy many of the directorate’s human analysts, thus negating at least some of the efficiency gained by deploying AI.
A much more likely scenario is that the targeting directorate uses machine-automated systems to collate intelligence and identify patterns in the massive quantities of data collected, with humans creating the actual target. Even in a secondary role such as this, AI-enhanced processing would undoubtedly increase the speed at which the directorate produces targets, relative to analysts doing this by hand. Using a system to amass and automatically process collected intelligence is not new; such computing has been used in targeting for decades, albeit with varying technical capability. Upon receiving intelligence (which could be anything from geospatial to signals intelligence), a machine-automated system could indicate areas of interest where further attention or action might be merited, but not output actionable targets.
However, the absence of explicit AI targeting does not mean that Israel’s aerial war on Gaza is imprecise, or that it is unable to avoid civilian casualties in its strikes. Despite its extensive use of ‘dumb’ munitions, Israel has access to massive quantities of precision munitions. These precision munitions are available with a variety of warheads and capabilities, allowing the IDF to limit or expand the level of destruction at will, and indicating that the majority of civilian collateral damage from air strikes is intentional and accounted for.
The IDF has maintained an extraordinarily dense surveillance network over Gaza for many years, and retains absolute supremacy in electronic, communications, geospatial, and measurement and signature intelligence. Every decision to strike is made with near-comprehensive knowledge of conditions in the target location and anticipated effects of the strike, including anticipated casualties. Israel also has a rigorous legal core within its military, whose lawyers must sign off on every target – human- or AI-generated – even if there are hundreds in a single day. When Israeli air strikes kill or injure tens of thousands of civilians, it seems beyond any reasonable doubt that every single target is generated, approved, ordered and struck with the full knowledge and consent of human IDF operators.
The views expressed in this Commentary are the author’s, and do not represent those of RUSI or any other institution.