
A technologist with the Israeli military's Matzpen operational data and applications unit works at his station, at an Israel Defense Force base in Ramat Gan, Israel, June 11, 2023. © 2023 Nir Elias/Reuters

Following the attack by Hamas-led Palestinian armed groups in southern Israel on October 7, 2023, there has been extensive reporting on the Israeli military’s use of digital tools in its operations in the Gaza Strip. The Israeli military reportedly uses surveillance technologies, artificial intelligence (AI), and other digital tools to help determine who or what to attack in Gaza and when.

Human Rights Watch found that four digital tools that the Israeli military is using in Gaza rely on faulty data and inexact approximations to inform military actions. The Israeli military’s use of these digital tools increases the risk that Israeli forces will violate international humanitarian law, in particular the laws of war requiring attackers to distinguish between military targets and civilians and to take all feasible precautions before an attack to minimize civilian harm.

Human Rights Watch research indicates that the tools apparently rely on ongoing and systematic Israeli surveillance of all Palestinian residents of Gaza, including data collected prior to the current hostilities in a manner that is incompatible with international human rights law. The tools use Palestinians’ personal data to inform military actions such as threat prediction and the identification of targets. Some tools rely on machine learning, that is, computerized systems that can draw inferences from data and recognize patterns without explicit instructions.

It has not been possible to document when and where these digital tools are being used, or the extent to which they have been used in conjunction with other methods of information and intelligence collection. Nonetheless, the Israeli military’s use of these tools, instead of helping to make targeting more accurate and to minimize civilian loss of life and property, may be exacerbating the risk to civilians and raises grave ethical, legal, and humanitarian concerns.

This question-and-answer document provides technical and legal analyses of four digital tools; lays out new risks, challenges, and questions raised by these tools; and assesses each against applicable international law.

This document is based on public statements from Israeli officials, media reports, interviews with experts and journalists, and previously unreported material published by the Israeli military, including the personal data of Gaza residents, published online apparently in error.

 

  1. What are some of the digital tools that the Israeli military is using in Gaza?
  2. What is the Israeli military’s evacuation monitoring tool and how does it work? 
  3. Is the cell tower triangulation data accurate enough to inform military decisions?
  4. What else is known about the data the evacuation monitoring system uses?
  5. What is “Lavender” and how does it work?
  6. On which grounds does Lavender assign suspicion?
  7. Why are Lavender’s ratings problematic and risky?
  8. What is “The Gospel” and how does it work?
  9. What is “Where’s Daddy?” and how does it work?
  10. Is mobile phone location data alone sufficiently accurate for targeted strikes?
  11. What are the limitations of systems that rely on big data, machine learning, and AI?
  12. What new risks and challenges do data intensive digital tools present in military contexts?
  13. Are the digital tools used by the Israeli military autonomous weapon systems?
  14. How does international humanitarian law apply to the military use of digital tools?

 

  1. What are some of the digital tools that the Israeli military is using in Gaza?

Human Rights Watch assessed four tools that the Israeli military has used in its ongoing offensive in Gaza related to military planning and targeting. One is based on mobile phone tracking to monitor the evacuation of Palestinians from parts of northern Gaza. Another, which the military calls “The Gospel,” generates lists of buildings or other structural targets to be attacked. A third, which the military calls “Lavender,” assigns ratings to people in Gaza related to their suspected affiliation with Palestinian armed groups for purposes of labeling them as military targets. The fourth, “Where’s Daddy?,” purports to determine when a target is in a particular location so that they can be attacked there.

  2. What is the Israeli military’s evacuation monitoring tool and how does it work?

The Israeli military began bombing Gaza on October 7, 2023, shortly after Palestinian armed groups launched an assault on southern Israel. On October 13, it ordered all residents of northern Gaza, where more than one million people lived, to evacuate to areas south of Wadi Gaza within 24 hours, despite there being no safe place to go, no safe route to travel, and no adequate shelter.

The Israeli military’s evacuation monitoring tool relies on cell phone location data to monitor people’s movement in Gaza. According to New York Times reporting published on October 16, one week before the Israeli military began major ground operations in northern Gaza, the military was using this system to monitor the evacuation of Palestinians from their homes north of Wadi Gaza. The Washington Post and the Guardian both later published additional details about this system.

Journalists described being shown an office in the southern command headquarters in the Israeli city of Beer Sheva, with large screens displaying a map of Gaza divided into 620 sections, each colored according to the degree to which residents had evacuated. The map also included labels for hospitals, mosques, shelters, and other structures. Reports put the number of mobile phones tracked at the time at over one million; before October 2023, the telecommunications ministry in Gaza reported a total of 1,041,198 active mobile phone subscriptions.

According to these reports, the system uses cell tower triangulation and other surveillance data to provide a live view of Gaza residents’ movements. Israeli military officers told journalists that this information was used to inform what actions the military could take in certain places, and what types of weapons they would use.

  3. Is the cell tower triangulation data accurate enough to inform military decisions?

Cell tower triangulation data, in which a mobile phone’s location is approximated based on the cell towers it can connect to, is unlikely to give accurate information as to the precise whereabouts of mobile devices and thus the people holding them. The inaccuracy would be exacerbated by the lack of electricity to charge phones after Israel cut all power lines to Gaza and Gaza’s sole power station ceased functioning because of the blockade on fuel imports, and by massive damage to Gaza’s telephone infrastructure.
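
As a general illustration of why tower-based positioning is coarse, and not a description of the Israeli military’s system, whose details are not public, the following Python sketch estimates a phone’s position as a signal-weighted centroid of nearby towers; all coordinates and signal values are invented:

```python
import math

# Hypothetical cell towers near a phone: (latitude, longitude, signal strength in dBm).
# All coordinates and signal values are invented for illustration.
towers = [
    (31.5069, 34.4560, -70),
    (31.5102, 34.4631, -85),
    (31.5031, 34.4598, -90),
]

def estimate_position(towers):
    """Crude estimate: a signal-weighted centroid of the towers the phone can see."""
    weights = [10 ** (dbm / 20) for _, _, dbm in towers]  # stronger signal, larger weight
    total = sum(weights)
    lat = sum(t[0] * w for t, w in zip(towers, weights)) / total
    lon = sum(t[1] * w for t, w in zip(towers, weights)) / total
    return lat, lon

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate distance in meters between two nearby points."""
    dlat = (lat2 - lat1) * 111_320
    dlon = (lon2 - lon1) * 111_320 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon)

est_lat, est_lon = estimate_position(towers)
# The estimate is only as precise as the tower geometry allows: the plausible area
# is on the order of the spacing between towers, i.e. hundreds of meters or more.
uncertainty = max(distance_m(est_lat, est_lon, t[0], t[1]) for t in towers)
print(f"Estimate: {est_lat:.4f}, {est_lon:.4f} (plausible radius ~{uncertainty:.0f} m)")
```

Even in this idealized setup, the estimate only narrows the phone’s location to an area spanning hundreds of meters, far too imprecise to establish whether a given building or block has actually been vacated.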

Tools that use cell tower triangulation data to calculate civilian presence as a way of informing decisions related to military operations increase the risk of civilian harm in those operations. The tools could lead military commanders to conclude wrongly that there were no or few civilians present in a certain area and that this area could therefore be attacked.

  4. What else is known about the data the evacuation monitoring system uses?

In May 2024, Human Rights Watch discovered data posted publicly online by the Israeli military, apparently erroneously, that included what appears to be operational data related to systems used for monitoring the evacuation and movement of people through Gaza, as well as for projecting the likely civilian harm that would be caused by attacks in particular areas.

This data was included in the source code of the Israeli military’s evacuation information website. For each of the 620 blocks dividing the Gaza Strip, the data contained population figures consistent with 10-year-old census data from Gaza, disaggregated population data, information about civilian population movements and the Israeli military’s presence in Gaza, and a cumulative number of attacks. The data also included personal information: the surnames of the most populous extended families in each block.
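
As a purely hypothetical illustration of how per-block information of the kind described above could be structured, the sketch below uses invented field names and placeholder values; it does not reproduce the published data or its schema:

```python
# Hypothetical per-block record; every field name and value is invented for illustration
# and does not reproduce the data Human Rights Watch found in the website's source code.
block_record = {
    "block_id": 0,                      # one of the 620 blocks dividing the Gaza Strip
    "census_population": 0,             # placeholder; figures reportedly consistent with census data
    "estimated_present_population": 0,  # placeholder; reportedly tied to movement monitoring
    "military_presence": False,         # whether Israeli forces were recorded in the block
    "cumulative_attacks": 0,            # running count of attacks recorded for the block
    "largest_families": ["<surname>"],  # placeholder for the personal data that was included
}
```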

Human Rights Watch analyzed and mapped the data related to Israel’s military presence and found it to be consistent with the state of the Israeli incursion into Gaza in mid-November 2023, when Israeli forces controlled most of northern Gaza and had not yet entered Khan Yunis in the south. While Human Rights Watch could not definitively confirm the origin and use of the information published online, it resembles the data that media reports describe as being used by the evacuation monitoring system and for planning military operations. The evacuation monitoring system may also rely on other sources of data. Human Rights Watch notified the Privacy Protection Authority, Israel’s data protection authority, about the publication of this data.

The destruction after an Israeli strike on residential buildings and a mosque in Rafah, Gaza Strip, February 22, 2024. © 2024 Fatima Shbair/AP Photo

  5. What is “Lavender” and how does it work?

Lavender uses machine learning to assign residents of Gaza a numerical score relating to the suspected likelihood that a person is a member of an armed group. Based on reports, Israeli military officers are responsible for setting the threshold beyond which an individual can be marked as a target subject to attack.

The military has acknowledged the existence of a tool like Lavender in very general terms, calling it a “database whose purpose is to cross-reference intelligence sources.”

A presentation and book by two Israeli officers provide important technical details. In a February 2023 presentation, the head of AI and data science at Unit 8200, the military’s signals intelligence division, described a digital tool that uses machine learning “for finding new terrorists.” The system, which the officer says was first used in Gaza in 2021, compiles surveillance data to rate people based on their suspected likelihood of association with a militant group.

The tool described in the presentation, which Human Rights Watch believes to be either Lavender or another tool that relies on the same underlying technology, involves a type of semi-supervised machine learning called “positive unlabeled learning,” which trains an algorithm on a dataset containing both labeled (positive) examples and unlabeled data. This type of machine learning uses characteristics of the labeled data to try to identify patterns in the larger dataset. In this case, the algorithm looks for characteristics in surveillance and other data about individuals whom the Israeli military suspects of having an affiliation with a militant group and then uses those same characteristics to identify more suspected people out of the general population. Because many data points would be unconfirmed if this technique were applied to a large population, this process relies to a large degree on unsubstantiated guesswork.
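
As a generic sketch of how positive unlabeled learning works in principle, and not of Lavender itself, whose internals are not public, a common two-step approach treats unlabeled records as provisional negatives, fits a classifier, and then scores everyone; the data and features below are random placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented dataset: each row is a person, each column a surveillance-derived feature.
# The values are random placeholders with no real-world meaning.
n_people, n_features = 1000, 8
X = rng.normal(size=(n_people, n_features))

# Positive-unlabeled setup: a small set of records is labeled "positive" by analysts;
# everyone else is unlabeled, and this naive approach provisionally treats them as negative.
y = np.zeros(n_people)
y[rng.choice(n_people, size=30, replace=False)] = 1

# Step 1: fit a classifier on positives vs. unlabeled-treated-as-negative.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: score the entire population; a higher score means "more similar to the positives."
scores = clf.predict_proba(X)[:, 1]

# An operator-chosen threshold turns similarity scores into a binary flag.
threshold = 0.5
print(f"People flagged at threshold {threshold}: {(scores >= threshold).sum()}")
```

The limitation is inherent to the method: the model can only learn what the labeled examples happen to have in common, so anyone in the unlabeled population who shares those incidental characteristics can receive a high score without any confirmed link to an armed group.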

  6. On which grounds does Lavender assign suspicion?

Without access to Lavender, it is impossible to know fully which data points the tool uses to raise someone’s suspicion score. In general, semi-supervised machine learning relies on an algorithm to process large quantities of data and then establish which types of data are useful for carrying out a given task. A 2021 book on military uses of AI by Yossi Sariel, the head of Unit 8200, may provide some insight. Sariel describes a “target machine” with characteristics very similar to Lavender’s that can be used to identify potential targets by collecting and analyzing their social connections, much like a social media platform does.

Qualities described by Sariel that could contribute to someone being assigned a higher degree of suspicion include their social connections or associations, membership in a chat group with someone the Israeli military already believes to be associated with a militant group, or even merely changing phones or addresses repeatedly.
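
To show in general terms how social-connection data of this kind can be converted into numeric inputs for a scoring system, the sketch below counts a person’s contacts and how many of them are already flagged; the graph, names, and watchlist are invented, and this is not a description of the military’s actual method:

```python
# Invented contact graph: person -> people they have communicated with.
contacts = {
    "person_a": {"person_b", "person_c"},
    "person_b": {"person_a", "person_d"},
    "person_c": {"person_a"},
    "person_d": {"person_b"},
}

# Hypothetical set of people the system already treats as suspect.
watchlist = {"person_d"}

def connection_features(person):
    """Features a scoring model might consume: contact count and flagged contacts."""
    neighbors = contacts.get(person, set())
    return {
        "num_contacts": len(neighbors),
        "flagged_contacts": len(neighbors & watchlist),
    }

print(connection_features("person_b"))  # {'num_contacts': 2, 'flagged_contacts': 1}
```

Reducing associations to counts like these makes the underlying problem visible: being in a chat group with, or having once called, a flagged person is treated as evidence about an individual, even though such connections are commonplace and say nothing about whether the person is a civilian.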

  7. Why are Lavender’s ratings problematic and risky?

Lavender’s ratings have serious flaws that could put civilians at grave risk during armed conflict. Positive unlabeled learning (see question 5) is not an adequate basis for decisions about identifying lawful military targets. The assumptions that lead the tool to assign suspicion are not rooted in international humanitarian law but rather in criteria developed by an algorithm, based on data that is likely biased and incomplete, and that are technically impossible to scrutinize.

This reliance on likely flawed assumptions to inform military decisions could lead to civilians being erroneously targeted. Lavender compiles data on the behavior and connections of large numbers of individuals to assign them a suspicion score. As with predictive policing tools, this amounts to presuming the guilt of the general population: surveillance data is processed on individuals who are not suspected of having committed any crime, violating their right to the presumption of innocence. In a wartime context, this means creating presumptions about people’s status as military objectives subject to lawful attack, whereas the laws of war require that, in case of doubt, a person be presumed to be a civilian. Furthermore, semi-supervised machine learning can produce false positives, identifying people as suspicious based on qualities or behaviors that are not in any way illegal, threatening, or problematic.

The authorities’ use of broad definitions of terrorism and terrorist groups, and their reliance on broad provisions of Israeli military law in the Occupied Palestinian Territory to ban associations as “hostile organizations,” raise additional concerns. In the West Bank, for instance, the military has banned Palestinian human rights organizations as “terrorist groups” and detained Palestinians for mere membership in, or identification with, such groups or entities affiliated with them.

If similarly broad definitions were used in the training of machine learning tools like Lavender that are intended to inform targeting decisions, the outputs would reflect the same biases and could increase the possibility of civilians being targeted for attack.

  8. What is “The Gospel,” and how does it work?

“The Gospel” uses an algorithm to process surveillance data to generate lists of targets. Based on media reports, The Gospel identifies four categories of nonhuman targets: military targets; underground targets, such as tunnels; family homes of suspected militants; and “power targets,” which are civilian structures that are attacked with the stated goal, according to current and former intelligence analysts quoted in media reports, of “creat[ing] a shock” that will “lead civilians to put pressure on Hamas.” Articles on the Israeli forces’ website from 2020 and 2023 describe an algorithm-based tool closely resembling The Gospel, with the latter mentioning the tool by name.

There is less information available about how The Gospel functions than about Lavender, but it likely also uses positive unlabeled learning to make its determinations. The Gospel is, in effect, being used to determine which structures are civilian objects and which are military objectives, a determination of distinction that attackers are required to make under the laws of war.

  9. What is “Where’s Daddy?” and how does it work?

Based on media reports, “Where’s Daddy” is a tool that uses mobile phone location tracking to notify Israeli military operators when people who have been marked as military targets enter a specific location – according to reports, often a family home – where they can be attacked. Human Rights Watch was able to confirm that the Israeli military is using a tool with this functionality, but could not confirm any other specific technical details.

  10. Is mobile phone location data alone sufficiently accurate for targeted strikes?

Mobile phone location data is not sufficiently accurate to establish that a specific person is in a specific place at a specific time, and could lead to deadly errors if used to inform military attacks. A mobile phone is not a reliable proxy for a human target: people, especially in conflict zones, may change numbers or devices, and access to networks and devices can change frequently and suddenly during an emergency.

In addition, the most common methods for tracking a cell phone’s location are triangulating cell tower connections, known as cell site location information, or gaining access to a mobile phone’s GPS data. Neither of these methods alone is accurate enough to determine a mobile phone’s location for precision attacks, and their use in targeting decisions raises concerns about a failure to take all feasible precautions to avoid civilian harm, as required by the laws of war.
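
As a rough, back-of-the-envelope comparison, the sketch below shows how many building footprints a single location fix can plausibly cover; the accuracy figures are generic, order-of-magnitude assumptions about these positioning methods, not measurements from Gaza:

```python
import math

# Assumed, order-of-magnitude uncertainty radii for common positioning methods.
accuracy_radius_m = {
    "gps_open_sky": 10,              # assumed: good conditions, clear view of the sky
    "gps_degraded_or_indoors": 50,   # assumed: indoors, urban canyons, damaged networks
    "cell_site_location_info": 500,  # assumed: tower-based positioning in a dense area
}

building_footprint_m2 = 15 * 15  # assumed footprint of a small residential building

for method, radius in accuracy_radius_m.items():
    area = math.pi * radius ** 2
    print(f"{method}: ~{area:,.0f} m^2 of uncertainty, "
          f"roughly {area / building_footprint_m2:,.0f} building footprints")
```

Under these assumptions, even a comparatively good fix is consistent with more than one building, and a tower-based fix with thousands, meaning a phone signal alone cannot establish which structure, let alone which person, is at a given location.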

  11. What are the limitations of systems that rely on big data, machine learning, and AI?

These systems are limited in several ways, described below, including by the assumptions built into them and by a tendency to reproduce society’s biases.

“Black box effect”: Machine learning, AI, and other algorithmic systems are, by design, difficult to scrutinize: they do not show their work or the processes by which outputs are generated, and they lack documentation that would support apportioning responsibility to relevant actors. It is important for digital tools to allow users to check the systems’ work and to see how outputs were reached and on which data. This is especially important for weapons or forms of target generation that use AI.

“Automation bias”: Automation bias occurs when people put excessive trust in the outputs of digital tools, in part because of the perception that machines are more neutral than human beings. Studies have found that the use of digital tools, including in military applications, can lead to reduced scrutiny of their outputs, and even to continued trust in, and reliance on, automated outputs in the face of contradictory information.

Problematic assumptions: All digital systems rely on core assumptions to work; these are the premises on which calculations are based (for a TV series recommendation tool, one such assumption might be that if someone often watches science fiction series, the tool should recommend them more science fiction). While developers set the instructions and parameters for machine learning and AI-based systems, the assumptions themselves are not explicitly written by programmers but are developed by algorithms from the initial training data. Tools based on this technology process large quantities of data with a specific output or goal in mind and then draw their own conclusions. (Still, it is the owners, developers, and users of this technology who ultimately bear responsibility for how algorithm-based systems are used.) This method may work for low-stakes applications like TV recommendations, but it raises grave concerns when used in military contexts, where there are life-and-death consequences.
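
A deliberately trivial sketch, in the spirit of the TV-recommendation example above (the viewing history and genres are invented), shows the difference between an assumption a programmer writes down and one a system derives from data:

```python
from collections import Counter

# Invented viewing history.
watched = ["sci-fi", "sci-fi", "drama", "sci-fi", "comedy"]

# Hand-written rule: the assumption is explicit, readable, and open to challenge.
def recommend_explicit(history):
    return "sci-fi" if history.count("sci-fi") >= 3 else "drama"

# Data-derived rule: the assumption (recommend whatever genre dominates the history)
# emerges from the data itself rather than from a line anyone wrote or can point to.
def recommend_learned(history):
    return Counter(history).most_common(1)[0][0]

print(recommend_explicit(watched), recommend_learned(watched))  # sci-fi sci-fi
```

When the stakes are a film suggestion, a hidden or mistaken assumption is trivial; when the same logic, scaled up with machine learning, informs who or what may be attacked, the opacity of those assumptions becomes a matter of life and death.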

Reproducing society’s biases: While the output of algorithmic systems might seem neutral, machines are likely to reflect the biases of their programmers and of society. This risk is especially high in the development and use of digital tools that inform decisions about whether people or objects can be attacked as military targets. In the context of the Israeli military and the hostilities in the Occupied Palestinian Territory, for example, the International Court of Justice (ICJ) found in July 2024 that Israel perpetrates discrimination and apartheid against Palestinians. Digital tools developed and used by the Israeli military for targeting people in the occupied territory could reflect such systemic discrimination and biases.

“Garbage in, garbage out”: Flawed data make for flawed results. If a digital system is built on data that is inaccurate, dated, incomplete, or not fully representative of the context in which it is seeking to operate, its outputs will be similarly flawed.

  12. What new risks and challenges do data intensive digital tools present in military contexts?

Digital tools used in a military context create new risks and challenges: they rely on invasive surveillance, contribute to digital dehumanization, invite overreliance by militaries, and can increase the tempo of war.

Reliance on invasive surveillance: Tools that employ machine learning and AI rely on enormous quantities of data to produce outputs such as threat predictions and target identification. Palestinians in Gaza and the West Bank are under constant, pervasive surveillance by the Israeli authorities, and some of this data is now being used to inform military operations and planning in Gaza.

Prior to the current hostilities, Israel as an occupying power maintained significant control over aspects of life in Gaza, creating obligations under both international human rights law as well as international humanitarian law to ensure the welfare of the population. Article 17 of the International Covenant on Civil and Political Rights (ICCPR), to which Israel is party, affirms the right to privacy, which may not be subject to arbitrary or unlawful interference.

The United Nations Human Rights Committee, the international expert body that authoritatively interprets the ICCPR, has held that “any interference with privacy must be proportional to the end sought and be necessary in the circumstances of any given case.” Governments are obligated to respect the privacy rights of individuals, regardless of their nationality or location, given that digital technology enables far-reaching extraterritorial surveillance.

The Israeli government has obtained and stored data from Gaza in ways that appear to violate the internationally protected right to privacy and other human rights. This is particularly problematic as Israel’s surveillance of Palestinians in Gaza is inextricably tied to a system of rights violations that amount to a policy of apartheid.

During armed conflict, human rights law needs to be interpreted against the backdrop of international humanitarian law. Thus, what amounts to “arbitrary or unlawful” interference with privacy prohibited by Article 17 of the ICCPR may be different when taking into account the laws of war, which is the controlling body of law (“lex specialis”) after hostilities are triggered.

Digital dehumanization: Automated systems reduce humans to a series of data points, to be processed by systems that inherently have bias and limitations based on their programming. Once human beings are reduced to data points, surveilled, sorted, and categorized, it may make it easier to decide who should be subjected to harm, including targeting for lethal action, and to carry out those actions. This trajectory from automated surveillance to harm is called “digital dehumanization.”

Overreliance on digital tools: In a fast-paced, stressful environment, or where there is pressure to generate large numbers of targets in a short period of time, such as during armed conflict, there is a tendency to over-rely on digital systems that can produce outputs quickly. The International Committee of the Red Cross (ICRC) has identified this as a risk of big data, machine learning, and AI-based systems that are used to inform military decisions, noting that it could facilitate error-prone decisions, contribute to violations of the laws of war, and increase the likelihood of civilian harm.

Increasing the tempo of war: The ICRC has also identified the ability of digital systems to speed up the tempo of war as a risk factor for civilians, especially if increased operational speed prevents thorough analysis and scrutiny of the information sources used in military planning. Israeli military officers have said publicly that these digital tools for target generation have allowed officers to generate in a matter of days the same number of targets that, before the use of these tools, would have taken a year to produce.

  13. Are the digital tools used by the Israeli military autonomous weapon systems?

The Israeli military’s digital tools are not autonomous weapon systems: weapons that select and engage targets based on sensor processing rather than human inputs, and for which Human Rights Watch has long sought a treaty ban. Instead, they are data processing systems that provide information to Israeli military planners. The decision about whether to attack a recommended target is made and carried out separately by a human and not a machine. Each digital tool requires human input and oversight.

But a UN Human Rights Council resolution adopted in October 2022 stresses the central importance of human decision-making in the use of force. It warns against relying on nonrepresentative data sets, algorithm-based programming, and machine-learning processes. Such technologies can reproduce and exacerbate existing patterns of discrimination, marginalization, social inequalities, stereotypes, and bias, with unpredictable outcomes.

The Israeli military’s digital tools present similar problems to autonomous weapons systems. They operate in ways that are difficult or, in the case of the machine learning algorithms used by Lavender and The Gospel, impossible to check, source, or verify. At the same time, they encourage overreliance by appearing to be accurate and unbiased.

  14. How does international humanitarian law apply to the military use of digital tools?

Digital tools are not military weapons. However, their use in military operations subjects them to the restrictions of international humanitarian law. As the ICRC has noted, “Any new technology of warfare must be used, and must be capable of being used, in compliance with existing rules of international humanitarian law.”

Lavender and The Gospel raise important – and potentially deadly – issues with respect to military targeting, the determination as to whether an intended target can be lawfully attacked and under what circumstances. Two general rules found in international treaty law and customary law are most pertinent: 1) the requirement that attacking forces distinguish between military objectives and civilians and civilian objects, and only attack the former; and 2) the requirement that attacking forces take all feasible precautions to minimize incidental loss of civilian life and damage to civilian objects.

Lavender and The Gospel rely on machine learning to distinguish between military objectives and civilians and civilian objects. If the recommendations or assessments of these digital tools are acted upon without sufficient scrutiny or additional information, as has been reported, and the resulting attacks cause civilian harm, Israeli forces would be violating the laws of war, and such attacks could amount to war crimes.

The Where’s Daddy? tool purportedly uses mobile phone location tracking to notify military operators when people who have been marked as military targets enter a specific location, such as a family home. The laws of war do not prohibit attacks on valid military targets, such as military commanders, when they are at home, but all other legal requirements still apply. It is not clear that these other requirements – such as whether an attack would cause civilian harm disproportionate to the anticipated military advantage – are being considered.

These tools also raise concerns when standards not found in international law are applied. Using machine learning to label someone a “terrorist” – a term not defined under international human rights or humanitarian law – may subject them to arrest or lethal attack without meeting the evidentiary requirements that international law demands for criminal wrongdoing. So-called power targets – large civilian structures reportedly identified by The Gospel for attack because of the presumed psychological impact of destroying them – are not an accepted concept under international humanitarian law. Attacking them would still require meeting the requirements of distinction, proportionality, and precaution.
