
Expert Panel on the Social and Humanitarian Impact of Autonomous Weapons at the Latin American and Caribbean Conference on Autonomous Weapons

Delivered by Bonnie Docherty, Senior Researcher

Thank you to Costa Rica and FUNPADEM for organizing this important conference. I will address some of the social and humanitarian consequences of autonomous weapons systems. By autonomous weapons systems, I am referring to systems that select and engage targets based on sensor processing rather than human inputs. I will focus my remarks on the moral and legal problems posed by two particular types of autonomous weapons systems: those that target people and those that apply force without meaningful human control. After elaborating on the threats these systems raise, I will explain how they can be tackled in a new legally binding instrument.

Systems that Target People

When autonomous weapons systems are used to target people, they raise several related social concerns. As inanimate objects, machines cannot understand the value of human life or the significance of its loss. Allowing machines to make life-and-death determinations thus strips the people being targeted of their human dignity. On a related note, in the process of determining whom to kill, autonomous weapons systems reduce human targets to data points. Christof Heyns, then special rapporteur on extrajudicial killing, wrote:

"To allow machines to determine when and where to use force against humans is to reduce those humans to objects…. They become zeros and ones in the digital scopes of weapons which are programmed in advance to release force without the ability to consider whether there is no other way out, without a sufficient level of deliberate human choice about the matter.[1]"

These systems therefore dehumanize the use of force in dangerous ways.

There is also the risk of discrimination. Autonomous weapons systems choose whom to engage based on so-called “target profiles.” These profiles consist of criteria such as weight or heat signature. While such criteria may seem neutral, evidence of algorithmic bias in numerous contexts shows that, whatever the criteria, machines are likely to reflect the biases of their programmers and of society. The use of artificial intelligence (AI) in other fields has provided widespread evidence of these risks. To make matters worse, autonomous weapons systems could be programmed with intentionally discriminatory target profiles, such as skin color, or the presence of a beard as a marker of men of military age.

Loss of dignity, dehumanization, and discrimination are all consequences suggesting that autonomous weapons systems that target people are morally and ethically unacceptable. They also run afoul of international human rights law. The principle of human dignity underlies all human rights. The International Covenant on Civil and Political Rights (ICCPR) states in its preamble that the rights it contains “derive from the inherent dignity of the human person.”

Non-discrimination is also a fundamental principle of international human rights law. It is articulated in the Universal Declaration of Human Rights. The ICCPR and later treaties and documents enumerate prohibited grounds of discrimination, including, to name a few, race, religion, sex, sexual orientation or gender identity, and disability.

Systems that Lack Meaningful Human Control

The second category of especially problematic autonomous weapons systems that I wish to highlight is systems that, by their nature, select and engage targets without meaningful human control. This lack of human control has significant humanitarian consequences.

During armed conflict, it poses grave dangers to civilians and other non-combatants, and in the process presents obstacles to compliance with international humanitarian law. For example, it would be difficult for autonomous weapons systems to distinguish between civilians and combatants, as well as between those wounded or surrendering (hors de combat) and combatants still fighting. Unlike human soldiers, autonomous weapons systems cannot relate to the people they encounter, and thus may be unable to understand subtle cues, like tone of voice or facial expressions, that could help them make the distinction. They may thus kill or injure civilians or those hors de combat, who are not legitimate targets under the laws of war.

Even if technology could overcome that challenge, autonomous weapons systems would still face difficulty complying with the rule of proportionality. Determining whether expected civilian harm outweighs anticipated military advantage is a subjective test that relies on human judgment at the time of attack, not on an algorithm. It is governed by a reasonable commander standard that draws on human reason, past experience, ethical assessments, and common sense. Such judgment would be difficult to replicate in a machine, and machines cannot be pre-programmed to handle the infinite number of rapidly changing scenarios they might encounter. These challenges will make war more dangerous for civilians and soldiers alike.

Autonomous weapons systems are also likely to be used in law enforcement operations, and the social and humanitarian consequences there are of equal concern. In this case, the relevant law is international human rights law. Numerous rights could be violated. I’ve already addressed dignity and non-discrimination. I will highlight two more.

The first is the right to life, or more precisely the right not to be arbitrarily deprived of life. Under this right, force may be applied only if it is necessary, a last resort, and proportionate. Weapons that operate without meaningful human control face challenges complying with all three parts of that test. A machine could find it difficult to determine whether force was necessary. It might be unable to assess accurately whether a person posed a genuine threat because it could not read subtle cues in the person it was targeting, as I discussed earlier with regard to international humanitarian law’s rule of distinction. And while a human law enforcement officer may be able to avoid force altogether by negotiating with a person perceived as a threat and defusing the situation, an autonomous weapons system could not; people would also be less likely to surrender to a machine. Finally, for reasons already discussed, autonomous weapons systems lack the human judgment needed for proportionality assessments.

The second right implicated is the right to a remedy. International human rights law guarantees individuals both individual accountability and reparations for serious violations of human rights or international humanitarian law. In both armed conflict and law enforcement operations, however, there is an accountability gap for the harm caused by autonomous weapons systems. There are significant obstacles to establishing individual responsibility for operators: it is legally challenging, and arguably unfair, to hold human operators criminally responsible for the actions of autonomous weapons systems they could not predict or control. There are also numerous obstacles to holding weapons manufacturers liable under civil law.

A New Legally Binding Instrument

It is important to understand the social and humanitarian consequences of different types of autonomous weapons systems because doing so can help us determine how best to address them. A large number of countries, including many in this room, have called for a new legally binding instrument with prohibitions and regulations that encompasses all or parts of what I will propose.

Specifically, the prohibitions should address the two main threats I have discussed. A new treaty should prohibit the development, production, and use of autonomous weapons systems that target people, in order to prevent the use of weapons systems that strip people of their dignity, dehumanize the use of force, or lead to discrimination. It should also prohibit the development, production, and use of systems that inherently lack meaningful human control over the use of force. This prohibition would help protect civilians and other non-combatants in armed conflict and reduce infringements of human rights during law enforcement operations. The treaty should also include regulations (positive obligations) to ensure that all other autonomous weapons systems are never used without meaningful human control.

The concept of meaningful human control should encompass a combination of components, including but not necessarily limited to:

  • Decision-making components, for example, the ability to understand how the system works,
  • Technological components, including predictability and reliability, and
  • Operational components, notably restrictions on the time and space in which the system operates.

A legally binding instrument, rather than a political declaration or other non-binding instrument, is essential. As discussed, autonomous weapons systems are a grave problem that can affect any country in the world, so clear, strong, and global rules are important. Those rules should be legally binding to promote compliance among states that join the treaty. Experience shows that a legally binding instrument can also influence states not party, and even non-state armed groups, through norm-building and the stigmatization of the most problematic weapons.

Negotiating Forums

In conclusion, the social and humanitarian impacts I have described are urgent as well as grave. Technology is advancing rapidly, and autonomy is playing an increasing role in the use of force. I therefore urge states to initiate negotiations on a new treaty as soon as possible. While the Convention on Conventional Weapons (CCW) has over the years provided a forum for useful discussions and for building support for a legally binding instrument, there is no indication that negotiation of a new instrument will be possible there.

It is time to step outside of that forum to one that can aim higher, move faster, and be more inclusive of countries that are not party to the CCW as well as of international organizations and civil society. Disarmament precedent shows that stand-alone and UN General Assembly-initiated processes are viable options in which committed, like-minded states can produce strong treaties in 15 months or less.[2]

I thank Costa Rica again for organizing this conference, and I look forward to continued leadership from Costa Rica and the other countries in the region on this issue.


[1] Christof Heyns, “Autonomous Weapon Systems: Human Rights and Ethical Issues” (presentation to the Convention on Conventional Weapons Meeting of Experts on Lethal Autonomous Weapon Systems, April 14, 2016) (quoted in Human Rights Watch and Harvard Law School International Human Rights Clinic, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots (August 2018)), https://www.hrw.org/sites/default/files/report_pdf/arms0818_web.pdf, p. 26.


[2] Human Rights Watch and Harvard Law School International Human Rights Clinic, Agenda for Action: Alternative Processes for Negotiating a Killer Robots Treaty (November 2022), https://www.hrw.org/report/2022/11/10/agenda-action/alternative-processes-negotiating-killer-robots-treaty.

