Summary
The debate about fully autonomous weapons has continued to intensify since the issue reached the international stage four years ago.[1] Lawyers, ethicists, military personnel, human rights advocates, scientists, and diplomats have argued, in a range of venues, about the legality and desirability of weapons that would select and engage targets without meaningful human control over individual attacks. Divergent views remain as military technology moves toward ever greater autonomy, but there are mounting expressions of concern about how these weapons could revolutionize warfare as we know it. This report seeks to inform and advance this debate by further elaborating on the dangers of fully autonomous weapons and making the case for a preemptive ban.
In December 2016, states parties to the Convention on Conventional Weapons (CCW) will convene in Geneva for the treaty’s Fifth Review Conference and decide on future measures to address “lethal autonomous weapons systems” (LAWS), their term for these weapons. Spurred to act by the efforts of the Campaign to Stop Killer Robots, CCW states have held three informal meetings of experts on LAWS since 2014. At the Review Conference, states parties should agree to establish a Group of Governmental Experts. The formation of this formal body would compel states to move beyond talk and create the expectation of an outcome. That outcome should be a legally binding prohibition on fully autonomous weapons.
To build support for a ban, this report responds to critics who have defended the developing technology and challenged the call for a preemptive prohibition. The report identifies 16 of the critics’ key contentions and provides a detailed rebuttal of each. It draws on extensive research into the arguments on all sides. In particular, it examines academic publications, diplomatic statements, public surveys, UN reports, and international law.
The report updates a May 2014 paper, entitled “Advancing the Debate on Killer Robots,” and expands it to address new issues that have surfaced over the past two years.[2] In the process, the report illuminates the major threats posed by fully autonomous weapons and explains the advantages and feasibility of a ban.
The first chapter of this report elaborates on the legal and non-legal dangers posed by fully autonomous weapons. The weapons would face significant obstacles to complying with international humanitarian and human rights law and would create a gap in accountability. In addition, the prospect of weapons that could make life-and-death decisions generates moral outrage, and even the expected military advantages of the weapons could create unjustifiable risks.
The second chapter makes the case for a preemptive prohibition on the development, production, and use of fully autonomous weapons. Of the many alternatives proposed, only an absolute ban could effectively address all the concerns laid out in the first chapter. The ban should be adopted as soon as possible, before this revolutionary and dangerous technology enters military arsenals. Precedent from past disarmament negotiations and instruments shows that the prohibition is achievable and would be effective.
Recommendations
In light of the dangers posed by fully autonomous weapons and the inability to address these dangers other than with a ban, Human Rights Watch and the International Human Rights Clinic (IHRC) at Harvard Law School call on states to:
- Adopt an international, legally binding instrument that prohibits the development, production, and use of fully autonomous weapons;
- Adopt national laws or policies that establish prohibitions on the development, production, and use of fully autonomous weapons; and
- Pursue formal discussions under the auspices of the CCW, beginning with the formation of a Group of Governmental Experts, to discuss the parameters of a possible protocol with the ultimate aim of adopting a ban.
I. The Dangers of Fully Autonomous Weapons
Fully autonomous weapons raise a host of concerns. It would be difficult for them to comply with international law, and their ability to act autonomously would interfere with legal accountability. The weapons would also cross a moral threshold, and their humanitarian and security risks would outweigh possible military benefits. Critics who dismiss these concerns depend on speculative arguments about the future of technology and the false presumption that technological developments can address all of the dangers posed by the weapons.
Legal Dangers
Contention #1: Fully autonomous weapons could eventually comply with international humanitarian law, notably the core principles of distinction and proportionality.
Rebuttal: The difficulty of programming human traits such as reason and judgment into machines means that fully autonomous weapons would likely be unable to comply reliably with international humanitarian law.
Analysis: Some critics contend that fully autonomous weapons could comply with the core principles of distinction and proportionality, at some point in the future. They argue that advocates of a ban often “fail to take account of likely developments in autonomous weapon systems technology.”[3] According to the critics, not only has military technology “advanced well beyond simply being able to spot an individual or object,” but improvements in artificial intelligence will probably also continue.[4] Thus, while recognizing the existence of “outstanding issues” and “daunting problems,”[5] critics are content with the belief that solutions are “theoretically achievable.”[6] Proceeding on an assumption that such weapons could one day conform to the international humanitarian law requirements of distinction and proportionality, however, is unwise.
Difficulties with Distinction
Fully autonomous weapons would face great, if not insurmountable, difficulties in reliably distinguishing between lawful and unlawful targets as required by international humanitarian law.[7] Although progress is likely in the development of sensory and processing capabilities, distinguishing an active combatant from a civilian or an injured or surrendering soldier requires more than such capabilities. It also depends on the qualitative ability to gauge human intention, which involves interpreting the meaning of subtle cues, such as tone of voice, facial expressions, or body language, in a specific context. Humans possess the unique capacity to identify with other human beings and are thus equipped to understand the nuances of unforeseen behavior in ways that machines, which must be programmed in advance, simply cannot. Replicating human judgment in determinations of distinction—particularly on contemporary battlefields where combatants often seek to conceal their identities—is a difficult problem, and it is not credible to assume a solution will be found.
Obstacles to Determining Proportionality
The obstacles to fully autonomous weapons complying with the principle of distinction would be compounded for proportionality, which requires the delicate balancing of two factors: expected civilian harm and anticipated military advantage. Determinations of proportionality take place not only in developing an overall battle plan, but also during actual military operations, when decisions must be made about the course or cessation of any particular attack. One critic concludes that there “is no question that autonomous weapon systems could be programmed … to determine the likelihood of harm to civilians in the target area.”[8] While acknowledging that “it is unlikely in the near future that … ‘machines’ will be programmable to perform robust assessments of a strike’s likely military advantage,” he contends that “military advantage algorithms could in theory be programmed into autonomous weapon systems.”[9]
There are a number of reasons to doubt each of these conclusions. As already discussed, it is highly questionable whether a fully autonomous weapon could ever reliably distinguish legitimate from illegitimate targets. When assessing proportionality, it is not only the legitimacy of the target that is in question, but also the expected civilian harm—a calculation that requires determining the status of, and an attack’s impact on, all entities and objects surrounding the target.
When it comes to predicting anticipated military advantage, even critics admit that “doing so will be challenging [for a machine] because military advantage determinations are always contextual.”[10] Military advantage must be determined on a “case-by-case” basis, and a programmer could not account in advance for the infinite number of unforeseeable contingencies that may arise in a deployment.[11]
Even if the elements of military advantage and expected civilian harm could be adequately quantified by a fully autonomous weapon, it would be unlikely to be able qualitatively to balance them. The generally accepted standard for assessing proportionality is whether a “reasonable military commander” would have launched a particular attack.[12] In evaluating the proportionality of an attack by a fully autonomous weapon, the appropriate question would be whether the weapon system made a reasonable targeting determination at the time of its strike.
While some critics focus on the human commander’s action ahead of the strike,[13] the proportionality of any particular attack depends on conditions at the time of the attack, and not at the moment of design or deployment of a weapon. A commander weighing proportionality at the deployment stage would have to rely on the programmer’s and manufacturer’s predictions of how a fully autonomous weapon would perform in a future attack. No matter how much care was taken, a programmer or manufacturer would be unlikely accurately to anticipate a machine’s reaction to shifting and unforeseeable conditions in every scenario. The decision to deploy a fully autonomous weapon is not equivalent to the decision to attack, and at the moment of making a determination to attack, such a weapon would not only be out of the control of a human being exercising his or her own judgment, but also unable to exercise genuine human judgment itself (see Contention #12).
It would be difficult to create machines that could meet the reasonable military commander standard and be expected to act “reasonably” when making determinations to attack in unforeseen or changeable circumstances. According to the Max Planck Encyclopedia of Public International Law, “[t]he concept of reasonableness exhibits an important link with human reason,” and it is “generally perceived as opening the door to several ethical or moral, rather than legal, considerations.”[14] Two critics of the proposed ban treaty note that “[p]roportionality … is partly a technical issue of designing systems capable of measuring predicted civilian harm, but also partly an ethical issue of attaching weights to the variables at stake.”[15] Many people would object to the idea that machines could or should be making ethical or moral determinations (see Contention #6). Yet this is precisely what the reasonable military commander standard requires. Moreover, reasonableness eludes “objective definition” and depends on the situation.[16]
Proportionality analyses allow for a “fairly broad margin of judgment,”[17] but the sort of judgment required in deciding how to weigh civilian harm and military advantage in unanticipated situations would be difficult to replicate in machines. As Christof Heyns, then UN special rapporteur on extrajudicial, summary or arbitrary executions, explained in his 2013 report, assessing proportionality requires “distinctively human judgement.”[18] According to the International Committee of the Red Cross (ICRC), judgments about whether a particular attack is proportionate “must above all be a question of common sense and good faith,” characteristics that many would agree machines cannot possess, however thorough their programming.[19]
While the capabilities of future technology are uncertain, it seems highly unlikely that it could ever replicate the full range of inherently human characteristics necessary to comply with the rules of distinction and proportionality. Adherence to international humanitarian law requires the qualitative application of judgment to what one scientist describes as an “almost indefinite combination of contingencies.”[20] Some experts “question whether artificial intelligence, which always seems just a few years away, will ever work well enough.”[21]
Contention #2: The use of fully autonomous weapons could be limited to specific situations where the weapons would be able to comply with international humanitarian law.
Rebuttal: Narrowly constructed hypothetical cases in which fully autonomous weapons could lawfully be used do not legitimize the weapons because they would likely be used more widely.
Analysis: Some critics, dismissing legal concerns about fully autonomous weapons, contend that their use could be restricted to specific situations where they would be able to conform to the requirements of international humanitarian law. These critics highlight the military utility and low risk to civilians of using the weapons in deserts for attacks on isolated military targets,[22] undersea in operations by robotic submarines,[23] in air space for intercepting rockets,[24] and for strikes on “nuclear-tipped mobile missile launchers, where millions of lives were at stake.”[25] These critics underestimate the threat to civilians once fully autonomous weapons enter military arsenals.
One can almost always describe a hypothetical situation where use of a widely condemned weapon could arguably comply with the general rules of international humanitarian law. Before the adoption of the Convention on Cluster Munitions, proponents of cluster munitions often maintained that the weapons could be lawfully launched on a military target alone in an otherwise unpopulated desert. Once weapons are produced and stockpiled, however, their use is rarely limited to such narrowly constructed scenarios. The widespread use of cluster munitions in populated areas, such as in Iraq in 2003 and Lebanon in 2006, exemplifies the reality of this problem.[26] Such theoretical possibilities do not, therefore, legitimize weapons, including fully autonomous ones, that pose significant humanitarian risks when used in less exceptional situations.
Contention #3: Concerns that no one could be held to account for attacks by fully autonomous weapons are of limited importance or could be adequately addressed through existing law.
Rebuttal: Insurmountable legal and practical obstacles would prevent holding anyone responsible for unlawful harms caused by fully autonomous weapons.
Analysis: Some critics argue that the question of accountability for the actions of fully autonomous weapons should not be part of the debate at all. In their view, it would be a mistake to “sacrifice real-world gains consisting of reduced battlefield harm through machine systems … simply in order to satisfy an a priori principle that there must always be a human to hold accountable.”[27] Other critics argue that the “mere fact that a human might not be in control of a particular engagement does not mean that no human is responsible for the actions of the autonomous weapon system.”[28] Accountability is more than what two critics called an “a priori principle,” however, and existing mechanisms for legal accountability are ill suited and inadequate to address the unlawful harms fully autonomous weapons would likely cause. These weapons have the potential to commit unlawful acts for which no one could be held responsible.[29]
Accountability serves multiple moral, social, and political purposes and is a legal obligation. From a policy perspective, it deters future violations, promotes respect for the law, and provides avenues of redress for victims. Redress can encompass retributive justice, which gives victims the satisfaction that someone was punished for the harm they endured, and compensatory justice, which seeks to restore victims to the condition they were in before the harm was inflicted.[30] International humanitarian law and international human rights law both require accountability for legal violations. International humanitarian law establishes a duty to prosecute criminal acts committed during armed conflict.[31] International human rights law establishes the right to a remedy for any abuses of human rights (see Contention #5). The value of accountability has been widely recognized, including by scholars and states.[32] Unfortunately, the actions of fully autonomous weapons would likely fall into an accountability gap.
Fully autonomous weapons could not be held responsible for their own unlawful acts. Any crime consists of two elements: an act and a mental state. A fully autonomous weapon could commit a criminal act (such as an act listed as an element of a war crime), but it would lack the mental state (often intent) to make these wrongful actions prosecutable crimes. In addition, a weapon would not fall within the natural person jurisdiction of international courts.[33] Even if such jurisdiction were expanded, fully autonomous weapons could not be punished because they would be machines that could not experience or comprehend the significance of suffering.[34] Merely altering the software of a “convicted” robot, unable to internalize moral guilt, would likely leave victims seeking retribution unsatisfied.[35]
In most cases, humans would also escape accountability for the unlawful acts of fully autonomous weapons. Humans could not be assigned direct responsibility for the wrongful actions of a fully autonomous weapon because such weapons, by definition, would have the capacity to act autonomously and could therefore independently and unforeseeably launch an indiscriminate attack against civilians or those hors de combat. In such situations, the commander would not be directly responsible for the robot’s specific actions since he or she did not order them. Similarly, a programmer or manufacturer could not be held directly criminally responsible if he or she did not specifically intend, or could not even foresee, the robot’s commission of wrongful acts. These individuals could be held directly responsible for a robot’s actions only if they deployed the robot intending to commit a crime, such as willfully killing civilians, or if they designed the robot specifically to commit criminal acts.
Significant obstacles would exist to finding the commander indirectly responsible for the unlawful acts of fully autonomous weapons under the doctrine of command responsibility. This doctrine holds superiors accountable if they knew or should have known of a subordinate’s criminal act and failed to prevent or punish it. The autonomous nature of these robots would make them legally analogous to human soldiers in some ways, and their actions could thus trigger the doctrine. The theory of command responsibility, however, sets a high bar for accountability. Command responsibility deals with prevention of a crime, not an accident or design defect, and robots would not have the mental state to make their unlawful acts criminal.
Regardless of whether the act amounted to a crime, given that these weapons would be designed to operate independently, a commander would not always have sufficient reason or technological knowledge to anticipate the robot would commit a specific unlawful act. Even if he or she knew of a possible unlawful act, the commander would often be unable to prevent the act, for example, if communications had broken down, the robot acted too fast to be stopped, or reprogramming was too difficult for all but specialists. Furthermore, as noted above, punishing a robot is not possible. In the end, fully autonomous weapons would not fit well into the scheme of criminal liability designed for humans, and their use would create the risk of unlawful acts and significant civilian harm for which no one could be held criminally responsible.
An alternative option would be to try to hold the programmer or manufacturer civilly liable for the unanticipated acts of a fully autonomous weapon. Civil liability can be a useful tool for providing compensation, some deterrence, and a sense of justice for those harmed even if it lacks the social condemnation associated with criminal responsibility. There are, however, significant practical and legal obstacles to holding either the programmer or manufacturer of a fully autonomous weapon civilly liable.
On a practical level, most victims would find suing a programmer or manufacturer difficult because their lawsuits would likely be expensive, time consuming, and dependent on the assistance of experts who could deal with the complex legal and technical issues implicated by the use of fully autonomous weapons.
Legal barriers to civil accountability may be even more imposing than practical ones. The doctrine of sovereign immunity protects governments from suits related to the acquisition or use of weaponry, especially in foreign combat situations.[36] For example, the US government is presumptively immune from civil suits.[37] Manufacturers contracted by the US military are in turn immune from suit when they design a weapon in accordance with government specifications and without deliberately misleading the military. These manufacturers are also immune from civil claims relating to acts committed during wartime. Even without these rules, a plaintiff would find it challenging to establish in law that a fully autonomous weapon was defective for the purposes of a product liability suit.[38]
A no-fault compensation scheme would not resolve the accountability gap. Such a scheme would require only proof of harm, not proof of defect.[39] Victims would thus be compensated for the harm they experienced from a fully autonomous weapon without having to overcome the evidentiary hurdles related to proving a defect. It is difficult to imagine, however, that many governments would be willing to put such a legal regime into place. Even if they did, compensating victims for harm is different from assigning legal responsibility, which establishes moral blame, provides deterrence and retribution, and recognizes victims as persons who have been wronged. Accountability in this full sense cannot be served by compensation alone.[40]
Contention #4: The Martens Clause would not restrict the use of fully autonomous weapons.
Rebuttal: Because existing law does not specifically address the unique issues raised by fully autonomous weapons, the Martens Clause mandates that the “principles of humanity” and “dictates of public conscience” be factored into an analysis of their legality. Concerns under both of these standards weigh in favor of a ban on this kind of technology.
Analysis: Some critics dismiss the value of the Martens Clause in determining the legality of fully autonomous weapons. As it appears in Additional Protocol I to the Geneva Conventions, the Martens Clause mandates that:
In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.[41]
Critics argue that the Martens Clause “does not act as an overarching principle that must be considered in every case,” but is, rather, merely “a failsafe mechanism meant to address lacunae in the law.”[42] They contend that because gaps in the law are rare, the probability that fully autonomous weapons would violate the Martens Clause but not applicable treaty and customary law is “exceptionally low.”[43] The lack of specific law on fully autonomous weapons, however, means that the Martens Clause would apply, and the weapons would raise serious concerns under the provision.
The key question in determining the relevance of the Martens Clause to fully autonomous weapons is the extent to which such weapons would be “covered” by existing treaty law. As the US Military Tribunal at Nuremberg explained, the Martens Clause makes “the usages established among civilized nations, the laws of humanity and the dictates of public conscience into the legal yardstick to be applied if and when the specific provisions of [existing law] do not cover specific cases occurring in warfare.”[44] The International Court of Justice asserted that the clause’s “continuing existence and applicability is not to be doubted” and that it has “proved to be an effective means of addressing the rapid evolution of military technology.”[45] Fully autonomous weapons are rapidly evolving forms of technology, at best only generally covered by existing law.[46]
The plain language of the Martens Clause elevates the “principles of humanity” and the “dictates of public conscience” to independent legal standards against which new forms of military technology should be evaluated.[47] On this basis, any weapon conflicting with either of these standards is therefore arguably unlawful. At a minimum, however, the dictates of public conscience and principles of humanity can “serve as fundamental guidance in the interpretation of international customary or treaty rules.”[48] According to this view of the Martens Clause, “[i]n case of doubt, international rules, in particular rules belonging to humanitarian law, must be construed so as to be consonant with general standards of humanity and the demands of public conscience.”[49] Given the significant doubts about the ability of fully autonomous weapons to conform to the requirements of the law (see Contention #1), the standards of the Martens Clause should at the very least be taken into account when evaluating the weapons’ legality.
Fully autonomous weapons raise serious concerns under the principles of humanity and dictates of public conscience. The ICRC has described the principles of humanity as requiring compassion and the ability to protect.[50] As discussed below under Contention #7, fully autonomous weapons would lack human emotions, including compassion. The challenges the weapons would face in meeting international humanitarian law suggest they could not adequately protect civilians. Public opinion can play a role in revealing and shaping public conscience, and many people find the prospect of delegating life-and-death decisions to machines shocking and unacceptable. For example, a 2015 international survey of 1,002 individuals from 54 different countries found that 56 percent of respondents opposed the development and use of these weapons.[51] The first reason given for rejecting their development and use, cited by 34 percent of all respondents, was that “humans should always be the one to make life/death decisions.”[52] A 2013 national survey of Americans found that 68 percent of respondents with a view on the topic opposed the move toward these weapons (48 percent strongly).[53] Interestingly, active duty military personnel were among the strongest objectors—73 percent expressed opposition to fully autonomous weapons. These kinds of reactions suggest that fully autonomous weapons would contravene the Martens Clause.
Concerns about weapons’ compliance with the principles in the Martens Clause have justified new weapons treaties in the past. For example, the Martens Clause heavily influenced the discussions and debates preceding the development of CCW Protocol IV on Blinding Lasers, which preemptively banned the transfer and use of laser weapons whose sole or partial purpose is to cause permanent blindness.[54] The Martens Clause was invoked not only by civil society in its reports on the matter, but also by experts participating in a series of ICRC meetings on the subject.[55] They largely agreed that “[blinding lasers] would run counter to the requirements of established custom, humanity, and public conscience.”[56] A shared horror at the prospect of blinding weapons ultimately helped tip the scales toward a prohibition, even without consensus that such weapons were unlawful under the core principles of international humanitarian law.[57] The Blinding Lasers Protocol set an international precedent for preemptively banning weapons based, at least in part, on the Martens Clause.[58] Invoking the clause in the context of fully autonomous weapons would be equally appropriate.
Contention #5: International humanitarian law is the only relevant body of law under which to assess fully autonomous weapons because they would be tools of armed conflict.
Rebuttal: An assessment of fully autonomous weapons must consider their ability to comply with all bodies of international law, including international human rights law, because the weapons could be used outside of armed conflict situations. Fully autonomous weapons could violate the right to life, the right to a remedy, and the principle of dignity, each of which is guaranteed by international human rights law.
Analysis: Discussions about fully autonomous weapons have largely focused on their use in armed conflict and their legality under international humanitarian law (see Contention #1). Most of the diplomatic debate about the weapons has taken place in the international humanitarian law forum of the CCW. While states have touched on the human rights implications of fully autonomous weapons in CCW meetings and in the Human Rights Council, the weapons’ likely use beyond the battlefield has often been ignored.[59] Human rights law, which applies during peace and war, would be relevant to all circumstances in which fully autonomous weapons might be used, and thus should receive greater attention.[60]
Once developed, fully autonomous weapons could be adapted to a range of non-conflict contexts that can be grouped under the heading of law enforcement. Local police officers could potentially use such weapons in crime fighting, the management of public protests, riot control, and other efforts to maintain law and order. States could also utilize the weapons in counter-terrorism efforts falling short of an armed conflict as defined by international humanitarian law. The use of fully autonomous weapons in a law enforcement context would trigger the application of international human rights law.
Fully autonomous weapons would have the potential to contravene the right to life, which is codified in Article 6 of the International Covenant on Civil and Political Rights (ICCPR): “Every human being has the inherent right to life. This right shall be protected by law.”[61] The Human Rights Committee, the ICCPR’s treaty body, describes it as “the supreme right” because it is a prerequisite for all other rights.[62] It is non-derogable even in public emergencies that threaten the existence of a nation. The right to life prohibits arbitrary killing. The ICCPR states, “No one shall be arbitrarily deprived of his life.”[63]
The right to life constrains the application of force in law enforcement situations, including those in which fully autonomous weapons could be deployed.[64] In its General Comment No. 6, the Human Rights Committee highlights the duty of states to prevent arbitrary killings by their security forces.[65] Killing is only lawful if it meets three cumulative requirements for when and how much force may be used: it must be necessary to protect human life, constitute a last resort, and be applied in a manner proportionate to the threat. Fully autonomous weapons would face significant challenges in meeting the criteria circumscribing lawful force because the criteria require qualitative assessments of specific situations. These robots could not be programmed in advance to assess every situation because there are infinite possible scenarios, a large number of which could not be anticipated. According to many roboticists, it is also highly unlikely in the foreseeable future that robots could be developed to have certain human qualities, such as judgment and the ability to identify with humans, that facilitate compliance with the three criteria.[66] A fully autonomous weapon’s misinterpretation of the appropriateness of using force could trigger an arbitrary killing in violation of the right to life.
As a non-derogable right, the right to life continues to apply during armed conflict.[67] In wartime, arbitrary killing refers to unlawful killing under international humanitarian law. In his authoritative commentary on the ICCPR, Manfred Nowak, former UN special rapporteur on torture, defines arbitrary killings in armed conflict as “those that contradict the humanitarian laws of war.”[68] As has been shown under Contention #1, there are serious doubts as to whether fully autonomous weapons could ever comply with rules of distinction and proportionality. Fully autonomous weapons would have the potential to kill arbitrarily and thus violate the right that underlies all others, the right to life.
The use of fully autonomous weapons also threatens to contravene the right to a remedy. The Universal Declaration of Human Rights (UDHR) lays out the right, and Article 2(3) of the ICCPR requires states parties to “ensure that any person whose rights or freedoms … are violated shall have an effective remedy.”[69] The right to a remedy requires states to ensure individual accountability. It includes the duty to prosecute individuals for serious violations of human rights law and punish individuals who are found guilty.[70] International law mandates accountability in order to deter future unlawful acts and punish past ones, which in turn recognizes victims’ suffering. It is unlikely, however, that meaningful accountability for the actions of a fully autonomous weapon would be possible (see Contention #3).
Fully autonomous weapons could also violate the principle of dignity, which is recognized in the opening words of the UDHR.[71] As inanimate machines, fully autonomous weapons could truly comprehend neither the value of individual life nor the significance of its loss, and thus should not be allowed to make life-and-death decisions (see Contention #6).
Non-Legal Dangers
Contention #6: Moral concerns about fully autonomous weapons either are irrelevant or could be overcome.
Rebuttal: A variety of actors have raised strong and persuasive moral objections to fully autonomous weapons, most notably related to the weapons’ lack of judgment and empathy, threat to dignity, and absence of moral agency.
Analysis: Some critics dismiss questions about the morality of fully autonomous weapons as irrelevant. They say the appropriateness of fully autonomous weapons is a legal and technical matter as opposed to a moral one. One critic writes that the “key issue remains whether or not a particular weapon system can be operated in compliance with IHL rules and obligations, not the presence or absence of a human moral agent.”[72] At least one other critic argues that morality would not be an issue because robots could be programmed to act ethically and could thus constitute moral agents.[73] Concerns about the morality of fully autonomous weapons, however, are foundational and far reaching.
A variety of actors have raised strong moral and ethical concerns about the use of fully autonomous weapons. The moral indignation expressed by states, UN special rapporteurs, Nobel peace laureates, religious leaders, and the public shows that the question of whether fully autonomous weapons should ever be used goes beyond the law. Several states have argued that there is a moral duty to maintain human control.[74] A 2015 paper from the Holy See, which has presented the most in-depth discussion of the ethical objections to fully autonomous weapons, explained, “It is fundamentally immoral to utilize a weapon the behavior of which we cannot completely control.”[75] The previous year, Chile stated that significant human control over weapons is an “ethical imperative” rather than a technological problem.[76] According to then UN Special Rapporteur on Extrajudicial Killing Christof Heyns, whether fully autonomous weapons are morally unacceptable “is an overriding consideration” and “no other consideration can justify the deployment of [fully autonomous weapons], no matter the level of technical competence at which they operate.”[77] Heyns and Maina Kiai, special rapporteur on the rights to freedom of peaceful assembly and of association, have both called for a ban on these weapons.[78] Nobel Peace Prize laureates have stressed the need to outline “the moral and legal perils of creating killer robots and call[ed] for public discourse before it is too late.”[79] According to Nobel Laureate Jody Williams, who is a member of the Campaign to Stop Killer Robots, “Where is humanity going if some people think it’s OK to cede the power of life and death of humans over to a machine?”[80] A religious leaders’ interfaith declaration calling for a ban highlighted moral and ethical concerns, stating that “[r]obotic warfare is an affront to human dignity and to the sacredness of life.”[81] Research surveys conducted in the United States and internationally have shown that these moral concerns are shared among populations around the world.[82]
For those concerned with the moral issues raised by fully autonomous weapons, no technological improvements can solve the fundamental problem of delegating a life-and-death decision to a machine. Morality-based arguments have focused on three core issues: the lack of human qualities necessary to make a moral decision, the threat to human dignity, and the absence of moral agency.
Any killing orchestrated by a machine is arguably inherently wrong since machines are unable to exercise human judgment and compassion. Because of the high value of human life, a decision to take a life deliberately is extremely grave. As humans are endowed with reason and intellect, they are uniquely qualified to make the moral decision to apply force in any particular situation. Humans possess “prudential judgment,” the ability to apply broad principles to particular situations, interpreting and giving a “spirit” to laws rather than blindly applying an algorithm.[83] No robot, however much information it can process, possesses prudential judgment in the same way that humans do. In addition, while humans in some way internalize the cost of any life that they choose to take, machines do not.[84] “Decisions over life and death in armed conflict may require compassion and intuition,” which humans, not robots, possess.[85] This allows for human empathy to act as a check on killing, but only when humans are making the relevant decisions.
Fully autonomous weapons are also morally problematic because they threaten the principle of human dignity. The opening words of the Universal Declaration of Human Rights assert that “recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world.”[86] (For other human rights arguments, see Contention #5.) In ascribing inherent dignity to all human beings, the UDHR implies that everyone has worth that deserves respect.[87] Fully autonomous weapons, as inanimate machines, could comprehend neither the value of individual life nor the significance of its loss. Allowing them to make determinations to take life away would thus conflict with the principle of dignity. Indeed, as one author notes, the “value of human life may be diminished if machines are in a position to make essentially independent decisions about who should be killed in armed conflict.”[88] Then Special Rapporteur on Extrajudicial Killing Christof Heyns, in his 2013 report to the Human Rights Council, stated: “[D]elegating this process dehumanizes armed conflict even further and precludes a moment of deliberation in those cases where it may be feasible. Machines lack morality and mortality, and should as a result not have life and death powers over humans.”[89]
Fully autonomous weapons raise further concerns under the umbrella of moral agency. According to one roboticist, agency is not an issue: such machines could be programmed to operate on the basis of “ethical” algorithms that would transform an autonomous robot into a “moral machine” and in this way into an “autonomous moral agent.”[90] An “ethical governor” would automate moral decision making at the targeting and firing stages.[91] This argument is unpersuasive for two reasons, however. First, it is extremely unlikely that such a mechanism will ever be designed.[92] Second, and more fundamentally, “the problem of moral agency is not solved by giving autonomous weapon systems artificial moral judgment, even if such a capacity were technologically possible.”[93] “Fully ethical agents” are endowed with “consciousness, intentionality and free will.”[94] Fully autonomous weapons, by contrast, would act according to algorithms and thus would not be moral agents. Fully ethical agents “can be held accountable for their actions—in the moral sense, they can be at fault—precisely because their decisions are in some rich sense up to them.”[95] Fully autonomous weapons, on the other hand, would be incapable of assuming moral responsibility for their actions and thus could not meet the threshold of moral agency that is required for the taking of human life.[96]
Technological improvements could not overcome such moral objections to fully autonomous weapons. As one expert wrote, "The authority to decide to initiate the use of lethal force … must remain the responsibility of a human with the duty to make a considered and informed decision before taking human lives."[97]
Contention #7: Fully autonomous weapons would not be negatively influenced by human emotions.
Rebuttal: Fully autonomous weapons would lack emotions, including compassion and a resistance to killing, that can protect civilians and soldiers.
Analysis: Critics argue that fully autonomous weapons’ lack of human emotions could have military and humanitarian benefits. The weapons would be immune from factors, such as fear, anger, pain, and hunger, that can cloud judgment, distract humans from their military missions, or lead to attacks on civilians.[98] While such observations have some merit, other human emotions can in fact play an important role in increasing humanitarian protection in armed conflict.
Humans possess empathy and compassion and are generally reluctant to take the life of another human. A retired US Army Ranger who has done extensive research on killing during war has found that “there is within man an intense resistance to killing their fellow man. A resistance so strong that, in many circumstances, soldiers on the battlefield will die before they can overcome it.”[99] Another author writes,
One of the greatest restraints for the cruelty in war has always been the natural inhibition of humans not to kill or hurt fellow human beings. The natural inhibition is, in fact, so strong that most people would rather die than kill somebody.[100]
Studies of soldiers’ conduct in past conflicts provide evidence to support these conclusions.[101] Human emotions are thus an important inhibitor to killing people unlawfully or needlessly.
Studies have focused largely on troops’ reluctance to kill enemy combatants, but it is reasonable to assume that soldiers feel even greater reluctance to kill the bystanders of armed conflict, including civilians or those hors de combat, such as surrendering or wounded soldiers. Fully autonomous weapons, unlike humans, would lack such emotional and moral inhibitions, which help protect individuals who are not lawful targets in an armed conflict. One expert writes, “Taking away the inhibition to kill by using robots for the job could weaken the most powerful psychological and ethical restraint in war. War would be inhumanely efficient and would no longer be constrained by the natural urge of soldiers not to kill.”[102]
Due to their lack of emotions or a conscience, fully autonomous weapons could be the perfect tools for leaders who seek to oppress their own people or to attack civilians in enemy countries. Even the most hardened troops can eventually turn on their leader if ordered to fire on their own people or to commit war crimes. An abusive leader who could resort to fully autonomous weapons would be free of the fear that armed forces would refuse to carry out orders to attack certain targets.
For all the reasons outlined above, emotions should instead be viewed as central to restraint in armed conflict rather than as irrational influences and obstacles to reason.
Contention #8: Military advantages would be lost with a preemptive ban on fully autonomous weapons.
Rebuttal: Many potential benefits of fully autonomous weapons either could be achieved by using alternative systems or would create unjustifiable risks.
Analysis: Critics argue that a preemptive ban on fully autonomous weapons would mean forgoing the technology’s touted military advantages. According to these critics, fully autonomous weapons could have many benefits. Fully autonomous weapons could operate with greater precision than other systems.[103] The weapons could replace soldiers in the field and thus protect their lives.[104] Fully autonomous weapons could process data and operate at greater speed than those controlled by humans at the targeting and/or engagement stages.[105] They could also operate without a line of communication after deployment.[106] Finally, fully autonomous weapons could be deployed on a greater scale and at a lower cost than weapons systems requiring human control.[107] These characteristics, however, are not unique to fully autonomous weapons and present their own risks.
Other weapons provide some of the same benefits as fully autonomous weapons. For example, semi-autonomous weapons, too, have the potential for precision. They can track targets with technology comparable to that of future fully autonomous weapons. Indeed, existing semi-autonomous weapon systems have already incorporated autonomous features designed to increase the precision of attacks.[108] Unlike their fully autonomous counterparts, however, these systems keep a human in the loop on decisions to fire.
In addition, although fully autonomous weapons could reduce military casualties by replacing human troops on the battlefield, semi-autonomous weapons already do that. The use of semi-autonomous weapons involves human control over the use of force, but it does not require a human presence on the ground, so operators can stay safe at a remote location. Semi-autonomous weapons, notably armed drones, have raised many concerns that should be addressed, but their problems relate more to how they are used than to the nature of their technology. Fully autonomous weapons, by contrast, present dangers no matter how they are used because humans are no longer making firing decisions.
In many situations that require speed, such as missile defense, automatic systems could eliminate threats as effectively as and more predictably than fully autonomous systems. While automation and autonomy lie at different ends of the same spectrum, automatic weapons operate in a more structured environment and “carr[y] out a pre-programmed sequence of operations.”[109]
Because fully autonomous weapons would have the power to make complex determinations in less structured environments, their speed could lead armed conflicts to spiral rapidly out of control. In arguing that fully autonomous weapons could become a necessity for states seeking to keep up with their adversaries, two critics of a ban on fully autonomous weapons write that “[f]uture combat may … occur at such a high tempo that human operators will simply be unable to keep up. Indeed, advanced weapon systems may well create an environment too complex for humans to direct.”[110] Regardless of the speed of fully autonomous weapons, their ability to operate without a line of communication after deployment is problematic because the weapons could make poor, independent choices about the use of force absent the potential of a human override.
Since fully autonomous weapons could operate at high speeds and without human control, their actions would also not be tempered by human understanding of political, socioeconomic, environmental, and humanitarian risks at the moment they engage. They would thus have the potential to trigger a range of unintended consequences, many of which could fundamentally alter relations between states or the nature of ongoing conflicts.
Given that countries would not want to fall behind in potentially advantageous military technology, the development of these revolutionary weapons would likely lead to an arms race. Indeed, some senior military officials have already expressed concerns about advancements in autonomous weapons technology in other states, emphasizing the need to maintain dominance in artificial intelligence capabilities.[111] High-tech militaries might have an edge in the early stages of these weapons’ development, but experts predict that as costs go down and the technology proliferates, the weapons will become mass produced. An open letter signed by more than 3,000 artificial intelligence and robotics experts states:
If any major military power pushes ahead with AI [artificial intelligence] weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.[112]
An arms race in fully autonomous weapons technology would carry significant risks. The rapidly growing number of fully autonomous weapons could heighten the possibility of major conflict. If fully autonomous weapons operated collectively, such as in swarms, one weapon’s malfunction could trigger a massive military action followed by a response in kind.[113] Moreover, in order to keep up with their enemies, states would have an incentive to use substandard fully autonomous weapons with untested or outdated features, increasing the risk of potentially catastrophic errors. While fully autonomous weapons might create an immediate military advantage for some states, those states should recognize that any such advantage would be short lived once the technology began to proliferate. Ultimately, the financial and human costs of developing such technology would leave each state worse off and would thus weigh in favor of a preemptive ban.
II. Arguments for a Preemptive Prohibition on Fully Autonomous Weapons
The dangers of fully autonomous weapons demand that states take action to preemptively ban their development, production, and use. Critics propose relying on existing law, weapons reviews, regulation, or requirements of human control, but a ban is the only option that would address all of the weapons’ problems. The international community should not wait to take action because the genie will soon be out of the bottle. Precedent shows that a ban would be achievable and effective.
Advantages of a Ban
Contention #9: A new international instrument is unnecessary because existing international humanitarian law will suffice.
Rebuttal: A new treaty would help clarify existing international humanitarian law and would address the development and production of fully autonomous weapons in addition to their use.
Analysis: Critics of a new treaty on fully autonomous weapons often assert that “existing principles of international law are sufficient to circumscribe the use of these weapons.”[114] They argue that any problematic use of fully autonomous weapons would already be unlawful because it would violate current international humanitarian law. According to two authors, “The question for the legal community [would be] whether autonomous weapon systems comply with the legal norms that States have put in place.”[115] Recognizing that the weapons raise new concerns, another author notes that “as cases and mistakes arise, the lawyers and injured parties will have to creatively navigate the network of legal mechanisms [available in international law],” but he too concludes that a new legal instrument would be unnecessary.[116] Existing international humanitarian law, however, was not intended to and cannot adequately address the issues raised by this revolutionary type of weapon. Therefore, it should be supplemented with a new treaty establishing a ban.
A new international treaty would clarify states’ obligations and make explicit the requirements for compliance. It would minimize questions about legality by standardizing rules across countries and reducing the need for case-by-case determinations. Greater legal clarity would lead to more effective enforcement because countries would better understand the rules. A ban convention would make the illegality of fully autonomous weapons clear even for countries that do not conduct legal reviews of new or modified weapons (see Contention #10). Finally, many states that did not join the new treaty would still be apt to abide by its ban because of the stigma associated with the weapons.
A treaty dedicated to fully autonomous weapons could also address aspects of proliferation not covered under traditional international humanitarian law, which focuses on the use of weapons in war. In particular, such an instrument could prohibit development and production. Eliminating these activities would prevent the spread of fully autonomous weapons, including to states or non-state actors with little regard for international humanitarian law or limited ability to enforce compliance. In addition, it would help avert an arms race by stopping development before it went too far (see Contention #8).
Finally, new law could address concerns about an accountability gap (see Contention #3). A treaty that banned fully autonomous weapons under any circumstances could require that anyone violating that rule be held responsible for the weapon’s actions.
While international humanitarian law already sets limits on problematic weapons and their use, responsible governments have in the past found it necessary to supplement existing legal frameworks for weapons that by their nature pose significant humanitarian threats. Treaties dedicated to specific weapons types exist for cluster munitions, antipersonnel mines, blinding lasers, chemical weapons, and biological weapons. Fully autonomous weapons have the potential to raise a comparable or even higher level of humanitarian concern and thus should be the subject of similar supplementary international law.
Contention #10: Reviews of new weapons systems can address the dangers of fully autonomous weapons.
Rebuttal: Weapons reviews are not universal, consistent, or rigorously conducted, and they fail to address the implications of weapons outside of an armed conflict context. A ban would resolve these shortcomings in the case of fully autonomous weapons.
Analysis: Some critics argue that conducting weapons reviews on fully autonomous weapons would sufficiently regulate the weapons. Weapons reviews assess the legality of the future use of a new weapon during its design, development, and acquisition phases. They are sometimes called “Article 36 reviews” because they are required under Article 36 of Additional Protocol I to the Geneva Conventions. The article states:
In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.[117]
Critics have argued, including during CCW debates, that there is no need for a ban because any fully autonomous weapon that would violate international law would fail a weapons review and thus not be developed or used.[118] Not all governments, however, conduct weapons reviews; those that do follow varying standards; and reviews are often too narrow in scope to address every danger posed by fully autonomous weapons. Proposals to address the shortcomings of weapons reviews should be considered in a separate forum to avoid distracting from discussions about fully autonomous weapons.
Currently, fewer than 30 states are known to have national review processes in place.[119] Not all states are party to Additional Protocol I, and it is debated whether weapons reviews are required under customary international law.[120] The lack of universal practice means that it is possible that some states could develop or acquire fully autonomous weapons without first reviewing the legality of the weapons at all.
Even if weapons reviews were conducted by every state, leaving decisions about whether or not to develop weapons to individual states is bound to lead to inconsistent outcomes. The complexity of fully autonomous weapons, which would require review of both hardware and software components, would exacerbate such inconsistencies.[121] In addition, there is no internationally mandated monitoring to ensure that all states conduct reviews and adhere to the results.[122] There is also limited capability for outside monitoring, including by civil society, because of the general lack of transparency in weapons reviews processes.[123] States are not obliged to release their reviews, and none are known to have disclosed information about a review that rejected a proposed weapon.[124]
Without the external pressure generated by monitoring, states have few incentives to conduct rigorous reviews of weapons. Just as there are no publicized cases of the rejection of a weapon, there are also no known examples of states stopping the development or production of a weapon because it failed a legal review.[125] The expense of conducting the kind of complex reviews necessary for fully autonomous weapons would further discourage rigorous testing.
Regardless of the effectiveness of the weapons reviews, the basic goal, as evidenced by Article 36’s reference to “warfare,” is to ensure compliance with international law in the context of armed conflict. The ICRC’s guide to weapons reviews reflects this framework, noting that “[a]ssessing the legality of new weapons contributes to ensuring that a State’s armed forces are capable of conducting hostilities in accordance with its international obligations.”[126]
This framework does not address the human rights and ethical implications of the use of weapons. Fully autonomous weapons could independently contravene human rights law because of their potential use outside of armed conflict in domestic law enforcement situations (see Contention #5).[127] Because they would use force without meaningful human control, such weapons raise serious ethical concerns (see Contention #6). Neither of these risks would be taken into account in a military weapons review.[128]
Acknowledging the problems with existing weapons reviews, some states have called for improvements.[129] For example, at the 2016 CCW Meeting of Experts on Lethal Autonomous Weapons Systems, the United States proposed that CCW states parties produce “a non-legally binding outcome document that describes a comprehensive weapons review process.”[130] Such a set of best practices, however, would operate on a voluntary basis and would have less authority than a legally binding instrument.
While strengthening weapons reviews and setting international standards are worthy goals, the CCW meetings about fully autonomous weapons are an inappropriate forum for such discussions. The need to improve reviews is neither specific nor exclusive to fully autonomous weapons.[131] Rather, discussions about weapons reviews in the context of fully autonomous weapons distract from the substantive issues presented by the development and use of these weapons.
A binding international ban on fully autonomous weapons would resolve the shortcomings of weapons reviews in this context. A ban would also simplify and standardize weapons reviews by removing any doubts that the use of fully autonomous weapons would violate international law.
Contention #11: Regulation would better address fully autonomous weapons concerns than a ban.
Rebuttal: A binding, absolute ban on fully autonomous weapons would reduce the chance of misuse of the weapons, would be easier to enforce, and would enhance the stigma associated with violations.
Analysis: Certain critics object to a categorical ban on fully autonomous weapons because they prefer a regulatory framework that would permit the use of such technology within certain pre-defined parameters.[132] Such a framework might, for example, limit the use of fully autonomous weapons to specific types of locations or purposes. These critics suggest that such an approach would not be over-inclusive because it would more precisely tailor restrictions to the evolving state of fully autonomous weapons technology. Regulations could come in the form of a legally binding instrument or a set of gradually developed, informal standards.[133] Whatever its form, however, regulation would not be as effective as a ban.
An absolute, legally binding ban on fully autonomous weapons would provide several distinct advantages over formal or informal regulatory constraints. It would maximize protection for civilians in conflict because it would be more comprehensive than regulation. A ban would also be more effective as it would prohibit the existence of the weapons and be easier to enforce. Moreover, a ban would maximize the stigmatization of fully autonomous weapons, creating a widely recognized norm and influencing even those that do not join the treaty.
By contrast, once fully autonomous weapons came into being under a regulatory regime, they would be vulnerable to misuse. Even if regulations restricted use of fully autonomous weapons to certain locations or specific purposes, after the weapons entered national arsenals, countries might be tempted to use the weapons in inappropriate ways in the heat of battle or in dire circumstances (see Contention #2). Furthermore, the existence of fully autonomous weapons would leave the door open to their acquisition by repressive regimes or non-state armed groups that might disregard the restrictions or alter or override any programming designed to regulate the weapons’ behavior. They could use the weapons against their own people or civilians in other countries with horrific consequences.
Enforcement of regulations on fully autonomous weapons, as on all regulated weapons, could also be challenging and leave room for error, increasing the potential for harm to civilians. Instead of knowing that any use of fully autonomous weapons was unlawful, countries, international organizations, and nongovernmental organizations would have to monitor the use of the weapons and determine in every case whether use complied with the regulations. Debates about the scope of the regulations and their enforcement would likely ensue.
The challenges of effectively controlling the use of fully autonomous weapons through binding regulations would be compounded if governments adopted a non-binding option. Those who support best practices advocate “let[ting] other, less formal processes take the lead to allow genuinely widely shared norms to coalesce in a very difficult area.”[134] To the extent that a “less formal” approach is a non-binding one, it is highly unlikely to constrain governments—including those already inclined to violate the law—in any meaningful way, especially under the pressures of armed conflict. It is similarly unrealistic to expect governments, as some critics hope, to resist their “impulses toward secrecy and reticence with respect to military technologies” and contribute to a normative dialogue about the appropriate use of fully autonomous weapons technology.[135] If countries rely on transparency and wait until “norms coalesce” in an admittedly “very difficult area,”[136] such weapons will likely be developed and deployed, at which point it would probably already be too late to control them.
Contention #12: Ensuring human control during the design and deployment of autonomous weapons would be sufficient to address the concerns they raise.
Rebuttal: In order to avoid the dangers of fully autonomous weapons, humans must exercise meaningful control over the selection and engagement of targets in individual attacks. Only a ban on fully autonomous weapons can effectively guarantee such meaningful control by humans.
Analysis: While there appears to be widespread agreement that all weapons should operate under at least some level of “human control,”[137] certain critics contend that it need not be directly over individual attacks. These critics argue that human control at the design and deployment stages would be sufficient to preempt the concerns associated with fully autonomous weapons because the weapons would operate predictably.[138] Weapons subject to such limited control, however, would be unlikely always to operate as expected, and human control is not meaningful when a weapon’s behavior is unpredictable.[139] Meaningful human control is essential to averting the dangers associated with fully autonomous weapons.
If human control over weapons were confined to the design and deployment stages, unpredictability in weapons would be almost impossible to avoid. Programmers could not always be sure how advanced weapons with complex codes would act in practice. As some scholars note, “[N]o individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways.”[140] In addition, the actions of these weapons could be influenced by factors beyond the programmer. The weapons might rely on dynamic learning processes or processes to adapt existing information for use in new environments.[141] The unpredictability of weapons controlled by humans only at the pre-attack stages would indicate that such control was not meaningful.
The absence of meaningful human control would lead to at least three of the fundamental dangers of fully autonomous weapons already outlined in this report. First, because humans could not preprogram fully autonomous weapons to respond predictably to unforeseeable situations, the weapons would face significant obstacles to complying with international humanitarian or human rights law, which requires the application of human judgment (see Contentions #1 and #5). Second, limiting human control to the design and deployment stages would lead to an accountability gap since programmers and commanders could not predict at those stages how the weapons would act in the field and thus would escape liability in most cases (see Contention #3). Third, fully autonomous weapons would be unable to adhere to preprogrammed ethical frameworks, given their inherent unpredictability,[142] and ceding human control over determinations to use force in specific situations would cross a moral threshold (see Contention #6).[143]
Human control must be exercised over individual attacks in order to be meaningful and address many of the concerns regarding technological advances in weapons systems. Such control would promote legal compliance by facilitating the application of human judgment in specific, unforeseeable situations. It would allow for the imposition of legal liability by creating a link between a human actor and the harm caused by a weapon. Finally, meaningful human control over individual attacks would also ensure that morality could play a role in decisions about the life and death of human beings.
Timeliness and Feasibility of a Ban
Contention #13: It is premature to ban fully autonomous weapons given the possibility of technological advances.
Rebuttal: These highly problematic weapons should be preemptively banned to prevent serious humanitarian harm before it is too late and to accord with the precautionary principle.
Analysis: Critics contend that a preemptive ban on the development, production, and use of fully autonomous weapons is premature. They argue that:
Research into the possibilities of autonomous machine decision-making, not just in weapons but across many human activities, is only a couple of decades old.… We should not rule out in advance possibilities of positive technological outcomes—including the development of technologies of war that might reduce risks to civilians by making targeting more precise and firing decisions more controlled.[144]
This position depends in part on faith that technology could address the legal challenges raised by fully autonomous weapons, which, as explained under Contention #1, seems unlikely and uncertain at best. At the same time, it ignores other dangers associated with these weapons that are not related to technological development, notably the accountability gap, moral objections, and the potential for an arms race (see Contentions #3, 6, and 8).
Given the host of concerns about fully autonomous weapons, they should be preemptively banned before it becomes too late to change course. It is difficult to stop technology once large-scale investments have been made. The temptation to use technology already developed and incorporated into military arsenals would be great, and many countries would be reluctant to give it up, especially if their competitors possessed it.
In addition, if ongoing development were permitted, militaries might deploy fully autonomous weapons in complex circumstances with which artificial intelligence could not yet cope. Only after the weapons faced unanticipated situations that they were not programmed to address could the technology be modified to resolve those issues. During that period, the weapons would be likely to mishandle such situations, potentially causing great harm to civilians and even friendly forces.
The prevalence of humanitarian concerns and the uncertainty regarding technology make it appropriate to invoke the precautionary principle, a principle of international law. The 1992 Rio Declaration states, “Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”[145] While the Rio Declaration applies the precautionary principle to environmental protection, the principle can be adapted to other situations.
Fully autonomous weapons implicate the three essential elements of the precautionary principle—threat of serious or irreversible damage, scientific uncertainty, and the availability of cost-effective measures to prevent harm. The development, production, and use of fully autonomous weapons present a threat to civilians that would be both serious and irreversible, as the technology would revolutionize armed conflict and would be difficult to eliminate once developed and employed. Scientific uncertainty characterizes the debate over these weapons. Defenders argue there is no proof that a technological fix could not solve the problem, but there is an equal lack of proof that a technological fix would work. Finally, while treaty negotiations and implementation would carry costs, these expenses are small compared to the significant harm they might prevent.
There is precedent for a preemptive prohibition on a class of weapons. As discussed in Contention #4, in 1995 states parties to the CCW adopted a ban on blinding lasers before the weapons had started to be deployed.[146] During the negotiations, countries expressed many of the same concerns about blinding lasers as they have about fully autonomous weapons, and those negotiations led to a successful new instrument—CCW Protocol IV. States should build on that model and agree to a similar ban on fully autonomous weapons. Although there are differences between the two types of weapons, the revolutionary nature of fully autonomous weapons strengthens, rather than undermines, the case for a preemptive prohibition.[147]
Contention #14: A definition of fully autonomous weapons is needed before the concerns they raise can be addressed.
Rebuttal: A common understanding of fully autonomous weapons (also known as lethal autonomous weapons systems) has largely already been reached, and disarmament negotiations have historically agreed on a treaty’s detailed, legal definition after resolving other substantive issues.
Analysis: Some critics argue that discussions cannot move toward treaty negotiations without a detailed definition of fully autonomous weapons, also known as lethal autonomous weapons systems (LAWS) by CCW states.[148] For example, one state has noted that “there seemed to be no agreement as to the exact definition of LAWS.... In this regard, many states … were not supportive of the call made by some states for a preemptive ban on LAWS.”[149] Another has argued that “prohibiting such systems before a broad agreement on a definition would not be pragmatic.”[150] A common understanding, however, should be sufficient to advance deliberations.
Most countries whose statements on the issue are publicly available appear to agree upon the basic elements of what constitutes a fully autonomous weapon. First, they say that fully autonomous weapons, although rapidly developing, remain an emerging technology that does not yet exist.[151] Second, they concur that fully autonomous weapons would be, as the name suggests, weaponized or lethal technology.[152]
Third, most of the states that have addressed the topic describe fully autonomous weapons as operating without human control. The terminology employed has varied, from “meaningful human control,”[153] to “appropriate levels of human judgment,”[154] to “human involvement,”[155] but there seems to be almost universal agreement that fully autonomous weapons lack human control. Finally, while some debate lingers about precisely where human control is absent, agreement is coalescing around the notion that fully autonomous weapons lack human control over the critical combat functions, in particular, over the selection and engagement of targets.[156]
Historically in disarmament treaty negotiations, common understandings become detailed legal definitions only at the end of the process. For the Mine Ban Treaty,[157] the Convention on Cluster Munitions,[158] and CCW Protocol IV on Blinding Laser Weapons, the goals, scope, and obligations of the treaty being negotiated were determined before the final definitions. The initial draft text of the Mine Ban Treaty was circulated with the definition of antipersonnel landmines from CCW Amended Protocol II.[159] That definition was only a starting point that was revised in later drafts of the text and was still being debated at the final treaty negotiation conference.[160] Similarly, the negotiating history of the Convention on Cluster Munitions began with a declaration in which states at an international conference committed to adopting a prohibition on “cluster munitions that cause unacceptable harm to civilians.”[161] While states discussed the definition of cluster munitions at the diplomatic meetings that followed, they did not settle on the definition of cluster munitions to be adopted until the final negotiations.[162] Working papers and draft protocols from the CCW Group of Governmental Experts meetings about blinding lasers reveal the same pattern: the draft definition contained only the basic elements of the final definition, which would be crafted later in the course of negotiations.[163]
There is already enough international agreement on the core elements of fully autonomous weapons to proceed with negotiations. Getting lost in the details of a definition without first determining the aims of negotiations would be unproductive. It would be more efficient to decide on the prohibitions or restrictions to be imposed on the general category of weapons and then specify exactly which weapons those prohibitions or restrictions should cover. The international community should, therefore, focus on articulating the goals, scope, and obligations of a future instrument. The final legal definition of fully autonomous weapons can be negotiated at a later stage.
Contention #15: Valuable advances in autonomous technology would be impeded by a ban on the development of fully autonomous weapons.
Rebuttal: A prohibition would not stifle valuable advances in autonomous technology because it would not cover non-weaponized fully autonomous technology or semi-autonomous weapon systems.
Analysis: Some critics worry about the breadth of a ban on development. They express concern that it would represent a prohibition “even on the development of technologies or components of automation that could lead to fully autonomous lethal weapon systems.”[164] These critics fear that the ban would therefore impede the exploration of beneficial autonomous technology, such as self-driving cars.
In fact, the ban would apply to development only of fully autonomous weapons, that is, machines that could select and fire on targets without meaningful human control. Research and development activities would be banned if they were directed at technology that could be used exclusively for fully autonomous weapons or that was explicitly intended for use in such weapons. A prohibition on the development of fully autonomous weapons would in no way impede development of non-weaponized fully autonomous robotics technology, which can have many positive, non-military applications.
The prohibition would also not encompass development of semi-autonomous weapons such as existing remote-controlled armed drones.
Given the importance of keeping fully autonomous weapons out of national arsenals (see Contention #13), a prohibition on development should be adopted, even if it is a narrow one. Including such a prohibition in a ban treaty would legally bind states parties not to contract specifically for the development of fully autonomous weapons or to take steps to convert other autonomous technology into such weapons. It would also create a stronger norm against fully autonomous weapons by stigmatizing development as well as use and could thus influence even states and non-state armed groups that have not joined the treaty.
Contention #16: An international ban on fully autonomous weapons is unrealistic and would be ineffective.
Rebuttal: Past disarmament successes, growing support for a ban, and increasing international discussion of the issue suggest that a ban is both realistic and the only effective option for addressing fully autonomous weapons.
Analysis: Some critics argue that an absolute ban on the development, production, and use of fully autonomous weapons is “unrealistic.”[165] They have written that “part of our disagreements are about the practical difficulties that face international legal prohibitions of military technologies (we think such efforts are likely to fail).”[166] Other critics believe that even if such a ban could be adopted, it would not be implemented as states would either not join the prohibition or not comply with it.[167] These critics fail to acknowledge the parallels with past successful disarmament efforts that had humanitarian benefits and the growing support for preserving meaningful human control over decisions to use lethal force.
Strong precedent exists for banning weapons that raise serious humanitarian concerns. The international community has previously adopted legally binding prohibitions on poison gas, biological weapons, chemical weapons, antipersonnel landmines, and cluster munitions, as well as a preemptive ban on blinding lasers, which were still under development. Opponents of the landmine and cluster munitions instruments had frequently said that a ban treaty would never be possible, but the success of these bans has proved their skepticism was misplaced. The number of states joining these treaties and general compliance illustrates the treaties’ effectiveness and the ability of humanitarian disarmament to protect civilians from suffering.
Efforts to address the dangers of fully autonomous weapons are following a path similar to that of previous humanitarian disarmament instruments. April 2013 marked the launch of the Campaign to Stop Killer Robots, which calls for an absolute ban on the development, production, and use of fully autonomous weapons. The campaign resembles earlier civil society coalitions, including the International Campaign to Ban Landmines and the Cluster Munition Coalition.
Public support for a ban has bolstered the position of the campaign. As of November 2016, more than 3,000 roboticists and artificial intelligence researchers had signed a 2015 public letter calling for a ban on fully autonomous weapons. According to them, “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.”[168] Surveys have also revealed support for a ban. For example, a 2015 international survey found that 67 percent of respondents believe that fully autonomous weapons should be internationally banned (see Contention #4).[169]
Finally, governments have taken up the debate about fully autonomous weapons. Shortly after civil society pressure began, they added the topic to the CCW agenda, which was significant because the CCW process has previously produced a preemptive ban on blinding lasers and served as an incubator for bans on landmines and cluster munitions. Since 2014, CCW states parties have held three informal experts meetings that have examined the issues surrounding lethal autonomous weapons systems in depth. In the course of these meetings, many states have recognized the need to address these problematic weapons in some way. Fourteen states have expressed explicit support for a ban.[170] States parties that attended the 2016 experts meeting recommended that CCW’s Fifth Review Conference, to be held in December 2016, consider establishing a more formal Group of Governmental Experts to advance discussions.[171] Now it is up to the Review Conference to ensure that states pick up the pace and take the next step toward an instrument that bans the development, production, and use of fully autonomous weapons.
Achieving a ban will certainly require significant work and political will. Past precedents and recent developments suggest, however, that a legally binding prohibition on fully autonomous weapons would be the most realistic and effective way to address the dangers these weapons pose.
Acknowledgments
Bonnie Docherty, senior researcher in the Arms Division of Human Rights Watch and senior clinical instructor at the Harvard Law School International Human Rights Clinic (IHRC), was the lead writer and editor of this report. Joseph Crupi, Anna Khalfaoui, and Lan Mei, students in IHRC, made major contributions to the research, analysis, and writing of the report. Steve Goose, director of the Arms Division, and Mary Wareham, advocacy director of the Arms Division, edited the report. Dinah PoKempner, general counsel, and Tom Porteous, deputy program director, also reviewed the report.
This report was prepared for publication by Marta Kosmyna, associate in the Arms Division, Fitzroy Hepkins, administrative manager, and Jose Martinez, senior coordinator. Russell Christian produced the cartoon for the report cover.