We often hear about “the rise of the machines,” but what should really concern us is the flip side: “the fall of human control.”
Those of us focused on human rights are worried in particular about what it means for the use of force. Who is making the most important decisions – decisions that could harm people, even decisions of life and death?
For years, we have been warning about fully autonomous weapons, also known as “killer robots,” which would be able to select and engage targets in war zones (or even in policing) without any real human control.
Yes, it’s the plot of a hundred science fiction movies, but the threat is very real.
Many countries are already using precursors to these weapons, like armed drones. Without a ban on “killer robots,” governments will take the next step and start delegating life-and-death decisions to machines.
Instead of being in charge of the machines, human beings would become their subjects. We would be little more than data points that the machines would use to decide who lives and who dies.
Experts have given this process a name: digital dehumanization. The idea can apply beyond weapons systems, too; think of self-driving cars or medical diagnoses.
But when humans are reduced to data, and that data becomes the basis for decisions that can negatively affect their lives, we replace the concept of human responsibility for errors with a kind of “automated harm.”
In a conflict zone, the consequences would be catastrophic.
Without someone – some human being – to hold accountable for an atrocity, it is nearly impossible to achieve justice. A massacre of civilians would be presented as a design problem rather than a war crime.
And our digital dehumanization would be complete.