Killer robots are a mainstay of science fiction. But unlike teleportation and flying cars, they are something that we are likely to see within our lifetime. The only thing stopping countries like the USA, South Korea, the UK, or France from deploying autonomous killing machines in the very near term is that they’re likely to be illegal under current international humanitarian law (IHL) — the rules of war.
But if you just sighed in relief that the fate of humanity is safe, think again. The reason that autonomous killing machines are illegal is essentially a technicality, and worse, it’s a technicality that’s based on the current state of technology. The short version of the story, as it stands right now, is that the only thing making autonomous robotic killing weapons illegal is that it’s difficult for a robot to tell a friend from an enemy. When technology catches up with human judgement, all bets are off.
Think I’m insane? The United Nations Office at Geneva (UNOG), the folks who bring you the rules of warfare, started up a working group on killer robots three years ago, and the report from their 2016 meeting just came out. Now’s as good a time as any to start taking killer robots seriously.
The Convention on Certain Conventional Weapons (CCW), also known as the Inhumane Weapons Convention, is an agreement among member states to ban the use of certain weapons that are deemed to be too nasty or indiscriminate to be used on people, even in times of war. We’re talking about landmines, flamethrowers, and (recently) blinding laser weapons.
Beginning in 2013, the UNOG decided to investigate whether killer robots should also be included in this list. You may not be surprised that achieving international consensus on an issue like this, mixing equal parts ethics, military considerations, technology, and even futurism, would take a while. The 2016 Informal Meeting of Experts took place in April, and the reports just came out (PDF). It’s a high-level summary of the meetings, but it’s a good read.
The UNOG refers to killer robots by an acronym, naturally: Lethal Autonomous Weapons Systems (LAWS). The report starts off by noting that there are currently no LAWS deployed, so the question of whether or not to limit them is pre-emptive. Indeed, a LAWS right now would probably be illegal, because international humanitarian law requires any lethal action to be necessary and proportional to the military goals, and requires combatants to distinguish between civilian and military targets.
Right now, getting an autonomous system to distinguish civilians from combatants is a difficult problem, and the moral judgement of what is proportional may never be within a machine’s reach. Considerations like these are what’s keeping LAWS from being deployed right now, but we don’t think that means that the UNOG is wasting its time. Our guess is that solving the AI and autonomy problems is just a matter of time.
While gadflies like Elon Musk and even serious scientists worry about general-purpose AIs taking over the world through rather elaborate strategies, the reality is that the first AIs to kill people are more likely to be very specifically designed to do so. Although it’s not currently technically possible to distinguish friend from foe, that’s a much more tractable task for an AI than spontaneously concluding that humans are weak and must therefore be exterminated. (Or whatever.)
There are already AI systems that do a lot of the pre-selection work. South Korea’s automated sentry bot, the Super aEgis II system, for instance, locates and tracks people using its IR cameras. Granted, the task is made significantly simpler by the fact that nobody is allowed in the de-militarized zone (DMZ) that it patrols, and essentially anyone walking around there is literally asking to get shot. But the system reportedly could have been made fully autonomous a few years ago. The South Korean government chose to limit that feature and keep a human in control.
In a more complicated situation, one could imagine an AI trained to distinguish combatants by their uniforms. Given significantly different styles, or flag patches on shoulders, or similar, this shouldn’t be too hard to do. Such a system may not be 100% foolproof, of course, but as long as it’s very accurate, it might be good enough.
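To make the idea concrete, here’s a toy sketch of the uniform-discrimination problem: a nearest-centroid classifier that separates two hypothetical uniform color schemes by their average patch color. Every color, label, and sample here is made up for illustration — real target discrimination is vastly harder, and that difficulty is exactly the point of the article.

```python
# Toy sketch: telling apart two invented "uniform" color schemes with a
# nearest-centroid classifier. Purely illustrative; not how any real
# weapons system works.
import math

# Hypothetical training data: (R, G, B) average patch colors, 0-255.
TRAINING = {
    "side_a": [(60, 70, 40), (55, 75, 45), (65, 68, 38)],    # drab green
    "side_b": [(90, 90, 110), (85, 95, 120), (95, 88, 105)], # grey-blue
}

def centroid(samples):
    """Mean color of a list of RGB tuples."""
    n = len(samples)
    return tuple(sum(c[i] for c in samples) / n for i in range(3))

CENTROIDS = {label: centroid(s) for label, s in TRAINING.items()}

def classify(color):
    """Return the label whose centroid is nearest in RGB space."""
    return min(CENTROIDS, key=lambda lbl: math.dist(color, CENTROIDS[lbl]))

print(classify((58, 72, 42)))   # near the drab-green centroid
print(classify((92, 91, 112)))  # near the grey-blue centroid
```

The “very accurate but not 100% foolproof” caveat shows up immediately: any observation that falls between the two centroids gets confidently misclassified, which is precisely the kind of error IHL cares about.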
Humans in the Loop, For Now
And the question of human control is a slippery slope. With weapons like the US’s drone aircraft, the question isn’t whether a human is in full control. They already aren’t. The drone’s flight, target acquisition, and tracking are all autonomous. People are involved only at the last minute — verifying the target by steering a crosshair around with a joystick, and pressing the fire button. As with the Korean sentry bots, a human is kept in the loop, even if their participation is very brief.
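The “human in the loop” structure described above can be sketched as a pipeline where the autonomous stages run on their own and only the lethal step is gated on explicit human confirmation. All names and logic here are invented for illustration.

```python
# Toy "human in the loop" sketch: acquisition and tracking are
# autonomous, but the final lethal action requires an explicit human
# confirmation. Everything here is fabricated for illustration.
from dataclasses import dataclass

@dataclass
class Track:
    target_id: str
    confirmed_by_human: bool = False

def acquire_and_track(sensor_blip: str) -> Track:
    # Stand-in for the autonomous flight/acquisition/tracking stages.
    return Track(target_id=sensor_blip)

def human_review(track: Track, operator_says_yes: bool) -> Track:
    # The one point where a person enters the loop.
    track.confirmed_by_human = operator_says_yes
    return track

def fire(track: Track) -> str:
    # The gate: no confirmation, no engagement.
    if not track.confirmed_by_human:
        return "HOLD: no human confirmation"
    return f"ENGAGE: {track.target_id}"

t = human_review(acquire_and_track("blip-42"), operator_says_yes=False)
print(fire(t))  # HOLD: no human confirmation
```

Note how thin the human’s contribution is in this structure: one boolean. Remove that single check and the whole pipeline becomes a LAWS.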
On top of this single human, there’s an entire chain of command, of course. The drone operator is ordered to fire on an individual. This provides accountability for the action, and the threat of a war-crimes trial hopefully prevents the indiscriminate targeting of civilians. It’s this chain of accountability that makes something like the US drone program even purportedly legal, although it’s worth noting in passing that this is not a settled issue.
While the drones are remotely teleoperated and largely automatic systems, it can be argued that they’re under substantial human control. But what happens when the human input is further reduced? Is it still sufficient, for instance, for a human to create the list of IMEI (cell-phone identifiers) that the drones target? What about when neural networks do the network analysis that makes the short list of IMEIs that belong to the targets? Defining autonomy in these cases gets ever trickier.
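The slipperiness of that definition can be seen in a toy sketch: the same downstream targeting check can consume either a human-curated IMEI list or a machine-generated shortlist, and nothing downstream can tell the difference. The IMEIs and the “network analysis” here are fabricated for illustration.

```python
# Toy sketch of shrinking human input: the same targeting routine works
# identically whether a human curated the list or a machine generated
# it. All IMEIs and the "analysis" heuristic are fabricated.

HUMAN_CURATED = {"356938035643809", "490154203237518"}

def machine_shortlist(call_graph: dict) -> set:
    # Stand-in for automated network analysis: flag any IMEI with
    # three or more contacts in the observed call graph.
    return {imei for imei, contacts in call_graph.items()
            if len(contacts) >= 3}

def on_target_list(observed_imei: str, target_list: set) -> bool:
    # The downstream check is indifferent to the list's provenance.
    return observed_imei in target_list

graph = {
    "356938035643809": ["a", "b", "c"],
    "111111111111119": ["a"],
}
print(on_target_list("356938035643809", HUMAN_CURATED))             # True
print(on_target_list("356938035643809", machine_shortlist(graph)))  # True
```

Since the rest of the system behaves identically either way, the location of “meaningful human control” is entirely a question of who (or what) wrote the list — which is why defining autonomy here is so tricky.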
The UNOG report imagines another way that the human might get slowly squeezed out of the loop: if their role becomes small and/or repetitive enough that they slowly come to rely on the machine’s judgement rather than providing a counterbalance to it. Every one of you out there who clicks away a popup menu by force of habit knows that you can be fairly easily reduced to an automaton. The stakes are much higher in military robot oversight, naturally, but the mechanism is similar.
Determining what constitutes “meaningful human control” of automatic weapons systems will become tougher in the future as technological progress in AI and integration of weapons systems combine. US soldiers already routinely use robots for bomb-defusing and reconnaissance tasks, and existing weapons like Korea’s sentry bots and the US drones are already capable of nearly autonomous lethal action, although they’re deliberately fettered.
Killer robots are just around the corner, and the UNOG has taken three years so far and hasn’t even fully hammered out its definition of LAWS, much less taken steps to regulate them. Will we see them made legal for the battlefield before they’re outlawed by convention? The race is on!