Killer robots are a mainstay of science fiction. But unlike teleportation and flying cars, they are something that we are likely to see within our lifetime. The only thing that’s stopping countries like the USA, South Korea, the UK, or France from deploying autonomous killing machines in the very near term is that they’re likely to be illegal under current international humanitarian law (IHL) — the rules of war.
But if you just sighed in relief that the fate of humanity is safe, think again. The reason that autonomous killing machines are illegal is essentially a technicality, and worse, it’s a technicality that’s based on the current state of technology. The short version of the story, as it stands right now, is that the only thing making autonomous robotic killing weapons illegal is that it’s difficult for a robot to tell a friend from an enemy. When technology catches up with human judgement, all bets are off.
Think I’m insane? The United Nations Office at Geneva (UNOG), the folks who bring you the rules of warfare, started up a working group on killer robots three years ago, and the report from their 2016 meeting just came out. Now’s as good a time as any to start taking killer robots seriously.
Geneva
The Convention on Certain Conventional Weapons (CCW), also known as the Inhumane Weapons Convention, is an agreement among member states to ban the use of certain weapons that are deemed to be too nasty or indiscriminate to be used on people, even in times of war. We’re talking about landmines, flamethrowers, and (recently) blinding laser weapons.
Beginning in 2013, the UNOG decided to investigate whether killer robots should also be included in this list. You may not be surprised that achieving international consensus on an issue like this, mixing equal parts ethics, military considerations, technology, and even futurism, would take a while. The 2016 Informal Meeting of Experts took place in April, and the report just came out (PDF). It’s a high-level summary of the meetings, but it’s a good read.
The UNOG refers to killer robots by an acronym, naturally: Lethal Autonomous Weapons Systems (LAWS). The report starts off by noting that there are currently no LAWS deployed, so the question of whether or not to limit them is pre-emptive. Indeed, a LAWS right now would probably be illegal because international humanitarian law requires any lethal action to be necessary and proportional to the military goals, and requires that whoever carries it out be able to distinguish between civilian and military targets.
Right now, getting an autonomous system to distinguish civilians from military personnel is a difficult problem, and the moral judgement of what is proportional may never be possible. Considerations like these are what’s keeping LAWS from being deployed right now, but we don’t think that means that the UNOG is wasting its time. Our guess is that solving the AI and autonomy problems is just a matter of time.
AI
While gadflies like Elon Musk and even serious scientists worry about general-purpose AIs taking over the world through rather elaborate strategies, the reality is that the first AIs to kill people are more likely to be very specifically designed to do so. Although it’s not currently technically possible to distinguish friend from foe, that’s a much more practical task for an AI than deciding on its own that humans are weak and need to be exterminated. (Or whatever.)
There are already AI systems that do a lot of the pre-selection work. South Korea’s automated sentry bot, the Super aEgis II system, for instance, locates and tracks people using its IR cameras. Granted, the task is made significantly simpler by the fact that nobody is allowed in the de-militarized zone (DMZ) that it patrols, and essentially anyone walking around there is literally asking to get shot. But the system reportedly could have been made fully autonomous a few years ago. The South Korean government chose to limit that feature and keep a human in control.
In a more complicated situation, one could imagine an AI trained to distinguish combatants by their uniforms. Given significantly different styles, or flag patches on shoulders, or similar, this shouldn’t be too hard to do. Such a system may not be 100% foolproof, of course, but as long as it’s very accurate, it might be good enough.
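To make that concrete, here is a minimal sketch of what such a uniform classifier might look like: a pretrained image backbone with a new classification head, plus a confidence floor below which the decision is handed back to a human. The class labels, threshold, and model choice are illustrative assumptions, not taken from any fielded system.

```python
# Illustrative sketch only: a generic image classifier fine-tuned to label
# "uniform style" classes. Class names, threshold, and weights are hypothetical.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CLASSES = ["own_forces", "opposing_forces", "civilian"]  # hypothetical labels
CONFIDENCE_FLOOR = 0.95  # below this, defer to a human operator

# Start from an ImageNet-pretrained backbone and replace the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
# In practice, the new head would be fine-tuned on labeled imagery here.
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str) -> str:
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    conf, idx = probs.max(dim=0)
    # The whole legal argument hinges on this branch: uncertain cases
    # get escalated to a human rather than acted on autonomously.
    if conf.item() < CONFIDENCE_FLOOR:
        return "defer_to_human"
    return CLASSES[idx.item()]
```

The interesting design question isn’t the classifier itself, it’s where that confidence floor sits and who gets to set it.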
Humans in the Loop, For Now
And the question of human control is a slippery slope. With weapons like the US’s drone aircraft, the question isn’t whether a human is in full control. They already aren’t. The drone’s flight, target acquisition, and tracking are all autonomous. People are involved only at the last minute — verifying the target by steering a crosshair around with a joystick, and pressing the fire button. As with the Korean sentry bots, a human is kept in the loop, even if their participation is very brief.
On top of this single human, there’s an entire chain of command, of course. The drone operator is ordered to fire on an individual. This provides accountability for the action, and the threat of a war crimes trial hopefully will prevent the indiscriminate targeting of civilians. It’s this chain of accountability that makes something like the US drone program even purportedly legal, although it’s worth noting here in passing that this is not a settled issue.
While the drones are remotely teleoperated and largely automatic systems, it can be argued that they’re under substantial human control. But what happens when the human input is further reduced? Is it still sufficient, for instance, for a human to create the list of IMEIs (cell-phone identifiers) that the drones target? What about when neural networks do the network analysis that makes the short list of IMEIs that belong to the targets? Defining autonomy in these cases gets ever trickier.
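As a purely illustrative sketch of what that kind of “network analysis” could look like, imagine ranking devices in a call graph by centrality and handing the top few to a human for review. The identifiers, library choice, and cutoff below are all assumptions made up for the example.

```python
# Illustrative sketch: rank devices in a call graph by centrality and emit a
# short list for human review. All identifiers and thresholds are invented.
import networkx as nx

# Each edge is (caller_IMEI, callee_IMEI); toy data, not real identifiers.
call_records = [
    ("350000000000001", "350000000000002"),
    ("350000000000001", "350000000000003"),
    ("350000000000002", "350000000000003"),
    ("350000000000004", "350000000000001"),
]

graph = nx.Graph()
graph.add_edges_from(call_records)

# Score every device by how central it is to the communication network.
centrality = nx.betweenness_centrality(graph)

# The "short list" is just the top-k most central devices. Note that nothing
# in this step involves a human judgement about who those devices belong to.
short_list = sorted(centrality, key=centrality.get, reverse=True)[:3]
print(short_list)
```

The point of the sketch is how little human judgement remains once the shortlisting itself is automated: the human’s role shrinks to approving an output they didn’t derive.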
The UNOG report imagines another way that the human might get slowly squeezed out of the loop: if their role becomes small and/or repetitive enough that they slowly come to rely on the machine’s judgement rather than providing a counterbalance to it. Every one of you out there who clicks away a popup menu by force of habit knows that you can be fairly easily reduced to an automaton. The stakes are much higher in military robot oversight, naturally, but the mechanism is similar.
Open Questions
Determining what constitutes “meaningful human control” of automatic weapons systems will become tougher in the future as technological progress in AI and integration of weapons systems combine. US soldiers already routinely use robots for bomb-defusing and reconnaissance tasks, and existing weapons like Korea’s sentry bots and the US drones are already capable of nearly autonomous lethal action, although they’re deliberately fettered.
Killer robots are just around the corner, and the UNOG has taken three years so far and hasn’t even fully hammered out its definition of LAWS, much less taken steps to regulate them. Will we see them made legal for the battlefield before they’re outlawed by convention? The race is on!
The entire idea of “Rules of War” is absurd to start with. War is *supposed* to be a last-ditch effort to resolve issues when all other reasonable diplomatic attempts have failed. To place rules on how war can be waged only serves to further the problem, since those who subscribe to them only take on a unilateral handicap in the war, with no risk of penalty should they win using said outlawed tactics. Granted, as a member of the civilian population, it is good to know that member countries will not be dropping nerve gas on me should we end up in another world war – however, this only holds so long as a government believes that it is winning. History shows that once a country or group realizes that it is militarily outmatched, it will ignore any weapon bans or treaties, codes of conduct, and the general human instinct to not be simply evil.
What is my point here? Partly to point out that no matter what regulations are placed on such instruments, their use is inevitable. Also, I would caution that even should such instruments be banned, regulated, etc., any country that does not actively develop such capability will eventually be at the mercy of those who do.
Agreed. :)
I tend to disagree. The fact that the civilised countries do not develop and stockpile such weapons severely reduces the chance of them ending up in third-world battlefields and later in other places. I for one am happy that the terrorists that come and bring their joy to my country only have AK-47s and not blinding lasers and nerve gas.
Actually, they do have both. Sarin gas is a common one. Just because something is banned doesn’t mean people can’t make it.
A particularly cruel terrorist could buy a bunch of powerful lasers, make a disco ball out of them, and sneak into a movie theater to turn it on.
AKs are cheaper and more readily available.
Not to mention that in most conflicts the majority of countries sit on the sidelines as they have no real stake in the outcome. Once it is clear that a country has gone rogue by flouting the agreed-upon rules of war, it is much more likely that other countries will come off the bench. That changes the calculation significantly.
Besides, even if you win you still have to live in the post-war world. Using things like landmines just leaves you with a humanitarian catastrophe and decades of cleanup work. It makes no sense for an invading country to drop e.g. cluster bomb mines because once you’ve annexed the place your own troops would be tripping on them.
The rules of war are actually more because modern warfare tends to happen on someone else’s soil. The US or UN or Russia or China etc. fight other people’s wars, or fight their wars in other people’s countries and then pull out, leaving the local population to suffer the outcome.
“The entire idea of “Rules of War” is absurd to start with. War is *supposed* to be…”
And then you trip on your own argument and start dictating rules on war.
I agree too, “Rules of War” is the ultimate oxymoron!
How many citizens would enlist if the war image wasn’t painted with such broad strokes of bullshit?
The whole idea of those rules is that they are enforced by a third party — so if you ignore them, you have more than just that one enemy to worry about. Of course that doesn’t work in a global conflict where everyone is a side.
Agreed. War is supposed to be about using whatever means necessary to give your side an edge…no matter how barbaric, inhumane, shocking & uncivilized they might be. The age of chivalry died a long time ago.
Don’t get me wrong, conventions and treaties that try to limit/ban the use of some weapon ‘X’ that is deemed too inhumane are great and make us feel really good about ourselves. But they are at best ‘suggestions’ to be considered when there are ways of winning the war other than having to use weapon ‘X’. When there are no alternatives, human nature will dictate that weapon ‘X’ will be used.
Winning a war at all cost isn’t really what most wars are about.
Only those that are waged over principle or faith, and even then it takes one crazy motherfucker to pull off. Reality isn’t Bond movies.
World War II was not fought over faith…perhaps it was fought on principle but then again you can argue that all sides in all conflicts use some form of ‘principle’; real or imaginary, to justify their joining of the war.
I believe that the atomic bomb was used on Japan merely to speed up their imminent defeat/surrender. Many actually continue to argue that Japan was so overwhelmed with destruction from conventional bombs that they were going to surrender within a few weeks anyway.
And during WW2 no one used chemical weapons widely, unlike WW1.
And at the end of the war the Germans had sarin, soman, and tabun nerve-agent manufacture spooled up and fully weaponized. They had stockpiles with hundreds of tons of chemical munitions.
But they chose not to use them.
There are some weapons that even Hitler and the Nazis did not use.
I’m #triggered that those robots are saluting with the wrong hand.
They’re sinister robots.
Ha! I get it. Good one.
Yep. That was a good one.
I am a cynical grumpy old bastard who comes here only to countertroll Benchoff, but today, I pause to tip my fedora in gratitude of your delightful jape. For today, you, Sir, win the Internets.
I predict that real rules on these robots will be put in place when some country’s robot kills its own human comrades.
Always intrigued me too…
Deploying a robot to indiscriminately shoot anyone and everyone in its range is bad, but dropping a big-ass bomb in the same region that kills anyone and everyone in its range is OK??
Is it though?
I thought carpet bombing and nuclear weapons were considered a faux pas nowadays.
I am pretty sure that every bomb dropped on a region is dropped without checking that every single person in the area is military. Big bomb or little bomb, same result.
Then what’s all the hoopla about smart bombs?
WTF are you waffling on about ???
I am pretty sure all those bombs flying around in the Middle East simply go “boom”, with about as many smarts as a pile of rocks.
They [smart bombs] get fed dumb intel & people who want a bloodless war, instead get bombed hospitals, weddings, & girls schools.
“WTF are you waffling on about ???”
The fact that the whole smart bomb & videogame war is sold to the public on the point of avoiding collateral damage by striking only the targets that need to be. That is to say, it is no longer considered kosher to be killing indiscriminately and the military has to at least pretend to be avoiding civilian targets.
Re : “WTF are you waffling on about ???”
OK, I agree with you there. I missed your point.
My point was that conventional weapons have a much larger chance of collateral damage than the supposed “smart” weapons being developed, in theory at least.
“checking every single person in the area is military”
Isn’t “enemy combatant” nowadays defined (by the US) as anyone over the age of 16? That should be easy enough for a vision system to establish (measure height, close enough)
Actually, just sorting people by the mass of ferrous metals they are carrying is enough in most cases. You know, that stuff that reflects electromagnetic radiation…
Dan, brilliant observation and lead for development of smart weapons. Once you identify that the enemy is controlling an area, target ferrous metal, which is currently a major component of almost all weapons. See the bot coming and don’t want to be hit, drop and remove yourself from any ferrous metal.
Tech challenge: remote identification of ferrous metal, as opposed to other metal.
Bombs, smart or dumb, are bad news because they’re indiscriminate. But when they’re used against a military target (factory, airport, etc) they’re deemed acceptable, even though they often catch a bunch of civilians. It’s a judgement call made in an imperfect world.
OTOH, bombs that fail to detonate are a PITA. I live in Munich, and we’re still finding WWII bombs whenever they dig for a new underground garage or a rail tunnel.
The % of civilians killed in wars has dropped over time, which is either a testament to technology or the Geneva Convention or both. It’s undoubtedly a good thing. There are also people who think that reducing the number of wars would also be swell.
There are “better” bombs: you can make the casing and the charge biodegradable and line the casing with tungsten powder, so the blast is very intense but contained to a short range. The only problem is that the wounds caused by them are not treatable. The energy of the chemical reaction in the explosive is transferred to the powder as heat and kinetic energy, so you get a rapidly expanding cloud of white-hot and very dense dust that slows down quickly due to air resistance. The range can be tuned very accurately, as there is little chance of shrapnel turning into a long-range projectile that, e.g., flies across the street and kills the child in the house there, or whatever. You can imagine what the injuries are like for those near the blast: their bodies are filled with thousands of needle-puncture-like burn wounds so numerous they cannot be practically treated, and their body remains full of the dust, if they survive.
Considering the warfare tactics used by ISIS and others in that region, where they dress and live among the civilians, I think it’s fair to say that even humans can’t always accurately tell the difference between friend and foe. Provide a robot with access to a massive facial database and the ability to detect uniforms/weapons/etc., and I suspect it will not be long before robots are much better at distinguishing friends from foes than we humans are. Is there a point at which it becomes inhumane to put humans in the loop?
Still takes a human to define friend from foe.
The robot has no friends or foes. Its operators do.
Not once you get AIs that are able to determine goals. Then it’s the cliché ‘computer gone bad’: any human that impedes its goals is marked for death as being ‘bad’.
Good question.
One of the points brought up by the UNOG was that robots/AI, no matter how accurate, aren’t alive and thus have no basis for making (moral) decisions on whether or not to take a life. (Deep!) They simply can’t be made to understand the implications. In this view, it will always be inhumane w/o humans.
If they are not allowed to kill, I think autonomous weapons may go towards non-lethal means. Nets, sticky foam, taser darts, sedatives, sonic weapons, blinding (non-permanent) lights. It would eliminate the moral question (mostly, non-lethal can still be dangerous), and it’s also a much more interesting engineering problem.
@Kratz – or maybe we need tall robots that just snatch up people and hold them for further processing. A scene from War of the Worlds comes to mind…
I can understand that to an extent. But, basically, I think it’s a cop-out so there’s someone to blame when someone is misidentified. Let’s assume we get to a point where facial recognition is 99% accurate and we build a facial database that includes all of our most-wanted, and this database essentially acts as a kill list. If a bot can enter an area and identify those targets with 99% accuracy, then I’d see no reason to have a human (who is probably less accurate) confirm that the kill should be initiated. Wouldn’t that be kind of like asking Garry Kasparov to confirm moves for Deep Blue in a chess match?
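For what it’s worth, the matching step in such a system is conceptually simple. Here’s a hypothetical sketch using made-up embeddings, where a detected face is compared against a watchlist by cosine similarity and a threshold decides “match” or not. Everything in it (names, vectors, threshold) is invented for illustration.

```python
# Hypothetical sketch of watchlist matching by face-embedding distance.
# Names, vectors, and the threshold are illustrative only; real systems also
# contend with base rates, pose, lighting, and adversaries.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these 128-D vectors came from a face-embedding network.
rng = np.random.default_rng(0)
watchlist = {"suspect_A": rng.normal(size=128), "suspect_B": rng.normal(size=128)}
detected_face = rng.normal(size=128)

MATCH_THRESHOLD = 0.8  # invented; tuning it trades misses against false positives

best_name, best_score = max(
    ((name, cosine_similarity(detected_face, emb)) for name, emb in watchlist.items()),
    key=lambda pair: pair[1],
)

# Even at "99% accuracy", what happens on a match is a policy decision,
# not a property of the classifier.
if best_score >= MATCH_THRESHOLD:
    print(f"match: {best_name} ({best_score:.2f})")
else:
    print("no match above threshold")
```

The hard part isn’t this comparison, it’s deciding who is accountable when the threshold is set wrong.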
We already have autonomous weapons systems that choose targets without human intervention https://en.wikipedia.org/wiki/Phalanx_CIWS
“depending on the operator conditions, the system will either fire automatically or will recommend fire to the operator. […] The CIWS does not recognize identification friend or foe, also known as IFF. The CIWS only has the data it collects in real time from the radars to decide if the target is a threat and to engage it.”
Guess those Russian pilots were lucky CIWS wasn’t deployed/operating when they buzzed our ships.
“The Phalanx Close-In Weapons System (CIWS) was developed as the last line of automated weapons defense (terminal defense or point defense) against anti-ship missiles… and attacking aircraft, including high-g and maneuvering sea-skimmers.”
I would guess that they were not activated at the time. The arrival of the Russian planes would not have been a surprise and shooting one down would be bad, very bad. The article does list an incident where a US plane was mistakenly shot down by a Japanese system because it was enabled too soon, (It was supposed to shoot down the drone being towed rather than the tow plane itself) so this sort of thing certainly can happen.
The Russian plane did not use targeting radar or any other system that indicated an attack.
If it did, “the boat” (A GUIDED MISSILE DESTROYER) can handle a Sukhoi Su-24.
I would say that there are many other systems that could be said to be autonomous: Air defense, naval mines, torpedoes, land mines, religiously fanatical suicide bombers and drunken soldiers.
Killer robots have been around since WWII. Guided torpedoes and the V2 are just two examples. Today, AIM-120s and Captor mines are modern examples.
This is why they had our local FIRST Robotics season kick off at the SOCOM tech incubator. This is the future of weapons systems and the generals want nerd kids to help them build better killing machines.
Even if there is no human in the loop when Major General Disaster screws up a deployment and war crimes are committed, she or her political bosses will still be held accountable using existing international legal frameworks, so there are the same limits on the dangers of these new weapons as there are on any existing weapons.
Accountability. I didn’t get much into that, but there’s some question about what happens in the case of a malfunction that kills innocents — whether the chain of command is responsible, or the company that produced the weapons system.
Is it harder to properly train your soldiers or properly code image-recognition software?
What about when humans start facing threats that only a robot can realistically intercept or retaliate against? Things that happen too quickly for us to perceive or be otherwise involved in? Of course, that is already true and it already exists to some extent – like the commenter who brought up the Phalanx CIWS (a robot for shooting down missiles). No one ever mentions that angle when talking up robot involvement in arms and ordnance, though.
Some level of autonomy (automation) has existed in weapons for a very long time.
http://www.glennsmuseum.com/bombsights/bombsights.html
AI is a tool. Very complicated, very powerful, but still a tool. Danger arises from the fact that it will be easier to make a robotic system that looks like it gets the job done, without tackling corner cases that are hard to see during the design phase.
Imagine r/shittyrobots and automated-phone-systems with guns.
Anybody remember “Kunetra the killer city” (IIRC)? Or Kyashan, where the AI, programmed to defend the environment, decides that the human race is the environment’s biggest foe?
Two things:
AI experts said that something like AlphaGo was 10 years away; 6 months later we have AlphaGo. These people had a meeting in November 2015 and only now does this vacuous report get finished. These people have no idea what it is that they imagine they can control.
Also:
And I’m assuming that “many” > “several” here…
“18. Tasking machines to make decisions on the life and death of a human being without any human intervention was considered by many delegations to be ethically unacceptable. Several delegations made the point that they had no intention of developing or acquiring weapon systems of this nature.”
I’d love to know which delegations have the intention to develop systems that they deem ethically unacceptable.
A number of the delegates shared your suspicions about the state/progress of AI. That’s the rationale for having special treatment for LAWS over and above the already existing restrictions. Are (completely accurate) LAWS still immoral or inhumane?
Re: many vs several. You’re right, and I suspect it’s bad language choice. The UNOG needs the Hackaday Comments Crew to do their proofreading!
If they’re “completely accurate” at distinguishing between “friend” and “foe”, then in my opinion they are conscious to the same degree (and I believe that consciousness is on a continuum) as any human that would follow such orders. So it’s as moral/humane as the murder in the first place.
“18. Tasking machines to make decisions on the life and death of a human being without any human intervention was considered by many delegations to be ethically unacceptable.”
Let’s talk about self-driving cars and corner cases.
http://hackaday.com/2015/10/29/the-ethics-of-self-driving-cars-making-deadly-decisions/
Great.
War is a Racket
https://ratical.org/ratville/CAH/warisaracket.html
> rules of war kek
“All is fair in love and war”
As long as humanity still hoards around 4,000 active nuclear warheads “just to scare the enemy a little” (on top of the other >10,000 nuclear warheads that are “in storage”), the threat of some robots armed with machine guns or rocket launchers seems kinda low to me. Especially when you consider how hard it actually would be to develop and maintain said robot to be viable in an active warzone, compared to the simple task of launching a nuke to potentially eradicate millions of civilians, cities, or even a small country almost instantly “with the push of a button”.