This morning I want you to join me in thinking a few paces into the future. This thought experiment lets us discuss some hard questions about automation technology. I’m not talking about thermostats, porch lights, and coffee makers. The things we really need to think about are the machines that can cause harm. Like self-driving cars. Recently we looked at the ethics behind decisions made by those cars, but this is really just the tip of the iceberg.
A large chunk of technology is driven by military research (the Internet, the space race, bipedal robotics, even autonomous vehicles through the DARPA Grand Challenge). It’s easy to imagine that some of the first sticky ethical questions will come from military autonomy and unfortunate accidents.
Our Fictional Drone Scenario
The Sundancer-3 is no ordinary drone. Based on the MQ-1 Predator UAV, our fictitious model performs many of the same functions. It has a few key differences, however, the main one being that it can operate without any pilot. Its primary mission is to use its onboard cameras and facial recognition software to identify enemy combatants and take them out, all without any human intervention. It can defend itself if fired upon, and it can also assess its surroundings and will not fire if there is even a remote chance of civilian casualties. The Sundancer was hailed as a marvel of human ingenuity and viewed as the future of military combat vehicles. Militaries all over the world began investing in their own truly autonomous robotic platforms. But this would all change on a fateful night in a remote village a world away.
Its critics say the accident was predictable. They said from the beginning that taking human decision-making out of lethal action would result in the accidental killing of innocent people. And this is exactly what happened. A group of school kids were celebrating a local holiday with some illegally obtained fireworks. A Sundancer was patrolling the area, mistook one of the high-reaching fireworks for an attack, and launched one of its missiles in self-defense. A few hours later, the rising sun laid bare for all the world to see a senseless tragedy that never should have happened.
For the first time in history, an autonomous robot had made the decision, completely on its own, to take lethal action against an innocent target. The outrage was severe, and everyone wanted answers. How could this happen? And perhaps more importantly, who is responsible?
Though our story is not real, it is difficult to say that a similar scenario will not play out in the near future. We can surmise that the consequences will be similar, and that the public will want someone to blame. Our job is to discuss who, if anyone, is to blame when a machine injures or takes the life of a human of its own accord. Not just legally, but morally as well. So just who is to blame?
While it is obvious that a machine cannot be punished for its actions, some interesting questions arise when looking at the deadly scenario from the machine’s point of view. In our story, the Sundancer mistook a firework for an attack and responded accordingly. From its viewpoint, it was being targeted by a shoulder-fired missile. It is programmed to stay alive, and as far as it’s concerned, it did nothing wrong. It simply did not have the ability to differentiate between a missile and a harmless firework. And it is in this inability that the problem resides.
A similar issue can be seen in “suicide by cop” events. In a life and death situation, a police officer does not have the ability to tell the difference between a real gun and a fake gun. The officer will respond with deadly force every time, protecting themselves when threatened in the line of duty.
Does the manufacturer of the Sundancer have any fault in the deaths of the students? Indeed, it was their machine that made the mistake. They built it. One could argue that if they had not built the machine, the accident would never have happened. This argument is quickly put to rest by looking at similar cases. A person who drives a car into a crowd of people is at fault for the accident. Not the car, nor the manufacturer of the car.
The case of the Sundancer is a bit different, however. It made the decision to launch the missile. There is no human operator to place the blame upon. And this is the key. If the manufacturer had prior knowledge that it was building a machine that could take human life without human intervention, does it hold any responsibility for the deaths?
Let’s explore this concept a bit deeper. In the movie Congo, a group of researchers used motion-activated machine gun turrets to protect themselves from dangerous apes. The guns basically shot anything that moved. In real life, this would be an extremely irresponsible machine to build. If such a device were built and it took an innocent life, you can rest assured that the manufacturer would be held partly to blame.
Now let us apply some sort of hypothetical sensor to our gun turret, such that it could detect the difference between an ape and a person. This would change everything. If a mistake happened and it took the life of an innocent person, the manufacturer could say with a clear conscience that it was not to blame. It could say there was an unknown flaw in the sensor that distinguishes between human and ape. The manufacturer of the Sundancer could make a similar argument. It’s not a manufacturing problem. It’s an engineering problem.
Several years ago, I built a custom alarm clock as a gag gift for a friend. The thing drew so much current that I was unable to find a DC ‘wall wart’ power supply that would run it. Long story short, I wound up making my own power supply and embedding it in the clock. I made it clear to my friend that she couldn’t leave the clock plugged in while unattended. I did this because I had no training in how to design power supplies safely. If something went wrong and it caught fire, it would have been my fault. I was the designer. I was the engineer. And I bear ultimate responsibility if my project hurts someone.
The same can be said of any engineer, including the ones who designed the Sundancer. They should have thought about how to handle a mistaken attack. There should have been protocols…checks and balances put in place to prevent such a tragedy. This, of course, is easier said than done. If you make it too safe, the machine becomes ineffective. It will never fire a missile because it will be constantly asking itself if it’s OK to fire. By then, it’s too late and it gets shot out of the sky. But this is still a better outcome than mistakenly firing a missile at students.
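To make that trade-off concrete, here is a minimal sketch of what such a check-and-balance gate might look like. Everything here is hypothetical: the function names, the sensor fields, and the threshold values are all invented for illustration, not drawn from any real fire-control system.

```python
# Hypothetical fire-authorization gate: only respond when the threat is
# near-certain AND the estimated civilian risk is near-zero.
from dataclasses import dataclass


@dataclass
class SensorReading:
    threat_confidence: float  # 0.0-1.0, how missile-like the signature looks
    civilian_risk: float      # 0.0-1.0, estimated chance of bystander harm


def authorize_fire(reading: SensorReading,
                   threat_threshold: float = 0.95,
                   civilian_threshold: float = 0.01) -> bool:
    """Raising threat_threshold makes the machine safer but slower to defend
    itself; lowering it risks exactly the firework mistake in our story."""
    return (reading.threat_confidence >= threat_threshold
            and reading.civilian_risk <= civilian_threshold)


# A firework can look very missile-like, but the civilian-risk check vetoes it:
firework = SensorReading(threat_confidence=0.97, civilian_risk=0.6)
missile = SensorReading(threat_confidence=0.99, civilian_risk=0.0)
print(authorize_fire(firework))  # False: civilians are nearby
print(authorize_fire(missile))   # True
```

Note that the hard part is not this gate; it is producing trustworthy confidence and risk numbers in the first place, which is precisely where the Sundancer’s engineers fell short.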
I argue that it is the engineer who is to blame for the Sundancer accident.
This leaves us stuck between a high-voltage transformer and a 1-farad capacitor. If the engineer holds ultimate responsibility for the mistakes of his or her machine, would they build a machine that could make such a mistake? Would you?