The World Economic Forum, held each year by a non-profit founded in 1971, claims it “engages the foremost political, business and other leaders of society to shape global, regional and industry agendas.” This year, the Forum hosted a session titled What If: Robots Go to War? Participants included a computer science professor, an electrical engineering professor, the chairman of BAE, a senior fellow at the Vienna Center for Disarmament and Non-Proliferation, and a Time magazine editor.
We couldn’t help but think the topic is a little late: robots have already gone to war. The real questions are how autonomous these robots are and how autonomous they should be. The panel, like the Campaign to Stop Killer Robots, saw the need for human operators to oversee LARs (Lethal Autonomous Robots). The idea is that autonomous killing machines can’t resolve the moral nuances involved in warfare. That’s not totally surprising, since we aren’t convinced people can resolve them, either.
Besides, not every killing machine goes to war. As [James Hobson] pointed out last year, self-driving cars (coming any day now, they tell us) may have to weigh the good of the few against the good of the many, a logical dilemma that gave even Mr. Spock trouble.
Speaking of which, we’ve talked about the moral issues surrounding autonomous killing drones before. Discussions like the one at the World Economic Forum (see the video below) are thought-provoking, but perhaps not practical. History shows that if something can be built, it will be built. Those of us who might design robots (killing or otherwise) need to exercise good judgment, but realistically we are a long way from having robots smart enough to handle the totally unexpected or navigate moral gray areas.