A web search for “Uncanny Valley” will retrieve a lot of information about the discomfort we feel when an artificial creation is eerily lifelike. The phenomenon tells us a lot about both human psychology and the design challenges ahead. But what about the opposite, when machines are clearly machines? Are we in the clear? It turns out the answer is “No”, as [Christine Sunu] explained at a Hackaday Los Angeles meetup. (Video also embedded below.)
When we build a robot, we know what’s inside the enclosure. But people who don’t know tend to extrapolate far too much from the simple behavior they can see. As [Christine] says, people “anthropomorphize at the drop of a hat,” projecting emotions onto machines and feeling emotions from them in return. This happens even when machines are deliberately designed to be utilitarian. iRobot was surprised by how many Roomba owners gave their robot vacuums names and treated them as family members. A similar eruption of human empathy followed Boston Dynamics’ video footage of their robots staying upright despite being pushed around.
In the case of a Roomba, this kind of emotional power is relatively harmless. In the case of robots doing dangerous work in place of human beings, such attachment may keep robots from doing the job they were designed for. Even more worrisome, wherever there is power, there is potential for abuse. To illustrate one such potential, [Christine] brought up the Amazon Echo. The cylindrical puck is clearly a machine serving as a point-of-sale terminal, yet people have started treating Alexa as their trusted home advisor. If Amazon were to start monetizing this trust, would users realize what’s happening? Would they care?
Continue reading “Emotional Hazards That Lurk Far From The Uncanny Valley”