Emotional Hazards That Lurk Far From The Uncanny Valley

A web search for “Uncanny Valley” will retrieve a lot of information about the discomfort we feel when an artificial creation is eerily lifelike. The phenomenon tells us a lot about human psychology and about the design challenges ahead. But what about the opposite, when machines are clearly machines? Are we in the clear? It turns out the answer is “No,” as [Christine Sunu] explained at a Hackaday Los Angeles meetup. (Video also embedded below.)

When we build a robot, we know what’s inside the enclosure. But people who don’t know tend to extrapolate too much based only on the simple behavior they can see. As [Christine] says, people “anthropomorphize at the drop of a hat,” projecting emotions onto machines and feeling emotions in return. This happens even when machines are deliberately designed to be utilitarian. iRobot was surprised by how many Roomba owners gave their robot vacuums names and treated them as family members. A similar eruption of human empathy greeted Boston Dynamics’ video footage demonstrating their robot staying upright despite being pushed around.

In the case of a Roomba, this kind of emotional power is relatively harmless. In the case of robots doing dangerous work in place of human beings, such attachment may hinder the robots from doing the job they were designed for. Even more worrisome, wherever there’s power, there’s potential for abuse. To illustrate one such potential, [Christine] brought up the Amazon Echo. The cylindrical puck is clearly a machine and serves as a point-of-sale terminal, yet people have started treating Alexa as their trusted home advisor. If Amazon were to start monetizing this trust, would users realize what’s happening? Would they care?

Continue reading “Emotional Hazards That Lurk Far From The Uncanny Valley”

Simon Says Smile, Human!

The bad news is that when our robot overlords come to oppress us, they’ll be able to tell how well they’re doing just by reading our facial expressions. The good news? Silly computer-vision-enhanced party games!

[Ricardo] wrote up a quickie demonstration, mostly powered by OpenCV and Microsoft’s Emotion API, that scores your ability to mimic emoticon faces. So when you get shown a devil-with-devilish-grin image, you’re supposed to make the same face convincingly enough to fool a neural network classifier. And hilarity ensues!
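For the curious, here is a minimal sketch of the kind of round trip involved. This is not [Ricardo]’s code: the endpoint URL, key, and the way the score is checked are placeholders following the old Emotion API conventions.

```python
# Minimal sketch (not [Ricardo]'s code): grab a webcam frame with OpenCV,
# send it to an emotion-recognition endpoint, and see how well the face
# matches a target emotion. Endpoint and key are placeholders.
import cv2
import requests

ENDPOINT = "https://<region>.api.cognitive.microsoft.com/emotion/v1.0/recognize"  # placeholder
API_KEY = "YOUR_KEY_HERE"  # placeholder

def capture_frame():
    cam = cv2.VideoCapture(0)            # default webcam
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("could not read from webcam")
    ok, jpeg = cv2.imencode(".jpg", frame)
    return jpeg.tobytes()

def emotion_scores(image_bytes):
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/octet-stream",
    }
    resp = requests.post(ENDPOINT, headers=headers, data=image_bytes)
    resp.raise_for_status()
    faces = resp.json()                  # one entry per detected face
    return faces[0]["scores"] if faces else {}

if __name__ == "__main__":
    target = "happiness"                 # e.g. the devil-with-devilish-grin round
    scores = emotion_scores(capture_frame())
    print(f"Your '{target}' score: {scores.get(target, 0.0):.2f}")
```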

Continue reading “Simon Says Smile, Human!”

Raspberry Pi Robot That Reads Your Emotions

It’s getting easier and easier to add machine intelligence to your hacks, even to the point where you sometimes don’t have to install any special software. In this case [Dexter Industries] has added the ability to read human emotions to their EmpathyBot robot by making use of Google Cloud Vision.

Press a button on the robot and it moves forward until it’s a certain distance from an object. It then takes a picture and sends it off to Google Cloud Vision along with a request to do face detection. The response that Google returns is in JSON format and, if it finds a face, includes the likelihood of the face being happy, sorrowful, angry, or surprised. The robot parses that response and gives an appropriate canned speech using the text-to-speech software eSpeak, e.g. “You seem happy! Tell me why you are so happy!”.
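As a rough sketch of that round trip (this is not the EmpathyBot source; the API key and image path are placeholders), the exchange with Cloud Vision and eSpeak might look something like this:

```python
# Rough sketch only: send a captured photo to Google Cloud Vision for face
# detection, pick the strongest likelihood, and speak a canned line with eSpeak.
import base64
import subprocess
import requests

VISION_URL = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"  # placeholder

def detect_emotion(image_path):
    with open(image_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    body = {"requests": [{
        "image": {"content": content},
        "features": [{"type": "FACE_DETECTION", "maxResults": 1}],
    }]}
    resp = requests.post(VISION_URL, json=body).json()
    faces = resp["responses"][0].get("faceAnnotations", [])
    if not faces:
        return None
    face = faces[0]
    # Vision reports each emotion as a likelihood string (e.g. "VERY_LIKELY")
    likelihoods = {
        "happy": face["joyLikelihood"],
        "sorrowful": face["sorrowLikelihood"],
        "angry": face["angerLikelihood"],
        "surprised": face["surpriseLikelihood"],
    }
    ranking = ["VERY_LIKELY", "LIKELY", "POSSIBLE", "UNLIKELY", "VERY_UNLIKELY", "UNKNOWN"]
    return min(likelihoods, key=lambda k: ranking.index(likelihoods[k]))

def speak(text):
    subprocess.call(["espeak", text])     # same text-to-speech tool the robot uses

if __name__ == "__main__":
    emotion = detect_emotion("face.jpg")  # placeholder photo from the robot's camera
    if emotion == "happy":
        speak("You seem happy! Tell me why you are so happy!")
    elif emotion:
        speak(f"You seem {emotion}.")
    else:
        speak("I could not find a face.")
```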

[Dexter] has made the source code available on GitHub. It’s written in Python and is easy to read for anyone with even a little programming experience. The video after the break gives a number of demonstrations, including some with non-human subjects.

Continue reading “Raspberry Pi Robot That Reads Your Emotions”

Robots learning facial expressions

Researchers at UC San Diego have been working on a robot that learns facial expressions. Starting with a bunch of random movements of the face “muscles”, the robot is rewarded each time it generates something close to an existing expression. Iterating this way, it has slowly developed several recognizable expressions. We have a few questions. First, are we the only ones who see a crazy woman with a mustache in the picture above? Why is that? What makes [Einstein] look so effeminate in that picture? Secondly, what reward do you give a robot? You can actually see this guy in action in a video after the break.
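As a toy illustration of that reward-driven trial and error (this is not the UCSD code; the servo count, target, and reward function are all invented), the core loop amounts to a random search that only keeps improvements:

```python
# Illustrative only: jiggle the face "muscles" at random and keep any change
# that scores closer to a target expression. In the real project the reward
# came from expression-recognition software, not a known target vector.
import random

NUM_SERVOS = 30                        # assumed number of facial servo "muscles"
TARGET = [random.random() for _ in range(NUM_SERVOS)]  # stand-in for "smile"

def reward(pose):
    # Higher is better: negative squared distance to the target expression.
    return -sum((p - t) ** 2 for p, t in zip(pose, TARGET))

pose = [0.5] * NUM_SERVOS              # start from a neutral face
best = reward(pose)
for step in range(10000):
    trial = [min(1.0, max(0.0, p + random.uniform(-0.05, 0.05))) for p in pose]
    r = reward(trial)
    if r > best:                       # keep only movements that look more like the target
        pose, best = trial, r
print("final reward:", best)
```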

Continue reading “Robots learning facial expressions”

Lexlrie

Lexlrie is basically a feed display. It can connect to Twitter, Facebook, and We Feel Fine for its updates. What makes this project different is that it is supposed to alter its lighting based on the mood of the updates. The system looks for words like “better” and “sorry” and displays color patterns based on them. We have no idea what “better” should look like, but it’s a cool idea. You can get more details of its construction here. This project vaguely reminds us of Pulse, which was intended to show the emotional tone of blogger.com updates.
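A hedged guess at what that word-to-color mapping could look like (the word lists and RGB values here are ours, not Lexlrie’s):

```python
# Hypothetical sketch of keyword-based mood lighting: scan an update for mood
# words and pick a color. Word lists and RGB values are invented for illustration.
MOOD_COLORS = {
    "better": (0, 255, 0),    # green-ish for positive words (assumption)
    "happy": (255, 200, 0),
    "sorry": (0, 0, 255),     # blue-ish for apologetic words (assumption)
    "sad": (80, 0, 160),
}

def color_for_update(text, default=(255, 255, 255)):
    words = text.lower().split()
    for word, rgb in MOOD_COLORS.items():
        if word in words:
            return rgb
    return default

print(color_for_update("feeling much better today"))  # -> (0, 255, 0)
```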

Wiimote-driven motion effects

Check out the video above by [Adrien Mondot] for an extensive demonstration of eMotion being used with a Wiimote. eMotion is a physics-based visual tool for the Mac. It’s designed to enhance performances by reacting to real-world motion. Its grounding in physics makes the resulting motion appear more natural than if it were arbitrarily generated. The video above combines eMotion with the output of Wiimote Whiteboard, a low-cost interactive whiteboard that uses the Wiimote camera plus IR light pens. While the video takes place in a small area, we can see how this could be scaled to a much larger space with IR lights mounted on performers.

[via CDM]