A face made up of 3 OLEDs

It’s Nice Having Someone To Talk To

We all get a bit lonely from time to time, and talking to other humans can be a challenge. With social robots still finding their way these days, [Markus] decided to build a DIY solution he could make cheaply, resulting in the “Conversation Face.”

The build is pretty simple, really. There are three OLED displays, two for the eyes and one for the mouth, each programmed with different graphics depending on the expression being displayed. There’s also a small electret microphone that senses when you are speaking to the face. Finally, a simple face cutout covers the electronics and ties the aesthetic together.

The eyes are programmed identically, since they move together for most expressions. [Markus] was able to get a blinking animation by quickly moving a white circle vertically through the eye screens, and the results are pretty convincing. He also moves the eyes around each OLED to make the expressions seem more dynamic.
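The write-up doesn’t dwell on firmware, but the blink effect is simple enough to sketch. Here’s a minimal, hypothetical version assuming two SSD1306 modules on I2C driven from a Raspberry Pi with the luma.oled library; the actual displays, bus, and addresses in [Markus]’s build may well differ.

```python
# Hypothetical blink sketch: two SSD1306 "eye" panels on I2C addresses
# 0x3C and 0x3D. A white circle slides down past the bottom edge and back
# up, so both eyes appear to close and reopen together.
import time

from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

eyes = [ssd1306(i2c(port=1, address=addr)) for addr in (0x3C, 0x3D)]
EYE_RADIUS = 20

def draw_eye(device, cx, cy):
    """Draw a filled white circle centred at (cx, cy)."""
    with canvas(device) as draw:
        draw.ellipse(
            (cx - EYE_RADIUS, cy - EYE_RADIUS, cx + EYE_RADIUS, cy + EYE_RADIUS),
            fill="white",
        )

def blink(step=10, delay=0.02):
    """Sweep the circle off the bottom of the 128x64 panel and back."""
    for cy in list(range(32, 96, step)) + list(range(96, 32, -step)):
        for eye in eyes:
            draw_eye(eye, 64, cy)
        time.sleep(delay)

while True:
    for eye in eyes:
        draw_eye(eye, 64, 32)   # resting, open-eye position
    time.sleep(3)
    blink()
```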

There’s not much to the mouth. [Markus] only has a mouth-open and a mouth-closed animation. The mouth opens when it’s the face’s turn to talk and closes when the face should be listening, which is easily determined by measuring the output of the microphone. Interestingly enough, you can program the face to be quiet and attentive when it’s being spoken to, or quite chatty to show that it’s actively engaging in the conversation.
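That turn-taking logic is essentially a loudness threshold. A rough sketch of one way to do it in Python with the sounddevice library is below; the block size and threshold are placeholder values, show_mouth() is a hypothetical stand-in for whatever updates the mouth OLED, and [Markus]’s build may just as easily read the electret capsule through an ADC.

```python
# Hypothetical listen/talk detector: the RMS level of short microphone
# blocks decides whether to show the mouth-closed (listening) or
# mouth-open (talking) frame. Threshold and block size are made up.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000
BLOCK = 1600              # 0.1 s of audio per decision
SPEECH_THRESHOLD = 0.02   # tune for your microphone and gain

def person_is_speaking():
    block = sd.rec(BLOCK, samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()
    rms = float(np.sqrt(np.mean(block ** 2)))
    return rms > SPEECH_THRESHOLD

def show_mouth(state):
    """Placeholder for updating the mouth OLED with the open/closed graphic."""
    print("mouth:", state)

while True:
    if person_is_speaking():
        show_mouth("closed")  # be quiet and attentive while being spoken to
    else:
        show_mouth("open")    # take a turn talking when the room goes quiet
```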

We can’t decide if the Conversation Face is more or less creepy than those social robots, but we thought you would get a kick out of it either way. It also looks like a funny anime character, if you ask us.

Machine Learning Helps You Track Your Internet Misery Index

We all seem to intuitively know that a lot of what we do online is not great for our mental health. Hang out on enough social media platforms and you can practically feel the changes your mind inflicts on your body as a result of what you see — the racing heart, the tight facial expression, the clenched fists raised in seething rage. Not on Hackaday, of course — nothing but sweetness and light here.

That’s all highly subjective, of course. If you’d like to quantify your online misery more objectively, take a look at the aptly named BrowZen, a machine learning application by [Nick Bild]. Built around an NVIDIA Jetson Xavier NX and a web camera, BrowZen captures images of the user’s face periodically. The expression on the user’s face is classified using a model trained to recognize the facial postures associated with emotions like anger, surprise, fear, and happiness. The app records your mood and which website you’re currently looking at, and stores the results in a database. Handy charts let you know which sites are best for your state of mind; it’s not much of a surprise that Twitter induces rage while Hackaday pushes [Nick]’s happiness button. See? Sweetness and light.
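The core loop is easy to picture even without a Jetson: grab a webcam frame every so often, classify the expression, note what you were looking at, and log it. The sketch below is a simplified stand-in rather than [Nick]’s code; classify_emotion() and current_site() are hypothetical placeholders for the trained expression model and the browser-history lookup BrowZen actually uses.

```python
# Simplified stand-in for the BrowZen loop: periodically capture a webcam
# frame, classify the facial expression, and log it against the current site.
# classify_emotion() and current_site() are hypothetical placeholders.
import sqlite3
import time

import cv2

db = sqlite3.connect("browzen.db")
db.execute("CREATE TABLE IF NOT EXISTS mood (ts REAL, site TEXT, emotion TEXT)")

cam = cv2.VideoCapture(0)

def classify_emotion(frame):
    """Placeholder for the trained expression classifier (anger, fear, ...)."""
    raise NotImplementedError

def current_site():
    """Placeholder for whatever reads the active browser tab or history."""
    raise NotImplementedError

while True:
    ok, frame = cam.read()
    if ok:
        db.execute(
            "INSERT INTO mood VALUES (?, ?, ?)",
            (time.time(), current_site(), classify_emotion(frame)),
        )
        db.commit()
    time.sleep(60)   # one sample per minute is plenty for a misery index
```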

Seriously, we could see something like this being very useful for psychological testing, marketing research, or even medical assessments. This adds to [Nick]’s array of AI apps, which range from tracking which surfaces you touch in a room to preventing you from committing a fireable offense on a video conference.

Continue reading “Machine Learning Helps You Track Your Internet Misery Index”

Your Next Robot Needs Googly Eyes, And Other Lessons From Disney

There are so many important design decisions behind a robot: battery, means of locomotion, and position sensing, to name a few. But at a library in Helsinki, one of the most surprising design features of a librarian’s assistant robot was a pair of googly eyes. Futurice, the company that built the robot for the Oodi library, found the googly eyes to be a very important component.

The eyes are not to help the robot see, because of course they aren’t functional — at least not in that way. However, without the eyes, the designers found that people had trouble relating to the service robot. The robot also needed to show emotion, using the eyes along with various sounds and motion. This was inspired, apparently, by Disney’s rules for animation. In particular, the eyes fit the rule of “exaggeration.” The robot could look bored when it had no task, excited when it was helping people, and unhappy when people were not being cooperative.

Continue reading “Your Next Robot Needs Googly Eyes, And Other Lessons From Disney”

Emotional Hazards That Lurk Far From The Uncanny Valley

A web search for “Uncanny Valley” will retrieve a lot of information about the discomfort we feel when an artificial creation is eerily lifelike. The syndrome tells us a lot about both human psychology and the design challenges ahead. What about the opposite, when machines are clearly machines? Are we all clear? It turns out the answer is “No,” as [Christine Sunu] explained at a Hackaday Los Angeles meetup. (Video also embedded below.)

When we build a robot, we know what’s inside the enclosure. But people who don’t know tend to extrapolate too much based only on the simple behavior they can see. As [Christine] says, people “anthropomorphize at the drop of the hat,” projecting emotions onto machines and feeling emotions in return. This happens even when machines are deliberately designed to be utilitarian. iRobot was surprised by how many Roomba owners gave their robot vacuums names and treated them as family members. A similar eruption of human empathy occurred with Boston Dynamics’ video footage demonstrating their robot staying upright despite being pushed around.

In the case of a Roomba, this kind of emotional power is relatively harmless. In the case of robots doing dangerous work in place of human beings, such attachment may keep the robots from doing the job they were designed for. And even more worrisome, the fact that this power exists means there’s potential for abuse. To illustrate one such potential, [Christine] brought up the Amazon Echo. The cylindrical puck is clearly a machine and serves as a point-of-sale terminal, yet people have started treating Alexa as their trusted home advisor. If Amazon were to start monetizing this trust, would users realize what’s happening? Would they care?

Continue reading “Emotional Hazards That Lurk Far From The Uncanny Valley”

Simon Says Smile, Human!

The bad news is that when our robot overlords come to oppress us, they’ll be able to tell how well they’re doing just by reading our facial expressions. The good news? Silly computer-vision-enhanced party games!

[Ricardo] wrote up a quickie demonstration, mostly powered by OpenCV and Microsoft’s Emotion API, that scores your ability to mimic emoticon faces. So when you get shown a devil-with-devilish-grin image, you’re supposed to make the same face convincingly enough to fool a neural network classifier. And hilarity ensues!
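The game loop boils down to: pick a target emotion, grab a frame of the player’s face, ask an emotion classifier for per-emotion confidences, and score the match. Here’s a rough sketch of that idea (not [Ricardo]’s code), with the cloud call reduced to a hypothetical get_emotion_scores() helper, since the Emotion API he used has since been retired.

```python
# Rough sketch of the game loop: pick a target emotion, capture the player's
# face, and score how strongly a classifier rates that emotion in the frame.
# get_emotion_scores() is a hypothetical stand-in for the cloud API call.
import random

import cv2

TARGETS = ["happiness", "anger", "surprise", "fear", "sadness"]

def get_emotion_scores(frame):
    """Placeholder: return a dict of {emotion: confidence} for the face."""
    raise NotImplementedError

cam = cv2.VideoCapture(0)
target = random.choice(TARGETS)
print(f"Simon says: look like {target}!")

ok, frame = cam.read()
if ok:
    scores = get_emotion_scores(frame)
    print(f"Your {target} score: {scores.get(target, 0.0):.0%}")
cam.release()
```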

Continue reading “Simon Says Smile, Human!”

EmpathyBot recognizing emotion

Raspberry Pi Robot That Reads Your Emotions

It’s getting easier and easier to add machine intelligence to your hacks, even to the point where you sometimes don’t have to install any special software. In this case [Dexter Industries] has added the ability to read human emotions to their EmpathyBot robot by making use of Google Cloud Vision.

Press a button on the robot and it moves forward until it’s a certain distance from an object. It then takes a picture and sends it off to Google Cloud Vision along with a request to do face detection. The response that Google returns is in JSON format and, if it finds a face, includes the likelihood of the face being happy, angry, sorrowful, or surprised. The robot parses that response and gives an appropriate canned speech using the text-to-speech software eSpeak, e.g. “You seem happy! Tell me why you are so happy!”
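The Vision API side of that is only a few lines with Google’s Python client. The snippet below is an approximation of the flow rather than [Dexter]’s exact code, with eSpeak invoked as an external command and credential setup omitted.

```python
# Approximation of the EmpathyBot flow: send a photo to Google Cloud Vision
# face detection, read the emotion likelihoods, and speak a canned response
# through eSpeak. Not [Dexter]'s exact code; credentials setup omitted.
import subprocess

from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("face.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

if response.face_annotations:
    face = response.face_annotations[0]
    likely = vision.Likelihood.LIKELY
    if face.joy_likelihood >= likely:
        line = "You seem happy! Tell me why you are so happy!"
    elif face.sorrow_likelihood >= likely:
        line = "You look sad. I hope your day gets better."
    elif face.surprise_likelihood >= likely:
        line = "You look surprised! What happened?"
    else:
        line = "I cannot quite read your expression."
    subprocess.run(["espeak", line])
```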

[Dexter] has made the source code available on GitHub. It’s written in Python and is easy to read for anyone with even a little programming experience. The video after the break gives a number of demonstrations, including some with non-human subjects.

Continue reading “Raspberry Pi Robot That Reads Your Emotions”

Robots Learning Facial Expressions

Researchers at UC San Diego have been working on a robot that learns facial expressions. Starting with a bunch of random movements of the face “muscles,” the robot is rewarded each time it generates something close to an existing expression. Iterating this way, it has slowly developed several recognizable expressions. We have a few questions. First, are we the only ones who see a crazy woman with a mustache in the picture above? Why is that? What makes [Einstein] look so effeminate in that picture? Secondly, what reward do you give a robot? You can actually see this guy in action in a video after the break.
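The actual learning machinery in the research is more sophisticated, but the gist described here, random tweaks that are kept whenever they score closer to a target expression, can be sketched as a simple hill climber. This is purely illustrative; the servo count, update rule, and scoring function are all placeholders.

```python
# Illustrative hill climber in the spirit of the experiment: jitter the
# facial servos at random and keep any change that scores closer to a
# target expression. expression_score() is a made-up placeholder reward.
import random

NUM_SERVOS = 30                 # placeholder count of facial actuators

def expression_score(servo_positions):
    """Placeholder reward: how closely the face matches a target expression."""
    raise NotImplementedError

servos = [0.5] * NUM_SERVOS     # normalised positions, 0..1
best = expression_score(servos)

for _ in range(10000):
    candidate = [min(1.0, max(0.0, p + random.uniform(-0.05, 0.05))) for p in servos]
    score = expression_score(candidate)
    if score > best:            # keep random tweaks that look more like the target
        servos, best = candidate, score
```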

Continue reading “Robots Learning Facial Expressions”