Never Stare Down A Robot

There are a few things that have historically been difficult to make a robot do. Stairs, of course, are the obvious problem. But realistic blinking behavior is harder than you might expect. At first, having a robot blink might seem frivolous and simple, but according to Italian scientists, it is both more important and more difficult than it appears.

Blinking is a nonverbal cue when humans communicate. The post quotes the researchers:

While it is often assumed that blinking is just a reflexive physiological function associated with protective functions and ocular lubrication, it also serves an important role in reciprocal interaction.

Continue reading “Never Stare Down A Robot”

A robotic arm uses artificial muscles powered by water to lift a 7 kg dumbbell.

Taking A Stroll Down Uncanny Valley With The Artificial Muscle Robotic Arm

Wikipedia says, “The uncanny valley hypothesis predicts that an entity appearing almost human will risk eliciting cold, eerie feelings in viewers.” And yes, we have to admit that as incredible as it is, seeing [Automaton Robotics]’ hand and forearm move in almost human fashion is a bit on the disturbing side. Don’t just take our word for it; let yourself be fascinated and weirded out by the video below the break.

While the creators of the Artificial Muscles Robotic Arm are fairly quiet about how it works, perusing the [Automaton Robotics] YouTube channel does shed some light on the matter. The arm and hand are moved by artificial muscles, which are themselves brought to life by water pressurized to 130 PSI (9 bar). The muscles appear to be made from a watertight fiber weave, but those details are not provided. Bladders inside a flexible steel mesh, like finger traps?
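If those are McKibben-style braided muscles (purely our guess; [Automaton Robotics] hasn’t said), the pulling force scales with the internal pressure and the braid angle of the weave. Here is a minimal sketch of the classic Chou–Hannaford approximation in Python, using the 9 bar figure from the video and a made-up 20 mm muscle diameter:

```python
import math

def mckibben_force(pressure_pa, d0_m, theta_deg):
    """Approximate pulling force of a McKibben-style braided muscle
    (Chou-Hannaford model). d0_m is the diameter the muscle would reach
    at a 90-degree braid angle; theta_deg is the current braid angle
    measured from the muscle's long axis."""
    theta = math.radians(theta_deg)
    return (math.pi * d0_m**2 * pressure_pa / 4.0) * (3.0 * math.cos(theta)**2 - 1.0)

# Hypothetical numbers: 9 bar (about 130 PSI) of water in a 20 mm muscle
pressure = 9e5   # Pa
d0 = 0.020       # m
for angle in (25, 35, 45, 54.7):
    print(f"braid angle {angle:5.1f} deg -> {mckibben_force(pressure, d0, angle):7.1f} N")
```

Under this assumed model the force falls to zero near the 54.7 degree “magic angle,” which is why braided muscles can only contract by a modest fraction of their resting length.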

[Automaton Robotics]’ aim is to eventually create a humanoid robot using their artificial muscle technology. The demonstration shown is very impressive, as the hand has the strength to lift a 7 kg (15.4 lb) dumbbell even though some of its strongest artificial muscles have not yet been installed.

A few years ago we ran a piece on Artificial Muscles which mentions pneumatic artificial muscles that contract when air pressure is applied, and it appears that [Automaton Robotics] has employed the same method with water instead. What are your thoughts? Please let us know in the comments below. Also, thanks to [The Kilted Swede] for this great tip! Be sure to send in your own tips, too!

Continue reading “Taking A Stroll Down Uncanny Valley With The Artificial Muscle Robotic Arm”

Emotional Hazards That Lurk Far From The Uncanny Valley

A web search for “Uncanny Valley” will retrieve a lot of information about that discomfort we feel when an artificial creation is eerily lifelike. The phenomenon tells us a lot about both human psychology and the design challenges ahead. But what about the opposite, when machines are clearly machines? Are we in the clear then? It turns out the answer is “No,” as [Christine Sunu] explained at a Hackaday Los Angeles meetup. (Video also embedded below.)

When we build a robot, we know what’s inside the enclosure. But people who don’t know tend to extrapolate too much from the simple behavior they can see. As [Christine] says, people “anthropomorphize at the drop of a hat,” projecting emotions onto machines and feeling emotions in return. This happens even when machines are deliberately designed to be utilitarian. iRobot was surprised by how many Roomba owners gave their robot vacuums names and treated them as family members. A similar eruption of human empathy occurred in response to Boston Dynamics’ video footage demonstrating their robots staying upright despite being pushed around.

In the case of a Roomba, this kind of emotional power is relatively harmless. In the case of robots doing dangerous work in place of human beings, such attachment may keep them from doing the job they were designed for. Even more worrisome, the fact that this power exists means there is potential for abuse. To illustrate one such potential, [Christine] brought up the Amazon Echo. The cylindrical puck is clearly a machine and serves, in effect, as a point-of-sale terminal, yet people have started treating Alexa as their trusted home advisor. If Amazon were to start monetizing this trust, would users realize what’s happening? Would they care?

Continue reading “Emotional Hazards That Lurk Far From The Uncanny Valley”

Christine Sunu Proves The Effect Of Being Alive On Hardware Design

Modeling machines off of biological patterns is the dry definition of biomimicry. For most people, this means the structure of robots and how they move, but Christine Sunu makes the argument that we should be thinking a lot more about how biomimicry has the power to make us feel something. Her talk at the 2017 Hackaday Superconference looks at what makes robots more than cold metal automatons. There is great power in designing to complement natural emotional reactions in humans — to make machines that feel alive.

We live in a world that is being filled with robots and increasingly these are breaking out of the confines of industrial automation to take a place side by side with humans. The key to making this work is to make robots that are recognizable as machines, yet intuitively accepted as being lifelike. It’s the buy-in that these robots are more than appliances, and Christine has boiled down the keys to unlocking these emotional reactions.

Continue reading “Christine Sunu Proves The Effect Of Being Alive On Hardware Design”

Our Reactions To The Treatment Of Robots

Most of us have seen employees of Boston Dynamics kicking their robots, and many of us instinctively react with horror. More recently I’ve watched my own robots being petted, applauded for their achievements, and yes, even kicked.

Why do people react the way they do when mechanical creations are treated as if they were people, pets, or worse? There are some very interesting things to learn about ourselves when considering the treatment of robots as subhuman. But it’s equally interesting to consider the ramifications of treating them as human.

The Boston Dynamics Syndrome

Shown here are two snapshots of Boston Dynamics robots taken from their videos about Spot and Atlas. Why do scenes like this create the empathic reactions they do? Two possible reasons come to mind. One is that we anthropomorphize the human-shaped one, meaning we think of it as human. That’s easy to do since not only is it human-shaped, but the video also shows it carrying a box using human-like movements. The second snapshot perhaps evokes the strongest reactions in anyone who owns a dog, though a resemblance to any four-legged animal will usually do.

Is it wrong for Boston Dynamics, or anyone else, to treat robots in this way? Being an electronics and mechanical wizard, you might have an emotional reaction and then catch yourself with the reminder that these machines aren’t conscious and don’t feel emotional pain. But it may be wrong for one very good reason.

Continue reading “Our Reactions To The Treatment Of Robots”

How Do You Think This Quadcopter Feels?


You don’t speak the language of dogs and yet you can tell when one is angry, excited, or down in the dumps. How does that work, and can it be replicated by a robot? That’s the question which [Megha Sharma] set out to study as part of her graduate research at the University of Manitoba in Winnipeg.

The experiment starts by training the robot with a series of motion patterns meant to mimic emotion. How, you might ask? Apparently you hire an actor trained in Laban Movement, a method of describing and interpreting how the human body moves, so it’s no surprise the technique is in some actors’ arsenals. The training phase uses stationary cameras (kind of like those acrobatic quadcopter projects) to record the device as the actor moves it.

Phase two of the experiment involves playing back the recorded motion with the quadcopter under its own power. A human test subject watches each performance and is asked to describe how the quadcopter feels. The surprising thing is that subjects end up anthropomorphizing the inanimate device even further, making up small stories about what the thing actually wants.
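As a rough sketch of that playback phase (not [Megha Sharma]’s actual code; the function and numbers below are invented for illustration), the actor-recorded path could be resampled into evenly timed position setpoints for the quadcopter’s position controller:

```python
import numpy as np

def resample_trajectory(times_s, positions_xyz, rate_hz=50.0):
    """Resample an actor-recorded motion path into evenly spaced
    position setpoints that a quadcopter position controller can track.

    times_s       -- 1-D array of capture timestamps (seconds)
    positions_xyz -- N x 3 array of recorded x, y, z positions (meters)
    """
    t_uniform = np.arange(times_s[0], times_s[-1], 1.0 / rate_hz)
    setpoints = np.column_stack([
        np.interp(t_uniform, times_s, positions_xyz[:, axis])
        for axis in range(3)
    ])
    return t_uniform, setpoints

# Hypothetical capture: a slow, drooping "sad" descent over 4 seconds
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
p = np.array([[0.00, 0, 1.5], [0.10, 0, 1.3], [0.15, 0, 1.0],
              [0.18, 0, 0.8], [0.20, 0, 0.7]])
t_u, sp = resample_trajectory(t, p)
print(f"{len(sp)} setpoints at 50 Hz, final altitude {sp[-1, 2]:.2f} m")
```

A real rig would replay orientation too and stream the setpoints to the flight controller at a fixed rate, but even this much preserves the tempo and shape of the motion, which is what carries the “emotion” in the first place.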

It’s an interesting way to approach the problem of the uncanny valley in robotic projects.

Continue reading “How Do You Think This Quadcopter Feels?”

Android Skips Uncanny Valley – Fills In At The Office For You

For those who are unaware, androids are often judged by where they fall on the uncanny valley curve, a graph that maps human revulsion toward robots that closely resemble humans but are just a bit off (similar to how a corpse resembles a living person). This offering jumps right over that dip in the curve and takes its rightful place as a human stand-in. Well, except that you’re probably going to notice the limbless torso… but pay no attention to the man behind the curtain!

This is the result of research by the Geminoid Lab at Aalborg University. It is the twin of its creator, and in an effort to be as human as possible, its movements mimic those of a human operator via facial recognition. We’d bet that with some clever learning routines you could map out and index common mannerisms from the original person for later use with this body-snatcher-esque copy. Take a look at the clips after the break; we don’t think you’ll be creeped out at all.

Continue reading “Android Skips Uncanny Valley – Fills In At The Office For You”