Robots Learning Facial Expressions

Researchers at UC San Diego have been working on a robot that learns facial expressions. Starting with a bunch of random movements of the face “muscles”, the robot is rewarded each time it generates something close to an existing expression. It has iteratively developed several recognizable expressions. We have a few questions. First, are we the only ones who see a crazy woman with a mustache in the picture above? Why is that? What makes [Einstein] look so effeminate in that picture? Second, what reward do you give a robot? You can see this guy in action in a video after the break.
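The learning loop described above — twitch the face “muscles” at random, keep the twitch if the result looks closer to a target expression — can be sketched as a simple hill-climbing routine. Everything here (the servo count, the target vector, the distance-based reward) is an illustrative assumption, not the researchers’ actual code:

```python
import random

NUM_SERVOS = 8  # hypothetical number of facial "muscle" actuators

# Hypothetical target expression: desired position (0.0-1.0) for each servo
TARGET_SMILE = [0.9, 0.9, 0.2, 0.2, 0.7, 0.7, 0.5, 0.5]

def reward(pose, target):
    """Reward grows as the pose gets closer to the target expression."""
    error = sum((p - t) ** 2 for p, t in zip(pose, target))
    return -error  # less error -> bigger reward

def learn_expression(target, steps=5000, noise=0.05, seed=0):
    rng = random.Random(seed)
    pose = [rng.random() for _ in range(NUM_SERVOS)]  # start with a random face
    best = reward(pose, target)
    for _ in range(steps):
        # Randomly twitch every servo, clamped to its travel range
        candidate = [min(1.0, max(0.0, p + rng.gauss(0, noise))) for p in pose]
        r = reward(candidate, target)
        if r > best:  # keep the twitch only if it looks "more like" the target
            pose, best = candidate, r
    return pose

learned = learn_expression(TARGET_SMILE)
```

After a few thousand accepted twitches the pose sits very close to the target — which is exactly why the “reward” question matters: someone or something has to score each face.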

[youtube=http://www.youtube.com/watch?v=UBUtxfUY_w0]

30 thoughts on “Robots Learning Facial Expressions”

  1. Maybe it’s the orange cutoff on the side and the exposed chest area that many feminine clothing styles sport. If you put your hand over his chest, it looks like a masculine Einstein lol.

    As for the project itself, this is very interesting.

  2. 1. neckline is too slender for a man.
    2. hairstyle, hair density, and hairline are feminine.
    3. eyes are way too youthful and shiny for an older male.
    4. the outfit as noted by taylor.
    5. the jawline and eyebrows look like cosmetic surgery or other modifications associated with female appearance upkeep (tweezed brows?)

  3. i think everyone is missing the point here; it never learned anything, or at least i was not convinced it had learned anything new at all.

    walt disney has been doing this “trick” for many years with their innovative robotics teams. some way or another information was ALREADY provided about facial expressions, meaning the machine was just doing what it was told.

    like always.

  4. It looks like it’s wearing a Paula Deen wig.
    This is pretty interesting but I feel like there should be a more exact way of doing this.
    Has anything else been done with this?
    I can see a pretty neat application using a webcam to see a user’s face and einstein would copy it.

    Also, at 0:23, sad or high?

  5. I would have to agree with some that this is not actually learning anything. This reminds me of the whole “put monkeys in a room for so many hundred years and they will write Shakespeare” (to inaccurately paraphrase).
    This just shows how far we truly are from mimicking complex biological phenomena with an over-simplistic code counterpart. The year 2009!

  6. @_matt
    Idk, learning in itself is a very complex process:

    The program seems to work only by iterations, that at any random interval could be accepted as a plausible facial expression, resulting in a reward, and thus that face is saved.

    In real life, organisms (the more intelligent ones, anyway) don’t just learn by trial and error, and it is not all just guesswork; there are calculations, and things like boolean / fuzzy logic are applied to the situation, mostly in a relatively minimal action-observation-calculation-repeat loop.

    What I am saying is, you and I both know we did not learn how to smile by brute-forcing facial expressions. We observed with audio, visual, and other sources of input.

  7. I would hope that the people involved with identifying the different facial expressions don’t have Asperger’s or anything on the autistic spectrum. As someone with Asperger’s I find reading body language & subtle facial expressions difficult.

  8. who rewarded the robot for making the “stoned” look on the left of the picture montage at the top of the page?

    “what reward do you give a robot?”

    chips, obviously.

  9. it is technically an ai hack. by combining random actions with “rewards” (positive feedback, basically binary response…either this combination of actions is good or not) learning can be faked.
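The “binary response” scheme described here can be sketched in a few lines: generate random combinations of actions, ask a yes/no judge, and simply store whatever gets a yes. The `looks_like_smile` judge below is a stand-in assumption for the human rater, not anything from the actual project:

```python
import random

def looks_like_smile(pose):
    """Stand-in for a human judge: a yes/no verdict on whether the pose
    reads as a smile. Purely illustrative: 'smile' here means the first
    two actuators are raised and the next two are relaxed."""
    return pose[0] > 0.8 and pose[1] > 0.8 and pose[2] < 0.3 and pose[3] < 0.3

def faked_learning(trials=100000, seed=1):
    rng = random.Random(seed)
    saved = []
    for _ in range(trials):
        pose = [rng.random() for _ in range(4)]  # random combination of actions
        if looks_like_smile(pose):  # binary reward: this combination is good or not
            saved.append(pose)      # the "learned" expression is just stored
    return saved

smiles = faked_learning()
```

Nothing generalizes here: the system ends up with a lookup table of approved poses, which is the sense in which the commenter calls the learning “faked”.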

  10. “Hello, I am uncomfortably androgynous Mark Twain. Sometimes I feel more like a man, and sometimes more like a woman. Will you help me explore my sexuality?”, the robot said with a provocative expression.

  11. It’s not a hack/fake really. It is actually a fair approximation of how we work (just badly described)…

    Reinforcement learning usually approximates what goes on inside our brains when we learn – this is learning in exactly the same way as us, but many orders of magnitude simpler. As said above, it’s probably a fairly straightforward application of a neural network. Neural nets can implement fuzzy logic, reasoning, decisions, etc., so it’s fair to say it’s using fairly complex AI technology. It’s just a bit unimaginative.

    A more imaginative application of the tech has already been covered here:
    http://hackaday.com/2008/08/12/autonomous-helicopter-learns-autorotation/

  12. While I certainly can’t disagree that this is pretty typical for an AI project, the problem is that we learn to make facial expressions as a result of communicating emotions. There does not seem to be any correlation here between the facial expression and the internal state, only learning of expressions independent of anything. While that may or may not save some programming time, it would still require a human to come in and correlate particular expressions with some symbolic representation for communication; hardly psychologically plausible.
