Dance For A Dollar With The YayTM

The YayTM is a device that records a person dancing and judges whether or not the dancing is "Good". If the YayTM likes the dance, it will dispense a dollar for the dancer's troubles. However, unless the dancer takes the time to read the fine print, they won't realize that their silly dance is being uploaded to YouTube for the whole world to see. Cobbled together from not much more than a PC and a webcam, the box uses facial recognition to track and rate the dancer.
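The original source wasn't posted at first (see the edit below), so as a rough idea of how the face-tracking half of such a box could work, here's a minimal sketch using OpenCV's stock Haar cascade. The movement-based "dance score" is purely our own guess at how the judging might be done, not [Zach]'s actual method:

```python
import cv2

# Sketch only: detect a face in the webcam feed and score "dancing" by how
# much the head moves between frames. The scoring rule is an assumption.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # the webcam inside the box
last_center = None
score = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        center = (x + w // 2, y + h // 2)
        if last_center is not None:
            # More head movement between frames -> "better" dancing (our guess)
            score += abs(center[0] - last_center[0]) + abs(center[1] - last_center[1])
        last_center = center
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("YayTM-ish", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
print("Dance score:", score)       # dispense a dollar above some threshold?
```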

The YayTM was made by [Zach Schwartz], a student at NYU, as a display piece for the school's Interactive Telecommunications Program. Unfortunately there aren't any schematics or source code, but to be honest, having one of these evil, embarrassing boxes around is probably enough. What song does the YayTM provide for dancing, you ask? Be sure to check it out here.

EDIT: [Zach] has followed up with an expanded writeup of the YayTM. Be sure to check out his new page with source code and more info. Thanks, [Zach]!

Head-up Uses Facial Recognition And Augmented Reality

Scouter is a facial recognition system and head-up display that [Christopher Mitchell] developed for his master's thesis. The wearable device combines the computing power of an eeePC 901 with a Vuzix VR920 wearable display and a Logitech Quickcam 9000. The camera is mounted face-forward on the wearable display like a third eye, and the live feed is patched through to the wearer. [Christopher's] software scans, identifies, and displays information about the people in the camera frame at six frames per second.
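[Christopher]'s own code isn't reproduced here, but the general recognize-and-overlay loop can be sketched with OpenCV's LBPH face recognizer (from opencv-contrib-python). The model file, labels, and names below are placeholders, and a real Scouter would render to the VR920 rather than a desktop window:

```python
import cv2

# Sketch of a recognize-and-annotate loop; not Scouter's actual implementation.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("faces.yml")                 # hypothetical pre-trained model
names = {0: "Alice", 1: "Bob"}               # label -> person, placeholder data

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                    # stand-in for the Quickcam 9000

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.2, 5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        # Lower distance means a closer match; 80 is an arbitrary cutoff
        name = names.get(label, "unknown") if distance < 80 else "unknown"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(frame, name, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
    cv2.imshow("scouter-sketch", frame)      # would go to the head-up display
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```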

We can't help but think of the Gargoyles in Snow Crash. This rendition isn't quite that good yet; there are several false positives in the test footage after the break, but there are more correct identifications than false ones. The fact that he's using inexpensive off-the-shelf hardware is promising, and this shouldn't be too hard to distill down to a cheap dedicated system.

Continue reading “Head-up Uses Facial Recognition And Augmented Reality”

Face Tracking With X10

If you are looking to do some face tracking and don't know where to start, this explanation of how to do it with X10 modules could be pretty helpful. Aside from having what some would consider the most annoying company website ever, X10 makes modular systems for home automation. X10 also refers to the industry standard for home automation, so just saying you did something "with X10" can get confusing. He is using the SDK to write custom code for the tracking, which you can download from the project page.
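We haven't dug into the X10 SDK calls ourselves, so the sketch below only covers the vision half: find a face, work out how far it sits from the center of the frame, and turn that into pan/tilt "nudge" commands. The send_pan and send_tilt functions are made-up placeholders for whatever the SDK actually exposes to move the camera:

```python
import cv2

def send_pan(step):   # placeholder for the real X10 SDK pan command
    print("pan", step)

def send_tilt(step):  # placeholder for the real X10 SDK tilt command
    print("tilt", step)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
DEADBAND = 40          # pixels of slop before we bother moving the camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.2, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Offset of the face from the center of the frame
        err_x = (x + w // 2) - frame.shape[1] // 2
        err_y = (y + h // 2) - frame.shape[0] // 2
        if abs(err_x) > DEADBAND:
            send_pan(1 if err_x > 0 else -1)
        if abs(err_y) > DEADBAND:
            send_tilt(1 if err_y > 0 else -1)
```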

[via HackedGadgets]

Robot Einstein Could Save Humans From Killbot Destruction

Earlier this year we saw the Einstein robot that is being developed to reproduce human facial expressions in robots. [David Hanson], the man in charge of this project, has given a TED talk on his work that includes a show-and-tell of his most recent progress. We've embedded the video after the break for your enjoyment.

The Einstein robot (head only in this video) shows off its ability to recognize and mimic the facial expressions of the person in front of it. There is also video of a Blade Runner-esque robot looking around a room, recognizing and remembering the faces of the people it sees. [David] makes a very interesting proclamation: he's trying to teach robots empathy. He feels that there is a mountain of R&D money going into robots that can kill, and not much for those that can sense human emotions. His hope is that if we can teach robots empathy, we might not be annihilated when they become smarter than us.

That's not such a bad idea. Any way you look at it, this talk is interesting, and we wish the five-minute offering were five times as long. But [Mr. Hanson's] facial hair alone is worth clicking through to see.

Continue reading “Robot Einstein Could Save Humans From Killbot Destruction”