Smart Kegerator Bills Based on Beer Consumption

Kegerator ownership is awesome, but it has its downsides. It's hard to keep track of who drank what without cans or bottles to count. [Phil] was looking for a good way to share beer fairly with his roommates and friends, and he has just completed the first iteration of his smart kegerator.

He has devised a system based on a Raspberry Pi. His software recognizes the face of the person pulling a beer and adds a charge to their tab based on the price of the keg and the volume of the pour. The system also keeps track of current and historic temperature and humidity values inside the kegerator, and everything is displayed on a Mimo 720S touch screen.

[Phil] has a flow meter on each keg to detect and monitor pouring, which triggers the Pi camera module to run the facial recognition. The walk-through found after the jump might be a bit confusing; at the time it was recorded, the unit was only capable of facial detection. [Phil] wrote the UI in Qt and C++ and used Python scripts for the flow interrupts. His plans for future iterations include weight sensors underneath the kegs, liquid probe thermometers for more accurate beer temperature readings, a NoIR Pi camera module for low-light conditions, and a really snazzy UI that you'll see on his build page.
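
The pulse-counting side of a build like this is pretty approachable. [Phil]'s actual scripts aren't reproduced here, so treat the following as a minimal sketch: the GPIO pin, the pulses-per-litre calibration, and the keg pricing are all placeholder values.

```python
# A minimal sketch of the flow-interrupt side, in the spirit of [Phil]'s
# Python scripts (his real code lives on the build page). The pin number,
# pulses-per-litre calibration, and keg pricing are placeholder values.
import time
import RPi.GPIO as GPIO

FLOW_PIN = 17               # GPIO pin wired to the flow meter's pulse output
PULSES_PER_LITRE = 450.0    # typical for a hall-effect sensor; calibrate!
KEG_PRICE = 110.00          # what the keg cost
KEG_VOLUME_L = 19.0         # e.g. a Cornelius keg; adjust to suit
pulse_count = 0

def on_pulse(channel):
    """Interrupt callback: fires on every rising edge from the flow meter."""
    global pulse_count
    pulse_count += 1

GPIO.setmode(GPIO.BCM)
GPIO.setup(FLOW_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(FLOW_PIN, GPIO.RISING, callback=on_pulse)

try:
    while True:
        pulse_count = 0
        time.sleep(2)                     # sample window
        if pulse_count:                   # flow detected: someone is pouring
            litres = pulse_count / PULSES_PER_LITRE
            charge = litres * (KEG_PRICE / KEG_VOLUME_L)
            print(f"Poured {litres:.3f} L -> charge ${charge:.2f}")
            # this is where the Pi camera would fire for facial recognition
finally:
    GPIO.cleanup()
```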

If you don’t have a Pi, here’s an Arduino-fied kegerator that reports temperature and controls beer cooling.


Facial recognition software can tell when you’re frustrated with Xbox Live

Most of us have been faced with the anguish of being shot in the head repeatedly by 12-year-olds. There are also the times when we're overjoyed by defeating the Mother Brain and making it out of the caverns of Zebes. But if we wanted to scientifically quantify how happy, sad, or angry we are while playing video games, we wouldn't know where to begin. [Dale] came up with a very interesting way to gauge someone's state of mind while either playing Xbox or watching TV.

To get a measure of how happy or sad he is, [Dale] put a webcam underneath his TV and pointed it towards his couch. Every 15 seconds or so, the webcam snaps a picture and sends it off to the face.com API. After face.com sends back a blob of JSON containing information about all the faces detected in the photo, a short Python script plots it on a graph.
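
The loop itself is only a few lines of Python. Note that face.com's API has since been shut down, so the endpoint, key, and JSON field names in this sketch are placeholders; swap in whatever face-analysis service you can get your hands on.

```python
# A rough sketch of the capture-and-analyze loop [Dale] describes. face.com's
# API no longer exists, so the endpoint, key, and response schema below are
# placeholders for whatever face-analysis service you substitute.
import time
import cv2                      # pip install opencv-python
import requests
import matplotlib.pyplot as plt

API_URL = "https://example.com/faces/detect"   # hypothetical endpoint
API_KEY = "YOUR_KEY_HERE"

cam = cv2.VideoCapture(0)
mood_log = []                   # (timestamp, smile score) samples

for _ in range(240):            # about an hour at one sample every 15 s
    ok, frame = cam.read()
    if not ok:
        break
    ok, jpeg = cv2.imencode(".jpg", frame)
    resp = requests.post(API_URL, params={"api_key": API_KEY},
                         files={"image": jpeg.tobytes()})
    for face in resp.json().get("faces", []):      # placeholder schema
        mood_log.append((time.time(), face.get("smiling", 0)))
    time.sleep(15)              # one snapshot every 15 seconds or so

# plot mood over time, much like [Dale]'s short Python script does
if mood_log:
    times, smiles = zip(*mood_log)
    plt.plot(times, smiles)
    plt.xlabel("time (s)")
    plt.ylabel("smile score")
    plt.show()
```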

[Dale] admits he’s not entirely scientific with this project; the low resolution of the webcam, coupled with images being captured every 15 seconds means he runs into the limitations of his hardware very quickly. Also, there’s the confound of [Dale] paying attention to something else in the room – like his kids – rather than the TV. Still, it’s an interesting use of hardware and software that would be loved by a market researcher or QA designer.

Get digital plastic surgery thanks to openFrameworks and some addons

[Kyle McDonald] is trying out a new look, at least in the digital world, with the help of some openFrameworks video plugins. He's working with [Arturo Castro] to make real-time facial substitution as realistic as possible. You can see that [Arturo's] own video takes a different approach to the shading and color of the facial alterations, which makes them a bit less realistic than what [Kyle] was able to accomplish (see that clip after the break).

The setup depends on some facial tracking software developed by [Jason Saragih]. That package is wrapped in ofxFaceTracker (already linked at the top of this article), which makes it play nicely with openFrameworks. From there, it's just a matter of image processing. If you think you're up to the challenge, grab your own copies of the source code and get to work. We're shocked by how real this looks, even when [Kyle] grabs his cheeks and stretches them out. If someone can fix some of the artifacts around the edges of the sampled faces, this would be ready to use for video conferencing.
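
To get a feel for the technique without diving into openFrameworks, here's a heavily simplified Python/OpenCV sketch of the substitution step. This is not [Kyle] and [Arturo]'s code: ofxFaceTracker warps the source face onto dozens of tracked landmarks, while this version just pastes a resized face over a Haar-cascade detection and leans on Poisson blending to hide the seams.

```python
# NOT the ofxFaceTracker pipeline -- a much simplified illustration of face
# substitution: detect a face in the webcam frame and blend a pre-captured
# source face over it. Real landmark tracking warps the source to match.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
source_face = cv2.imread("source_face.jpg")   # the face to paste in

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        patch = cv2.resize(source_face, (w, h))
        mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)
        center = (x + w // 2, y + h // 2)
        # Poisson blending hides the seams, much like the shading tweaks
        # that make [Kyle]'s version look more convincing than [Arturo]'s
        frame = cv2.seamlessClone(patch, frame, mask, center, cv2.NORMAL_CLONE)
    cv2.imshow("substitution", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```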

It kind of makes us think of technology seen in The Running Man.


Android skips uncanny valley – fills in at the office for you

For those who are unaware, androids are often judged by where they fall on the uncanny valley curve: a graph that maps human revulsion against robots that closely resemble humans but are just a bit off (similar to the way a corpse resembles a living person). This offering jumps right over that dip of the curve and takes its rightful place as a human stand-in. Well, except that you're probably going to notice the limbless torso… but pay no attention to the man behind the curtain!

This is the result of research by the Geminoid Lab at Aalborg University. The android is a twin of its creator, and in an effort to make it as human as possible, its movements mimic those of a human operator, captured through facial recognition. We'd bet that with some clever learning routines you could map out and index the original person's common mannerisms for later use with this body-snatcher-esque copy. Take a look at the clips after the break; we don't think you'll be creeped out at all.
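
The operator-tracking half of that idea is easy to prototype at home. The Geminoid's actual control interface isn't public, so this speculative Python sketch just estimates the operator's head roll from webcam eye detections and prints the angle a neck servo would mirror.

```python
# Speculative sketch (not the Geminoid Lab code): estimate the operator's
# head roll from eye positions and print the angle an android's neck servo
# could mirror. The cascade files ship with OpenCV.
import math
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) >= 2:
            # centres of the two largest detections, ordered left-to-right
            e1, e2 = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
            c1 = (e1[0] + e1[2] / 2, e1[1] + e1[3] / 2)
            c2 = (e2[0] + e2[2] / 2, e2[1] + e2[3] / 2)
            if c1[0] > c2[0]:
                c1, c2 = c2, c1
            roll = math.degrees(math.atan2(c2[1] - c1[1], c2[0] - c1[0]))
            print(f"head roll {roll:+.1f} deg -> neck servo")  # stub output
```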


Dance for a Dollar with the YayTM

The YayTM is a device that records a person dancing and judges whether or not the dancing is “Good”. If the YayTM likes the dance, it will dispense a dollar for the dancer's troubles. However, unless the dancer takes the time to read the fine print, they won't realize that their silly dance is being uploaded to YouTube for the whole world to see. Cobbled together with not much more than a PC and a webcam, the box uses facial recognition to track and rate the dancer.
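
Since the scoring logic was a mystery when this was written (see the edit below), here's one guess at how a webcam-and-PC box could rate a dance in Python: track the dancer's face frame-to-frame and reward movement. The threshold is completely arbitrary.

```python
# A guess at the "rate the dance" step, not [Zach]'s code: track the
# dancer's face across frames and score how much it moves around.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)
last_center, energy, frames = None, 0.0, 0

while frames < 300:    # judge roughly ten seconds of dancing at 30 fps
    ok, frame = cam.read()
    if not ok:
        break
    faces = cascade.detectMultiScale(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        center = (x + w / 2, y + h / 2)
        if last_center:
            energy += (abs(center[0] - last_center[0])
                       + abs(center[1] - last_center[1]))
        last_center = center
    frames += 1

# the cutoff is arbitrary; tune it until the dollar feels earned
print("Good dance! Dispensing $1" if energy > 5000 else "Try harder")
```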

The YayTM was made by [Zach Schwartz], a student at NYU, as a display piece for the school's Interactive Telecommunications Program. Unfortunately there aren't any schematics or source code, but to be honest, having one of these evil embarrassing boxes around is probably enough. What song does the YayTM provide for dancing, you ask? Well, be sure to check it out here.

EDIT: [Zach] has followed up with an expanded writeup of the YayTM. Be sure to check out his new page with source code and more info. Thanks [Zach]!

Head-up display uses facial recognition and augmented reality

Scouter is a facial recognition system and head-up display that [Christopher Mitchell] developed for his Master's thesis. The wearable device combines the computing power of an Eee PC 901 with a Vuzix VR920 wearable display and a Logitech QuickCam 9000. The camera is mounted face-forward on the wearable display like a third eye, and the live feed is patched through to the wearer. [Christopher's] software scans, identifies, and displays information about the people in the camera frame at six frames per second.
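
[Christopher]'s software isn't published here, but the basic loop is easy to sketch in Python with OpenCV: detect faces, run them past a pre-trained recognizer, and paint labels over the live feed that would be piped to the VR920. The model file and name table below are assumptions.

```python
# Not [Christopher]'s actual software -- a minimal sketch of the same loop.
# Assumes a recognizer already trained on labelled faces; cv2.face requires
# the opencv-contrib-python package.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trained_faces.yml")          # pre-trained model (assumption)
names = {0: "Alice", 1: "Bob"}                # label -> name lookup

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        # rough distance cutoff; too loose and you get false positives,
        # much like the test footage
        name = names.get(label, "?") if distance < 80 else "unknown"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
        cv2.putText(frame, name, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
    cv2.imshow("scouter", frame)              # stands in for the HUD feed
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```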

We can’t help but think of the Gargoyles in Snow Crash. This rendition isn’t quite that good yet, there’s several false positives in the test footage after the break. But there are more correct identifications than false ones. The fact that he’s using inexpensive off-the-shelf hardware is promising. This shouldn’t been too hard to distill down to an inexpensive dedicated system.


Face tracking with X10

If you are looking to do some face tracking and don't know where to start, this explanation of how to do it with X10 modules could be pretty helpful. Aside from having what some could consider the most annoying company website ever, X10 makes modular systems for home automation. X10 also refers to the industry standard for home automation, so sometimes just saying you did something with X10 can get confusing. He is using the SDK to write custom code for the tracking, which you can download from the project page.
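
The X10 SDK itself is Windows-only and its calls aren't reproduced here, but the tracking half is straightforward. This hedged Python sketch finds the largest face, decides which way a pan/tilt camera should be nudged, and leaves the actual X10 command as a stub.

```python
# Sketch of the tracking half only; the real project drives the camera
# through the X10 SDK, which is stubbed out here.
import cv2

def send_x10_command(direction):
    # placeholder for the X10 SDK call that actually moves the camera
    print("X10 camera:", direction)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)
DEADBAND = 40    # pixels of slack before we bother moving the camera

while True:
    ok, frame = cam.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    faces = cascade.detectMultiScale(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
    if len(faces):
        x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        dx = (x + fw // 2) - w // 2      # horizontal offset from centre
        dy = (y + fh // 2) - h // 2      # vertical offset from centre
        if dx > DEADBAND:
            send_x10_command("right")
        elif dx < -DEADBAND:
            send_x10_command("left")
        if dy > DEADBAND:
            send_x10_command("down")
        elif dy < -DEADBAND:
            send_x10_command("up")
```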

[via HackedGadgets]
