Finger recognition on the Kinect

The Kinect is awesome, but if you want to do anything at a higher resolution than detecting a person’s limbs, you’re out of luck. [Chris McCormick] over at CogniMem has a great solution to this problem: use a neural network on a chip to recognize fingers with hardware already connected to your XBox.

The build uses the very cool CogniMem CM1K neural network on a chip, trained to tell the difference between counting from one to four on a single hand, as well as an ‘a-okay’ sign, a Vulcan salute (shown above), and rocking out at a [Dio] concert. As [Chris] shows us in the video, these finger gestures can be used to draw on a screen and move objects using only an open palm and closed fist; not too far off from the Minority Report and Iron Man UIs.
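Under the hood the CM1K is essentially a bank of nearest-prototype neurons: each one stores a trained feature vector plus an ‘influence field’ radius, and an input fires whichever neuron is closest within that radius. Here is a minimal sketch of that matching step, with made-up finger-count feature vectors (real features would be derived from the Kinect depth image):

```python
# Nearest-prototype matching in the style of the CM1K's recognition
# step. The 5-element "finger" feature vectors below are invented for
# illustration; real ones would come from Kinect depth data.

# Hypothetical trained prototypes: (feature vector, gesture label)
prototypes = [
    ((10, 0, 0, 0, 0), "one"),
    ((10, 10, 0, 0, 0), "two"),
    ((10, 10, 10, 0, 0), "three"),
    ((10, 10, 10, 10, 0), "four"),
]

def classify(features, max_dist=15):
    """Return the label of the closest prototype within max_dist
    (the neuron's 'influence field'), or None if nothing fires."""
    best_label, best_dist = None, max_dist + 1
    for proto, label in prototypes:
        # L1 (Manhattan) distance, one of the metrics the chip supports
        dist = sum(abs(a - b) for a, b in zip(features, proto))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

print(classify((9, 11, 1, 0, 0)))  # noisy input, closest to "two"
```

An input that lands outside every influence field returns None, which is how this scheme rejects gestures it was never trained on.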

If you’d like to duplicate this build, we found the CM1K neural network chip available here for a bit more than we’d be willing to pay. A neural net on a chip is an exceedingly cool device, but it looks like this build will have to wait for the Kinect 2 to make it down to the consumer and hobbyist arena.

You can check out the videos of Kinect finger recognition in action after the break with World of Goo and Google Maps.


  1. jan says:

    Hmm, apart from being an advert for the chip (which is pretty much expensive “unobtanium” anyway, “call for pricing” being a bad sign), the same application can be done in software. A neural net with 1024 neurons is not that computationally expensive; a decent microcontroller can do it as well.

    The dedicated chip is perhaps a good solution for high-speed processing when tied to something like a high-end FPGA (but then why not use the FPGA for it?), but it is enormous overkill for something like four gestures with the Kinect …
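To put rough numbers on that claim: matching one 256-byte input against the CM1K's full capacity of 1024 prototypes is 1024 × 256 = 262,144 subtract/accumulate steps per frame. A brute-force sketch with synthetic data (pure Python here; on a microcontroller the same inner loop would be a few lines of fixed-point C):

```python
# Brute-force L1 nearest-prototype scan at the CM1K's full capacity:
# 1024 neurons x 256-byte vectors = 262,144 subtract/accumulates.
# Prototype contents are random placeholders for illustration.
import random

random.seed(0)
N_NEURONS, VEC_LEN = 1024, 256
prototypes = [[random.randrange(256) for _ in range(VEC_LEN)]
              for _ in range(N_NEURONS)]

def best_match(vec):
    """Return (index, distance) of the closest stored prototype."""
    best_i, best_d = -1, float("inf")
    for i, proto in enumerate(prototypes):
        d = sum(abs(a - b) for a, b in zip(vec, proto))
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

# An exact copy of prototype 7 should match itself at distance 0
print(best_match(prototypes[7]))
```

At 30 frames per second that is roughly 8 million operations per second: heavy for an 8-bit part, but well within reach of a Cortex-M class microcontroller, let alone the PC already driving the Kinect.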

  2. Isaac says:

    If you’d checked the “available here” link, you would have seen that the chip is ~$150.

    Let’s see your implementation using a “decent microcontroller”.

    • The mcu formerly known as 1802 says:

      Dedicated hardware has utility, and this basically equates to speed. However, the key is the training and classification steps, with all the usual accuracy/specificity/false-pos/neg, etc. issues.

      Calling these weighted gates neurons is OK, I guess.

      Old school cats can think of this as a monster comparator with up to N outputs matching one or more results. Thus “state” is replaced with “weight” and obviously you could tune the weights.

      I read the manual – it’s a nice bit of kit, especially since it does video decode on the chip. I am impressed!

      But again, the secret is in the training. If you don’t mind wasting the $$ and time to build a proper training program, you could basically churn out your own version of this using a video frame store and some dedicated memory – and now your production costs drop from $150 to $35-$50 per unit, at the expense of slightly larger size.

      After all, it’s basically a giant PROM. This topic is fascinating, and it would be fun to play with an eval board. This is one of the most interesting chips ever featured on HAD.
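On that training point: the rule this family of nearest-prototype classifiers typically uses is a simple “commit or shrink” scheme (restricted Coulomb energy). If no neuron of the correct category fires on a labeled example, commit a new neuron; if a wrong-category neuron fires, shrink its influence field until it no longer does. A hedged sketch, with illustrative constants and vectors rather than the CM1K's exact firmware behavior:

```python
# "Commit or shrink" (RCE-style) training for a nearest-prototype
# classifier. MAX_IF and the 2-D training vectors are made up.

MAX_IF = 100  # initial influence field for a newly committed neuron
neurons = []  # each entry: [prototype, label, influence_field]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def learn(vec, label):
    fired_correct = False
    for neuron in neurons:
        proto, lab, field = neuron
        d = l1(vec, proto)
        if d < field:
            if lab == label:
                fired_correct = True
            else:
                neuron[2] = d  # shrink the wrongly firing neuron
    if not fired_correct:
        neurons.append([list(vec), label, MAX_IF])  # commit a new one

def classify(vec):
    hits = [(l1(vec, p), lab) for p, lab, field in neurons
            if l1(vec, p) < field]
    return min(hits)[1] if hits else None

learn((0, 0), "fist")
learn((40, 40), "palm")  # shrinks the "fist" field, commits "palm"
print(classify((2, 1)), classify((38, 42)))  # → fist palm
```

Each new example either refines a decision boundary or adds capacity, which is why a good training set matters more than the silicon it runs on.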

  3. Willaim says:

    Well, considering they could have had tech like the Kinect back in the late 90s, I guess the Kinect 2 with these features will be out around 2030 or so.
    Hey, maybe we will all be too busy with the VR we were promised back in the 90s to even worry about it.

  4. Isaac says:

    Anyone else feel exhausted just watching this? :P

    Cool nonetheless though, love seeing more and more done with the Kinect.

  5. nebulous says:

    One step closer to an ARI interface from Heavy Rain?
