Hackit: Leap Motion's new motion sensor

The big news circulating this morning is of the Leap Motion sensor that will be hitting the market soon. They claim that their sensor is ~100× more accurate than anything else on the market right now. Check out the video to see what they mean (there’s another at the link). This is pretty impressive. You can see the accuracy as they use a stylus to write things. If you’ve played with the Kinect, you know that it is nowhere near this tight. Of course, the Kinect is scanning a massive space compared to the work area that this new sensor works in. The response time looks very impressive as well; motions seem to be perfectly in sync with the input. We’re excited to play with one when we get a chance.

So, why do we care as hackers? Well, we always care when a new toy arrives. That alone should be good enough. However, what we really like is the price tag. This thing is $69. That is a great thing to hear. At roughly half the cost of a Kinect, this is getting into a new market. As these prices drop, we might start to see motion input used as it really should be: a supplement to your other input devices. Undoubtedly, someone won’t actually read this article and one of the comments will be “your arms will get tired doing everything by waving your hands”. Yep, your arms would get tired. With the cost of these devices being rather high, people tend to think of them as being the primary input device. As the prices drop (and the size as well), we could start adding these things to our laptops and keyboards. Sometimes you actually do want to wave your hand at the screen, when an application can utilize that naturally. Then you go right back to the keyboard/mouse when that fits. If these got cheap enough, we could see them pop up in vending machines, making them ~100 times more sanitary!

Like everyone else, we really want to know how these work. We can see several demos of it in action in the videos. We’re familiar with common methods of doing this kind of thing. At one point, there’s a hand visualization that looks like it might be a very tightly packed point cloud (IR array? those points do jitter!). Then again, that could just be a fun little graphical representation. We can’t wait to see, so if any of you get your hands on one of the developer models, let us know!

Comments

  1. wardy says:

    $69? Earth dollars?? Wow.

    Is this thing optimised for human hand detection or can it track random objects like walls, rocks…

    Could this be used to improve collision avoidance and orientation stability in quadcopters for instance?

  2. Gordon says:

    Looks amazing. I’m not sure I believe the price point + capability but I’d love to be proved wrong.

    About the arm ache – I’m not sure it’s a bad thing. I can’t believe I’m the only person who would like to feel physically (as well as mentally) tired after a day of coding.

    I spend 10 hours a day basically motionless, and then have to find a way to do something physical in my free time – if I could combine the two it’d be a dream come true!

  3. Stephen says:

    I have a funny feeling this doesn’t work from any useful distance like most people here will hope.

  4. anglophony says:

    Could multiple units co-operate on the same ‘puter?

  5. Zee says:

    This might be a scam. A 0.01mm resolution seems a bit hard to believe.

  6. Drake says:

    Our company is signed up for 4 just because it looks sweet

  7. Zizer says:

    I pre-ordered one… for $70 it’s worth trying out.

  8. Funguseater says:

    Since these only work in a 2×2 foot area they may be legit, at least I hope they are. Although I may wait and order after release.

    • Kris Lee says:

      The work space is a 3D cube of 4 cubic feet. That is something like 1.59 x 1.59 x 1.59 feet, or roughly 48.4 x 48.4 x 48.4 cm.

      I did a few tests.

      I tried moving one hand in such a space; that is fine. Then I tried with two hands, and they did not fit. Sweeps with two hands take more space.

      I also see that in the video they only use one hand.

      But with two-hand support and a transparent 3D monitor, one can imagine an Iron Man style user interface.
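Kris Lee's side-length figure above is easy to sanity-check with a couple of lines of Python (a quick sketch; the 4-cubic-foot workspace is the figure quoted in the comments, not an official spec):

```python
# Side length of a cube with a 4 cubic foot volume,
# as quoted for the Leap's interaction space.
volume_ft3 = 4.0
side_ft = volume_ft3 ** (1.0 / 3.0)   # cube root
side_cm = side_ft * 30.48             # 1 foot = 30.48 cm

print(f"side = {side_ft:.2f} ft = {side_cm:.1f} cm")
```

The cube root of 4 is about 1.59 feet, or roughly 48.4 cm per side.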

  9. Destate9 says:

    Alright, it’s official, we live in the future now.

  10. Taylor says:

    Just pre-ordered one. $70 is worth it. Not to mention, all the people that don’t pre-order will be willing to pay more than $70 once the pre-order people get them. I’ll be selling mine on eBay for double.

  11. qwerty says:

    The preorder page takes ages to load, bummer!

    Anyway, games aside, this is also going to be a game changer for disabled people if it could detect small changes in the user’s facial expression.

  12. raster says:

    I’m going to say “your arms will get tired doing everything by waving your hands” only because on their web site it says: “Say goodbye to your mouse and keyboard.”

    I can see great uses for this device, but if they are positioning it as a full replacement for a keyboard and mouse, I don’t buy it.

  13. non says:

    What exactly is The Leap?
    The Leap is a small iPod sized USB peripheral that creates a 3D interaction space of 8 cubic feet to precisely interact with and control software on your laptop or desktop computer. It’s like being able to reach into the computer and pull out information as easily as reaching into a cookie jar.

    http://live.leapmotion.com/about/

  14. Yianni P says:

    Point Screen is the new touch screen!
    Once this works from a decent distance I can have a hi-def TV that acts like a touch screen from my couch.

  15. BadHaddy says:

    I’m not preordering. Their SSL certificate is a generic GoDaddy one: just a few bucks, no authentication. They have no privacy policy, no contact-us page. Their ‘preorder form’ is an automatically generated page from formstack.com, and who knows how secure the data ACTUALLY is. Their partner is ‘zazuba technology’, which is a generic WordPress job using the DEFAULT TEMPLATE and some spammy RSS feed.

    Not liking this at all.

  16. decto says:

    I’m calling fake. Am I the only one who doesn’t believe this promo movie? No actual device is shown while it’s being used. The 0.01 mm accuracy seems unbelievable, and I found this while stepping through the frames: http://dl.dropbox.com/u/1177353/Fake%20Leap.PNG The 3D point cloud and the hand don’t even match.

  17. Carlos anzola says:

    this technology is old.

    My project is from 4 years ago:

  18. nes says:

    Just a stab in the dark here as there’s no detail of the sensor apparent, but I guess at that price the principle of operation is more like PS3 Eye than the Kinect.

    If I were doing this, I would illuminate the hands with an IR light. In the videos, the fingers are always fairly close to the screen. Fingertips would appear to the camera as bright dots. Motion tracking should get you the sub millimetre precision with a reasonable camera resolution.

    Depth perception could be as simple as comparing the relative brightness of the dots it sees – the further away, the dimmer they appear. To filter ambient light, the IR source could be modulated or simply switched off periodically for the duration of a frame. To get the fluidity of motion seen above you would need a camera capable of a pretty high frame rate, just like the device in Nintendo’s Wiimote.

    So that’s my guess: high speed medium resolution IR camera together with a bright IR LED which is being modulated in sync with the camera. If they were being really cheap, the bulk of the DSP work might even be being done on the host.
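nes's bright-dot idea can be sketched in a few lines: threshold an IR frame, then take an intensity-weighted centroid of each blob, which yields a position far finer than one pixel. This is purely illustrative of the technique nes describes (the frame below is a synthetic toy blob), not a claim about how the Leap actually works:

```python
import math

# Sub-pixel localization of a bright IR dot via an
# intensity-weighted centroid (the trick nes describes).
def centroid(frame, threshold):
    """Return the intensity-weighted (x, y) centre of pixels above threshold."""
    total = cx = cy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > threshold:
                total += v
                cx += x * v
                cy += y * v
    return (cx / total, cy / total)

# Synthetic frame: a Gaussian "fingertip" blob centred at (4.3, 2.7),
# i.e. deliberately between pixel centres.
frame = [[math.exp(-((x - 4.3) ** 2 + (y - 2.7) ** 2)) for x in range(10)]
         for y in range(6)]

x, y = centroid(frame, threshold=0.05)
print(round(x, 2), round(y, 2))  # recovers the centre to a fraction of a pixel
```

The point is that precision well below the pixel pitch is plausible for a bright, well-isolated dot, which is why a "medium resolution" sensor isn't automatically disqualifying.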

    • Nils motpol says:

      What do you mean, “medium resolution”? In order to distinguish 100 points per mm you would need around 900 megapixels for every square foot. And that’s just for regular 2d. Of course it’s fake.
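Nils motpol's figure checks out as a raw-resolution argument (the arithmetic below is just that back-of-envelope estimate, nothing more):

```python
# Raw pixel count needed to natively resolve 100 points/mm
# over one square foot (Nils motpol's back-of-envelope figure).
points_per_mm = 100                       # from the 1/100 mm claim
mm_per_foot = 304.8
side_points = points_per_mm * mm_per_foot  # 30,480 samples per side
total = side_points ** 2
print(f"{total / 1e6:.0f} megapixels")     # ~929 MP, i.e. "around 900"
```

That assumes each point must land on its own pixel; interpolated tracking, as discussed below, sidesteps the requirement.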

      • nes says:

        I meant that with clever motion tracking they could possibly infer the position of the fingers down to that precision with a sensor of only a few megapixels. (I have actually worked on something similar in my day job.)

        I am leaning more towards smoke n’ mirrors though after watching the CNET video. In it they are quite clearly plotting points on the screen corresponding to the back of the hand, which wouldn’t be visible to a sensor flat on the desk. It’s a shame, but again I have been involved in the same sort of shenanigans when trying to bag some VC. :-/

      • Nils says:

        Ok, but how would that motion tracking actually work?

  19. wulfderay says:

    I tried to get a sample as a developer, but they didn’t even ask for a postal code/zip code or a province/state. Not only that, they state that it’s USB, but all the pictures show it without wires or any sort of connector. I think it’s a clever way to bilk people. Betcha it gets revealed as a scam in less than a week.

  20. Diarmuid says:

    The website demos are rubbish, but this link http://www.tech-stew.com/post/2012/05/21/Leap-Motion-3D-control-system-100-times-more-accurate-than-kinect-(video).aspx does look legit. I bet it will turn out to be identical technology to the Kinect with a wider lens and a shorter focus. The new Kinect has a shorter throw so this could probably do the same.

  21. rtsd says:

    As much as I would like this to be true, I think this is fake for a couple of reasons.

    Problem 1: Electrical Implementation

    Their claims… are not implementable in reality. Let’s give them the benefit of the doubt: when they say 4 cubic meters, they mean a range of 1x1x4, so they really only need to sense a 1 meter by 1 meter grid. We’ll also assume that the 1/100 mm precision is only available near the aperture, and that for most use it can only get about ~1/10 mm of precision. To be in real time it has to update at 24 fps, and it is able to encode the depth it detects in 4 bytes. Also, we’ll assume that by some miracle it is able to do this with only a single camera. Even under these very favorable conditions, the device must be able to process
    10000 px × 10000 px × 4 bytes/px × 8 bits/byte × 24 fps = 76.8 Gbit/s (about the maximum bandwidth of DDR2 in peak conditions). If it has two cameras, the needed bandwidth is doubled, and if it can track color (which is needed for some of their demos) the needed bandwidth is tripled, which puts it far out of reach of the power budget for any peripheral using current technology, let alone the $69 cost.
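rtsd's bandwidth figure is straightforward to reproduce (same assumptions as the comment: a single 10,000 × 10,000 sensor, 4 bytes of depth per pixel, 24 fps):

```python
# Reproduce rtsd's raw-bandwidth estimate.
width = height = 10_000       # pixels: 1 m sensed at 1/10 mm precision
bytes_per_pixel = 4           # assumed depth encoding
fps = 24                      # "real time"

bits_per_second = width * height * bytes_per_pixel * 8 * fps
print(f"{bits_per_second / 1e9:.1f} Gbit/s")   # 76.8 Gbit/s, as stated
```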

    Problem 2: Physical implementation

    Just think about the placement of the device. Now think about how large the field of view would have to be to even be able to capture a moving object that close.

    Problem 3: Broken Physics

    In the demos they place the device on the table, so it is facing upward, looking at the bottom of the object. Yet somehow it is able to create a full eggshell model of the hand’s front and back, and is able to track an object that is being blocked by another.

    Problem 4: Incorrect Projection

    Go watch the video from 0:34 on; from there you can see the “detected hand” point cloud. However, the perspective and angle on it are from the point of view of the viewer, not the actual device, with it frequently “detecting” the top of the hand and losing samples from the side of the hand actually facing the device.

    Problem 5: No history

    None of the people from this company have any identifiable history whatsoever. Neither does the company. In fact the domain name was registered only about a month ago. The company doesn’t exist. Also they have only bothered to make one blog post with no commenting allowed… Hmm.

    Problem 6: The actual device
    The device is described as using USB… however there is not a single demo, or example of it with a cord anywhere. The device has no visible ports. It also does not have any shown means of assembly. It also curiously lacks a panel for the camera to view through.

    • Nils says:

      Very good point about broken physics, strange I didn’t consider that. One explanation could be that they have a model of a couple of hands so that they are not actually detecting the upper part of the hands, rather inferring them from the position of the palms and fingers.

      I’m just playing the devil’s advocate here; I don’t think this technology is real.

    • Mythgarr says:

      Funny – why would you assume this is an optical device? Ultrasound seems more likely. Sonar is a very mature technology that scales remarkably well. Also, that means they’re not dealing with the 10k x 10k pixel cloud that you assert – instead, they are dealing with the sparse point set detected by the ultrasound array.

      I’d love to think this is real, but since it seems to have come out of nowhere with little background information I believe it probably is a hoax.

    • Chris says:

      I’ve watched the point cloud portions of the demos probably two dozen times, because I’ve been wondering about this too.

      Look closely at the parts of the point cloud representing tops of hands (or chopsticks and arms), which the device obviously cannot see. Ignore the apparent randomness of individual points, and you’ll see that these regions are all represented by a series of perfectly elliptical arcs. In the case of the hand, this isn’t quite true to the real shape.

      With that in mind, inspect the bottom of the hand closely. It too is represented by perfectly elliptical arcs, omitting details like the cup of the palm, which should otherwise be visible as greater density in the point cloud when viewed at the correct angle.

      Together, the top and bottom form a complete symmetrical ellipse, looking more like a latex glove full of air than a real hand.

      Step through the video Diarmuid linked at 0:37, and you’ll even find a particular frame where a couple of ellipses (or elliptical groups) are separated enough from the rest of the point cloud to be individually seen.

      This certainly proves that the displayed point cloud is NOT a raw and unaltered point cloud returned from the device. But doesn’t necessarily prove that it’s fake, either.

      The CEO made a comment that the magic is in the software, and the sensors are just “glorified webcams”.

      So consider another possibility – it’s doing stereo vision analysis, optimized for capturing only the characteristics necessary to represent the position and direction of certain objects, like a hand and fingers.

      Perhaps it starts with a simple edge detection, which is computationally easy. Then it deconstructs the area between edges into a series of lines, with average elliptical curvature in the depth plane, that best (but not exactly) represents what it sees.

      This at least is technically possible, especially if Mr. Holtz is a math genius as claimed. And would certainly give you the results seen in the point cloud demos, if you were to render the ellipses as a series of points. None of the neighboring ellipses will be perfectly accurate as to length or curvature, and viewed together this would result in the slight randomness seen in the point cloud.

      I seriously doubt the device has 0.01mm pixel/sensor resolution. However, if you were to pick out a group of ellipses representing a particular feature (like a finger), by averaging together their characteristics into a single vector, that vector might indeed have interpolated accuracy in the sub-mm range. And that’s all that’s needed for this device to provide its intended function very well indeed.

      This would make it *useless* for general 3D scanning, though. Notice how they didn’t show anything in the demo that couldn’t be represented by ellipses, like a cube.

    • Santiago says:

      This is a fake too?

      They fooled a CBS reporter, and the entire CBS staff, so they publish this on behalf of CBS?… yeah, right.

  22. rasz says:

    “four-cubic-foot virtual workspace. Within that area, it tracks all ten of your fingers simultaneously to within 1/100 of a millimeter”

    can someone translate that to usable units?
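For the record, the quoted specs convert as follows (a plain unit conversion, assuming nothing beyond the quote):

```python
# Convert the quoted specs to metric.
workspace_ft3 = 4.0           # "four-cubic-foot virtual workspace"
precision_mm = 1 / 100        # "1/100 of a millimeter"

ft3_to_litres = 28.3168       # 1 cubic foot in litres
print(f"{workspace_ft3 * ft3_to_litres:.0f} litres")   # ~113 litres
print(f"{precision_mm * 1000:.0f} micrometres")        # 10 um
```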

  23. Pierre says:

    I’m calling vaporware on this. No way they deliver any units in December or January as promised.

  24. ts says:

    The following might be just pure imagination…

    Image airborne particulates with a sensor while emitting IR light,
    then solve an inverse problem using fluid mechanics from the distribution of the particulates in the images. (Fluid mechanics is the CTO’s expertise.)
    Finally, the device could infer the full shapes of objects inside the particulates.

    Is this possible? (I think it might be too futuristic…)

  25. spacepainter says:

    ultrasound?

  26. Jimny says:

    I cannot understand why there are people who believe this is the real thing. Let’s compare this with, say, a car manufacturer: you have a factory with all the engineers and designers and technicians (remember the cost of salaries and the factory), and you design a fabulous car that is the best in the world. Would you announce it via a website that suddenly popped up out of nowhere and, from a business point of view, launch sales without the physical product available, telling everyone about pre-order sales? Everything seems so wrong. We know it is not as easy as ABC to develop a technology or SDK, let alone sell it for only $70; we know how much R&D costs and how much engineers cost. Check out ubiqwindow and you will know the effort those people made to develop their product, and it is definitely not selling for $70. Look around the internet: there are soooo many newly launched websites all featuring some out-of-this-world technology and selling super cheap. And they are all selling through exactly the same method: pre-orders! It cannot be coincidence… come on, I was born in the night… but not last night!

  27. Vegetable Doctor says:
  28. Mythgarr says:

    Update today – Engadget goes hands-on:
    http://www.engadget.com/2012/05/25/leap-motion-gesture-control-technology-hands-on

    With a hands-on demo it becomes very difficult to convincingly fake something like this. The precision could still be WAY off from the 10µm previously stated, but I’m getting a little optimistic that maybe it’s not a complete hoax.

    • Lenny says:

      I have one (I got the dev kit) and it’s most certainly not a hoax. The precision is amazing! To anyone saying “I haven’t seen it do anything yet”, that’s because they are working on launching the Leap App Store with apps and games for this thing.

      • Caleb Kraft says:

        I wondered if they were holding off till they had some actual usable software. It seems like if anything kills the Leap it will be lack of early uptake; a product with no market is hard to survive on.

        I heard they were going to be incorporating them into some devices next year. That sounds like a good thing!

  29. Dabron says:

    How come no reporter (like Engadget’s) is using it himself? It’s always this guy with the uncombed hair playing with it.

  30. steve says:

    With a touchscreen, you break contact by pulling your finger away. Note he can’t easily break contact with this; he would have to remove his finger from the LED field. You NEED a physical flat surface in order to get your orientation and know when you are “in contact” or “out of contact”… even a plane in 3D space isn’t good enough; it’s just too inaccurate. That’s why he draws a line from the o to the h in “hello”.
    That said… it’s still pretty wicked, eh?
    I’m sure someone will hack it so it only registers points when you touch a surface. It’s fast and accurate; that’s cool.
    Maybe even a button in your left hand, or under your foot, or between your teeth or eyelids, to break contact, i.e. mouse up!
    The 3D spatial aspect of it… well, that’s going to be tricky to handle, so it will probably only be worthwhile under certain circumstances.

    I’ll have one!!

  32. steve says:

    In addition, I haven’t actually seen it do any actual control over a computer.
    Yes, the fingers are tracked, and it can recognise a pinch and make pretty patterns… but has anyone seen it actually press a button, or click a link, or type anything into a textbox??? So far, all I have read suggests that if this thing is such a technological breakthrough, then these guys are crap at PR… or perhaps, just perhaps, it’s the other way round, and this thing is crap and they’re doing a fabulous job of marketing it!!!

  33. Terrance Vadner says:

    Motion sensors can be a great deal of help, especially when you use them to recognize movement from objects. I personally use one on my alarm system.

  34. Jordan says:

    Well, it has at least 4 cameras and 5 IR LEDs, so that suggests some serious triangulation going on.

  35. chon says:

    The latest news about Leap Motion:

  36. Vincent says:

    It’s still in pre-order, and I have found no clarity on which OS X versions and, more importantly, whether Linux will be supported.
    They can all keep their crap if there will be no decent Linux support :).

  37. Blackbird says:
