Hackit: Leap Motion's New Motion Sensor

The big news circulating this morning is the Leap Motion sensor that will be hitting the market soon. The company claims its sensor is ~100 times more accurate than anything else on the market right now. Check out the video to see what they mean (there’s another at the link). This is pretty impressive. You can see the accuracy as they use a stylus to write things. If you’ve played with the Kinect, you know that it is nowhere near this tight. Of course, the Kinect scans a massive space compared to the work area this new sensor covers. The response time looks very impressive as well; motions seem to be perfectly in sync with the input. We’re excited to play with one when we get a chance.

So, why do we care as hackers? Well, we always care when a new toy arrives. That alone should be good enough. However, what we really like is the price tag. This thing is $69. That is a great thing to hear. At roughly half the cost of a Kinect, this is getting into a new market. As these prices drop, we might start to see motion input used as it really should be: a supplement to your other input devices. Undoubtedly, someone won’t actually read this article and one of the comments will be “your arms will get tired doing everything by waving your hands”. Yep, your arms would get tired. While these devices are relatively expensive, people tend to think of them as primary input devices. As the prices (and sizes) drop, we could start adding them to our laptops and keyboards. Sometimes you actually do want to wave your hand at the screen, when an application can use that naturally; then you go right back to the keyboard and mouse when that fits. If these got cheap enough, we could see them pop up in vending machines, making those ~100 times more sanitary!

Like everyone else, we really want to know how these work. We can see several demos of it in action in the videos. We’re familiar with common methods of doing this kind of thing. At one point, there’s a hand visualization that looks like it might be a very tightly packed point cloud (IR array? those points do jitter!). Then again, that could just be a fun little graphical representation. We can’t wait to see, so if any of you get your hands on one of the developer models, let us know!

73 thoughts on “Hackit: Leap Motion's New Motion Sensor”

  1. $69? Earth dollars?? Wow.

    Is this thing optimised for human hand detection or can it track random objects like walls, rocks…

    Could this be used to improve collision avoidance and orientation stability in quadcopters for instance?

  2. Looks amazing. I’m not sure I believe the price point + capability but I’d love to be proved wrong.

    About the arm ache – I’m not sure it’s a bad thing. I can’t believe I’m the only person who would like to feel physically (as well as mentally) tired after a day of coding.

    I spend 10 hours a day basically motionless, and then have to find a way to do something physical in my free time – if I could combine the two it’d be a dream come true!

      1. I have a kneeling chair, which is an improvement over a normal one. Standing up seems like an interesting idea. If I had a desk that could just be reconfigured I might try it, but I’m not willing to get a whole new desk in order to try it out just yet!

    1. The workspace is a 3D cube of 4 cubic feet. That works out to about 1.59 x 1.59 x 1.59 feet, or roughly 48.4 x 48.4 x 48.4 cm.

      I did a few tests.

      Moving one hand in such a space is fine. But when I tried with two hands, they did not fit; sweeps with two hands take more space.

      I also see that in the video they only use one hand.

      But with two-hand support and a transparent 3D monitor, one can imagine an Iron Man style user interface.

      1. @stegu In the press release they say: “The Leap creates a three-dimensional interaction space of 4 cubic feet to control a computer more precisely and quickly than a mouse or touchscreen, and as reliably as a keyboard.”

  3. Just pre-ordered one. $70 is worth it. Not to mention, for all the people that don’t pre-order it, they will be willing to pay more than $70 once the pre-order people get them. I’ll be selling mine on ebay for double.

  4. The preorder page takes ages to load, bummer!

    Anyway, games aside, this is going to be a game changer for disabled people too, if it can detect small changes in the user’s facial expression.

  5. I’m going to say “your arms will get tired doing everything by waving your hands” only because on their web site it says: “Say goodbye to your mouse and keyboard.”

    I can see great uses for this device, but if they are positioning it as a full replacement for a keyboard and mouse, I don’t buy it.

  6. What exactly is The Leap?
    The Leap is a small iPod sized USB peripheral that creates a 3D interaction space of 8 cubic feet to precisely interact with and control software on your laptop or desktop computer. It’s like being able to reach into the computer and pull out information as easily as reaching into a cookie jar.


  7. I’m not preordering. Their SSL certificate is a generic GoDaddy one: just a few bucks, no authentication. They have no privacy policy and no contact page. Their ‘preorder form’ is an automatically generated page from formstack.com, and who knows how secure the data ACTUALLY is. Their partner is ‘zazuba technology’, a generic WordPress job using the DEFAULT TEMPLATE and some spammy RSS feed.

    Not liking this at all.

      1. If they have $14 million, they should spend $10k of it to get their site properly designed, and work with a PCI-certified payment gateway to get their system sorted and secure.

      1. That is awesome work. How are you inferring the distance to the object, though, or would you rather not say?

        The filing mentions a plurality of IR cameras. Are you analysing the images for common features and then looking for the shift due to parallax? If so, I think the Kinect does it quite differently. And I know those guys approached others before going to Microsoft. (And they were extremely paranoid about getting shafted.)

  8. Just a stab in the dark here as there’s no detail of the sensor apparent, but I guess at that price the principle of operation is more like PS3 Eye than the Kinect.

    If I were doing this, I would illuminate the hands with an IR light. In the videos, the fingers are always fairly close to the screen. Fingertips would appear to the camera as bright dots, and motion tracking should get you sub-millimetre precision with a reasonable camera resolution.

    Depth perception could be as simple as comparing the relative brightness of the dots it sees – the further away, the dimmer they appear. To filter ambient light, the IR source could be modulated or simply switched off periodically for the duration of a frame. To get the fluidity of motion seen above you would need a camera capable of a pretty high frame rate, just like the device in Nintendo’s Wiimote.

    So that’s my guess: high speed medium resolution IR camera together with a bright IR LED which is being modulated in sync with the camera. If they were being really cheap, the bulk of the DSP work might even be being done on the host.
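
The brightness-based depth idea sketched in the comment above is easy to put into a few lines. Everything here is the commenter’s speculation, not anything Leap has confirmed: depth from the inverse-square falloff of a reflected IR dot, and ambient rejection by subtracting an LED-off frame.

```python
import math

def estimate_depth(brightness, ref_brightness, ref_depth_mm):
    # Inverse-square law: observed brightness falls off as 1/d^2,
    # so d = d_ref * sqrt(I_ref / I).
    return ref_depth_mm * math.sqrt(ref_brightness / brightness)

def reject_ambient(frame_led_on, frame_led_off):
    # Subtract an LED-off frame to remove ambient IR, clamping at zero.
    return [max(on - off, 0) for on, off in zip(frame_led_on, frame_led_off)]

# A dot calibrated to brightness 200 at 100 mm that now reads 50
# should be twice as far away.
print(estimate_depth(50, 200, 100))  # -> 200.0
```

In practice relative brightness is a very noisy depth cue (reflectivity varies per finger), which is why the comment pairs it with motion tracking rather than relying on it alone.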

    1. What do you mean, “medium resolution”? To distinguish 100 points per mm you would need around 900 megapixels for every square foot, and that’s just for regular 2D. Of course it’s fake.

      1. I meant that with clever motion tracking they could possibly infer the position of the fingers down to that precision with a sensor of only a few megapixels. (I have actually worked on something similar in my day job.)

        I am leaning more towards smoke and mirrors, though, after watching the CNET video. In it they are quite clearly plotting points on the screen corresponding to the back of the hand, which wouldn’t be visible to a sensor flat on the desk. It’s a shame, but again, I have been involved in the same sort of shenanigans when trying to bag some VC. :-/

  9. I tried to get a sample as a developer, but they didn’t even ask for a postal/zip code or a province/state. Not only that, they state it’s USB, but all the pictures show it without wires or any sort of connector. I think it’s a clever way to bilk people. Betcha it gets revealed as a scam in less than a week.

    1. The CNET video proves it is more complex than just identifying dots of IR light. It really maps objects into 3D point clouds, and very fast. The resolution apparently degrades quickly as you move the object away from the sensor. I’m guessing structured light with IR laser patterning instead of the lamp-based Kinect projection.

  10. As much as I would like this to be true, I think this is fake for a couple of reasons.

    Problem 1: Electrical Implementation

    Their claims are not implementable in reality. Let’s give them the benefit of the doubt: when they say 4 cubic meters, they mean a range of 1 x 1 x 4, so they really only need to sense a 1 m x 1 m grid. We’ll also assume the 1/100 mm precision is only available near the aperture, and that for most use it can only get about ~1/10 mm of precision. To be real-time it has to update at 24 fps, and it encodes the depth it detects in 4 bytes. Also, we’ll assume that by some miracle it is able to do this with only a single camera. Under these very favorable conditions, the device must be able to process
    10,000 pixels x 10,000 pixels x 4 bytes/pixel x 8 bits/byte x 24 fps = 76.8 Gbit/s (about the maximum bandwidth of DDR2 in peak conditions). If it has two cameras, the needed bandwidth is doubled, and if it can track color (which is needed for some of their demos), the needed bandwidth is tripled. That puts it far out of reach of the power budget for any peripheral using current technology, let alone the $69 cost.

    Problem 2: Physical implementation

    Just think about the placement of the device. Now think about how large the field of view would have to be to even be able to capture a moving object that close.

    Problem 3: Broken Physics

    In the demos they place the device on the table, so it is facing upward, looking at the bottom of the object. Yet somehow it is able to create a full eggshell model of the hand’s front and back, and to track an object that is being blocked by another.

    Problem 4: Incorrect Projection

    Go watch the video from 0:34 on; from there you can see the “detected hand” point cloud. However, the perspective and angle on it are from the point of view of the viewer, not the actual device: it frequently “detects” the top of the hand while losing samples from the side of the hand actually facing the device.

    Problem 5: No history

    None of the people from this company have any identifiable history whatsoever, and neither does the company. In fact, the domain name was registered only about a month ago. The company doesn’t exist. Also, they have only bothered to make one blog post, with no commenting allowed… Hmm.

    Problem 6: The actual device
    The device is described as using USB… however, there is not a single demo or example of it with a cord anywhere. The device has no visible ports, no visible means of assembly, and, curiously, no panel for the camera to view through.
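
For anyone who wants to check the arithmetic in Problem 1 (and the ~900 megapixel figure quoted earlier in the thread), here is the same back-of-envelope calculation, using the commenters’ own assumptions:

```python
# Reproducing the commenter's arithmetic from Problem 1:
pixels = 10_000 * 10_000          # 1 m x 1 m grid at 0.1 mm steps
bytes_per_pixel = 4               # assumed depth encoding
fps = 24                          # "real time" refresh

gbits_per_second = pixels * bytes_per_pixel * 8 * fps / 1e9
print(gbits_per_second)  # -> 76.8

# The ~900 megapixel claim also checks out:
# 100 points/mm across one foot (304.8 mm) per side.
megapixels_per_sq_ft = (100 * 304.8) ** 2 / 1e6
print(round(megapixels_per_sq_ft))  # -> 929
```

Both figures assume a dense raw pixel grid is actually transferred; if the device only reports a sparse set of tracked features, the bandwidth argument weakens considerably.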

    1. Very good point about broken physics; strange that I didn’t consider it. One explanation could be that they have a model of a pair of hands, so they are not actually detecting the upper part of the hands but rather inferring it from the position of the palms and fingers.

      I’m just playing the devil’s advocate here; I don’t think this technology is real.

    2. Funny – why would you assume this is an optical device? Ultrasound seems more likely. Sonar is a very mature technology that scales remarkably well. Also, that means they’re not dealing with the 10k x 10k pixel cloud that you assert – instead, they are dealing with the sparse point set detected by the ultrasound array.

      I’d love to think this is real, but since it seems to have come out of nowhere with little background information I believe it probably is a hoax.

    3. I’ve watched the point cloud portions of the demos probably two dozen times, because I’ve been wondering about this too.

      Look closely at the parts of the point cloud representing tops of hands (or chopsticks and arms), which the device obviously cannot see. Ignore the apparent randomness of individual points, and you’ll see that these regions are all represented by a series of perfectly elliptical arcs. In the case of the hand, this isn’t quite true to the real shape.

      With that in mind, inspect the bottom of the hand closely. It too is represented by perfectly elliptical arcs, omitting details like the cup of the palm; which should otherwise be visible as greater density in the point cloud when viewed at the correct angle.

      Together, the top and bottom form a complete symmetrical ellipse, looking more like a latex glove full of air than a real hand.

      Step through the video Diarmuid linked at 0:37, and you’ll even find a particular frame where a couple of ellipses (or elliptical groups) are separated enough from the rest of the point cloud to be individually seen.

      This certainly proves that the displayed point cloud is NOT a raw and unaltered point cloud returned from the device. But doesn’t necessarily prove that it’s fake, either.

      The CEO made a comment that the magic is in the software, and the sensors are just “glorified webcams”.

      So consider another possibility – it’s doing stereo vision analysis, optimized for capturing only the characteristics necessary to represent the position and direction of certain objects, like a hand and fingers.

      Perhaps it starts with a simple edge detection, which is computationally easy. Then it deconstructs the area between edges into a series of lines, with average elliptical curvature in the depth plane, that best (but not exactly) represents what it sees.

      This at least is technically possible, especially if Mr. Holtz is a math genius as claimed. And would certainly give you the results seen in the point cloud demos, if you were to render the ellipses as a series of points. None of the neighboring ellipses will be perfectly accurate as to length or curvature, and viewed together this would result in the slight randomness seen in the point cloud.

      I seriously doubt the device has 0.01mm pixel/sensor resolution. However, if you were to pick out a group of ellipses representing a particular feature (like a finger), by averaging together their characteristics into a single vector, that vector might indeed have interpolated accuracy in the sub-mm range. And that’s all that’s needed for this device to provide its intended function very well indeed.

      This would make it *useless* for general 3D scanning, though. Notice how they didn’t show anything in the demo that couldn’t be represented by ellipses, like a cube.
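
The interpolation argument two paragraphs up is easy to demonstrate with synthetic data. The numbers below (0.5 mm per-point jitter, 400 points per feature) are made up purely to illustrate the averaging effect, which shrinks the error by roughly 1/sqrt(N):

```python
import random
import statistics

# Hypothetical illustration: individual points with 0.5 mm jitter,
# averaged over the ~400 points representing one feature, should
# land within about 0.5 / sqrt(400) = 0.025 mm of the true position.
random.seed(1)
true_mm = 50.0
samples = [random.gauss(true_mm, 0.5) for _ in range(400)]
estimate = statistics.fmean(samples)
print(abs(estimate - true_mm) < 0.1)  # -> True
```

So a sensor far coarser than 0.01 mm per pixel could still report sub-millimetre feature positions, which is all the comment claims.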

  11. “four-cubic-foot virtual workspace. Within that area, it tracks all ten of your fingers simultaneously to within 1/100 of a millimeter”

    Can someone translate that to usable units?
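
For what it’s worth, the conversion is straightforward, assuming the “four cubic feet” is a cube:

```python
# Translating "4 cubic feet" and "1/100 of a millimeter" into metric.
FT_TO_M = 0.3048

side_ft = 4 ** (1 / 3)            # side of a 4 cubic foot cube
side_cm = side_ft * FT_TO_M * 100

print(round(side_ft, 2))   # -> 1.59 feet per side
print(round(side_cm, 1))   # -> 48.4 cm per side

precision_um = 0.01 * 1000        # 1/100 mm in micrometres
print(precision_um)        # -> 10.0
```

So: a cube roughly half a metre on a side, with claimed 10-micrometre tracking precision.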

  12. The following might be just pure imagination…

    Imaging airborne particulates with an image sensor while emitting IR light, then solving an inverse problem with fluid mechanics from the distribution of the particulates in the images (fluid mechanics is the CTO’s expertise). Finally, the device could infer the full shapes of objects inside the particulates.

    Is this possible? (I think it might be too futuristic…)

  13. I cannot understand why there are people who believe this is the real thing. Let’s compare this with, say, a car manufacturer. You have a factory with all the engineers, designers, and technicians (remember the cost of salaries and the factory). You design a fabulous car and it is the best in the world. You announce it via a website that popped up out of nowhere. From a business point of view, would you launch sales without the physical product available and tell everyone about pre-order sales? Everything seems so wrong. We know it is not as easy as ABC to develop a technology or SDK, let alone sell it for only $70; we know how much R&D would cost and how much engineers cost. Check out ubiqwindow and you will see the effort those people made to develop their product, and it is definitely not selling for $70. Look around the internet: there are so many newly launched websites, all featuring some out-of-this-world technology, all selling super cheap, and all selling through exactly the same method – pre-orders! It cannot be coincidence. Come on, I was born at night… but not last night!

    1. I have one (I got the dev kit) and it’s most certainly not a hoax. The precision is amazing! To anyone saying “I haven’t seen it do anything yet”, that’s because they are working on launching the Leap App Store with apps and games for this thing.

      1. I wondered if they were holding off until they had some actual usable software. It seems like if anything kills the Leap, it would be lack of early uptake; having a product with no market is hard to survive.

        I heard they were going to be incorporating them into some devices next year. That sounds like a good thing!

  14. With a touchscreen you break contact by pulling your finger away. Note he can’t easily break contact with this; he would have to remove his finger from the LED field. You NEED a physical flat surface to get your orientation and know when you are “in contact” or “out of contact” … even a plane in 3D space isn’t good enough – it’s just too inaccurate. That’s why he draws a line from the ‘o’ to the ‘h’ in “hello”.
    That said … it’s still pretty wicked, eh?
    I’m sure someone will hack it so it only registers points when you touch a surface – it’s fast and accurate, and that’s cool.
    Maybe even a button in your left hand, or under your foot, or between your teeth or eyelids to break contact – i.e. mouse up!
    The 3D spatial aspect of it … well, that’s going to be tricky to handle, so it will probably only be worthwhile under certain circumstances.

    I’ll have one!!

  15. In addition, I haven’t actually seen it do any actual control over a computer.
    Yes, the fingers are tracked, it can recognise a pinch, and it can make pretty patterns … but has anyone seen it actually press a button, click a link, or type anything into a textbox??? So far, all I have read suggests that if this thing is such a technological breakthrough, then these guys are crap at PR … perhaps, just perhaps, it’s the other way round … and this thing is crap and they’re doing a fabulous job of marketing it!!!
