Portabilizing The Kinect

Way back when the Kinect was first released, there was a realization that this device would be the future of everything 3D. It was augmented reality, it was a new computer interface, it was a cool sensor for robotics applications, and it was a 3D scanner. When the first open source driver for the Kinect was released, we were assured that this was how we would get 3D data from real objects into a computer.

Since then, not much has happened. We’re not using the Kinect for a UI, console gamers were horrified they would be forced to buy the Kinect 2 with the new Xbox, and you’d be hard-pressed to find a Kinect in a robot. 3D scanning is the only field where the Kinect hasn’t been overhyped, and even there it’s still a relatively complex setup.

This doesn’t mean a Kinect 3D scanner isn’t an object of desire for some people, or that it’s impossible to build a portabilized version. [Mario]’s girlfriend works as an archaeologist, and having a tool to scan objects and places in 3D would be great for her. Because of this, [Mario] is building a handheld 3D scanner with a Raspberry Pi 2 and a Kinect.

This isn’t the first time we’ve seen a portabilized Kinect. Way back in 2012, the Kinect was made handheld with the help of a Gumstix board. Since then, a million tiny ARM single board computers have popped up, and battery packs are readily available. It was only a matter of time until someone stepped up to the plate, and [Mario] was the guy.

The problem facing [Mario] isn’t hardware. Anyone can pick up a Kinect at GameStop, the Raspberry Pi 2 should be more than capable of reading the depth sensor on the Kinect, and everything can be tied together with a few 3D printed parts. The real problem is the software, and so far [Mario] has libfreenect compiling without a problem on the Pi 2. The project still requires a lot of additional libraries, including some OpenCV stuff, but [Mario] has everything working so far.
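For anyone who wants to poke at the same stack, grabbing a depth frame out of libfreenect only takes a few lines once the library is built. The snippet below is a minimal sketch, not [Mario]’s code; it assumes the Python wrapper that ships with libfreenect and a first-generation Kinect on the USB port.

    # Minimal depth grab through libfreenect's Python bindings (a sketch, not [Mario]'s code).
    # Assumes libfreenect was built with its Python wrapper and a Kinect v1 is attached.
    import freenect

    # sync_get_depth() returns a 480x640 NumPy array of raw 11-bit depth values plus
    # a timestamp; 2047 marks pixels where the sensor got no reading.
    depth, _timestamp = freenect.sync_get_depth()

    valid = depth[depth < 2047]
    print("frame:", depth.shape, "valid pixels:", valid.size)

From there the depth array can be handed off to whatever OpenCV or point cloud code the rest of the pipeline needs.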

You can check out his video of the proof of concept below.

37 thoughts on “Portabilizing The Kinect”

  1. I always thought resolution was the problem. It might be good enough to detect arms, head and legs. It might even be good enough to map your fingers close up, but anything finer than that may not be very accurately captured.

      1. You can’t switch to the Kinect 2, because there is no way to use it without Microsoft’s proprietary SDK, which doesn’t work on Linux, let alone on ARM :(

        The Kinect actually has plenty of resolution for scanning – the data are accumulated over multiple passes and optimized with some fairly complex math, and you can get very good results.

          1. FALSE. If memory serves, Windows RT runs on ARM, and Windows RT will be getting Win10 features… but not Win10. Win10 IoT, which runs on the Raspberry Pi, is NOT Win10. You would need ARM-capable drivers for the Kinect 2 regardless of platform, and at this time I don’t believe they exist.

    1. How much accuracy do you want?

      I’m planning my next project (after the Hackaday prize thing is over), and I’m thinking of writing some image processing algorithms for resolution enhancement.

      Your comment just gave me an idea, which I think I can leverage for more resolution.

      So… what’s your application, and what resolution do you need?

      (IIRC, in the Kinect depth image each pixel covers about 2 mm. I think I can increase this with statistical analysis – a rough sketch of that kind of frame averaging follows this thread.)

      1. Since the subject matter is archaeology for this post, I would imagine sub-mm accuracy will be necessary to capture details of remains and artifacts.

        Of course, mm accuracy should be good enough to capture geographic features. That being said, I am not sure how well the Kinect performs in an outdoor environment. (Perhaps it could be used at night.)

        Accuracy of 100 microns would be ideal, but I am not sure how realistic that is.
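On the resolution question above: one rough sketch of the “statistical analysis” idea is simply to average many raw depth frames of a static scene while ignoring dropouts, since random noise shrinks roughly with the square root of the number of frames. The toy snippet below illustrates that and nothing more – it reuses the hypothetical freenect grab from earlier and does nothing about quantization or systematic bias, which is where real scanning software earns its keep.

    # Toy temporal averaging of Kinect depth frames (illustration only).
    # Averaging N frames of a static scene cuts random noise by roughly sqrt(N),
    # but cannot fix quantization steps or systematic depth bias.
    import freenect
    import numpy as np

    N_FRAMES = 30
    INVALID = 2047  # raw 11-bit "no reading" value

    total = np.zeros((480, 640))
    count = np.zeros((480, 640))

    for _ in range(N_FRAMES):
        depth, _ = freenect.sync_get_depth()
        good = depth < INVALID
        total[good] += depth[good]
        count[good] += 1

    # Per-pixel mean where at least one valid reading was seen, NaN elsewhere.
    averaged = np.where(count > 0, total / np.maximum(count, 1), np.nan)
    print("pixels with at least one valid sample:", int((count > 0).sum()))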

  2. It seems there is no need to step up to 12V.

    source:
    http://www.eevblog.com/forum/reviews/kinect-teardown/

    quote:
    You’ll note that R4 is circled. That sets the UVLO for the 12V input, which is regulated down to 3.3V before anything uses it. The 3.3V buck regulator (a ST L6728) is capable of operating from 5V, but there’s an external UVLO that prevents operation at that voltage. Simply add a 4.7k resistor in parallel with R4 and the Kinect will operate from a 5V supply. Current draw is about 700-800mA.

  3. The problem with the Kinect in robotics is its size. Hobby roboticists build smaller platforms, and with a Raspberry Pi 2 it’s probably the same amount of work to get two cheap VGA cameras running with OpenCV – much lighter and cheaper.

        1. You can do a LOT of GPGPU work with pixel shaders. Even though the Pi’s GPU is weak, used properly it is far more useful than the CPU (which is way too weak for any OpenCV stuff). The bottleneck in these setups is usually reading the pixels back from the GPU and blocking the bus.

        2. Not if you’re trying to implement everything that OpenCV does. But the Pi does come with a [relatively] decent GPU. You can do a LOT of GPGPU work in pixel shaders with a bit of effort. Eradicating noise takes about 2 passes (<2 ms), segmentation another pass. Creating feature descriptors and then tracking them is another few pixel shader renders… With all that you can detect the camera’s position in real time, even on a Pi.

  4. This is great, but there is a company called Occipital that has continued developing the PrimeSense technology with their own Structure Sensor. It’s smaller and supports all platforms through its OpenNI 2.0 libraries, but its strength is iOS development, with an SDK supporting iOS 8. As an independent engineer/developer I make a line of accessories, including cases that support the Grip & Shoot Bluetooth grip and the sensor. I also create lens systems for better mesh resolution, and an app coming out soon will offer more “Pro”-like features for the sensor and the RGB camera in the iPhone. You can visit my website http://slo3dcreators.com to see examples and order accessories, and order the sensor itself at http://structure.io

  5. When we played with the Kinect for a scanner linked to a CNC at a Maker Faire back in 2011, we couldn’t get the Kinect’s infrared camera to work reliably at all outdoors. Looking around at that time, the consensus seemed to be that it would only work out of direct light, so we ended up constructing an enclosure for the people being scanned (the space we were using was an outdoor stand).

    1. That’s correct. The Kinect and similar tech all use some sort of IR light to flood the area, measuring either the pattern of dots or the speed of return to tell how far away things are.

      It also means that other IR emitters will mess up the signal, and the sun puts out a lot of IR. Things like the Leap Motion also get ‘confused’ in the presence of sunlight.

      That’s why I think any 3D sensing tech that requires emitting energy will be a dead end. SLAM and similar algorithms are the way forward: that’s how our own eyes and brain work, and it is also strictly better with regard to energy conservation. Case in point: we have a Google Tango tablet. In 3D scanning mode, the device only has enough power for half an hour of operation before shutting down. If it only opened multiple webcams, it could stay up far longer.

    1. I assume on Windows?

      There are two drivers: the official MS ones, included with the MS Kinect SDK, and the OpenNI SDK, which uses generic libusb drivers. Use a program like Zadig (http://zadig.akeo.ie/) to install the libusb drivers, or install the MS SDK.

      The scanning software you choose will determine which drivers to install. Most of them use OpenNI.

      There’s plenty of commercial software around; a simple Google search will give you lots of results. I have had some good results with Skanect (http://skanect.occipital.com/), for example.

      There are some free solutions too, but these usually only export point clouds and require doing the surface reconstruction in external software (compute vertex normals from the point cloud and do Poisson reconstruction in Meshlab, for example).
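If you would rather script that last step than click through Meshlab, the same normals-plus-Poisson pipeline can be sketched with the Open3D Python library – that’s a substitution on our part, not something the commenter mentioned – and the file name below is just a placeholder.

    # Point cloud -> vertex normals -> Poisson surface reconstruction, scripted
    # with Open3D instead of Meshlab's GUI. "scan.ply" is a placeholder path.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.ply")

    # Estimate and consistently orient normals; Poisson needs oriented normals.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=30)

    # A higher depth gives a finer mesh at the cost of memory and time.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)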

  6. Didn’t I read somewhere that the Kinect 2 uses a different underlying capture technology (time of flight?) and that its suitability for 3D scanning isn’t great, so its capabilities aren’t *that* much better than the Kinect 1’s? Also possibly to do with the limitations of the MS Kinect SDK?
    Can’t quite remember, so it surprises me that 3D scanning is ‘on the cards’.
