The Race Is On To Build A Raspi Kinect 3D Scanner


The old gen 1 Kinect has seen a fair bit of use in the field of making 3D scans out of real-world scenes. Now that Xbox 360 Kinects are winding up at yard sales and your local Goodwill, you might even have a chance to pick one up for pocket change. Until now, though, scanning objects in 3D has only been practical in a studio or workshop setting; for a mobile, portable scanner, you’d need to lug around a computer and a power supply, and that’s not really something you can fit in a backpack.

Now, finally, that may be changing. [xxorde] can now get depth data from a Kinect sensor with a Raspberry Pi. And with just about every other ARM board out there as well. It’s a kernel driver that’s small, fast, and does just one thing: turns the Kinect into a webcam that displays depth data.
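For the curious, the practical upshot is that depth frames can then be read like any ordinary webcam stream. Here’s a minimal sketch in Python with OpenCV, assuming the driver exposes the depth stream as a standard V4L2 device at index 0 (the index is an assumption; check /dev/video* on your system):

    # Minimal sketch: read depth frames from the V4L2 device the driver creates.
    # Device index 0 is an assumption; it may differ on your setup.
    import cv2

    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("could not open the depth video device")

    while True:
        ok, frame = cap.read()       # one depth frame, delivered like a webcam image
        if not ok:
            break                    # stream ended or the frame was dropped
        cv2.imshow("Kinect depth", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()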

Of course, a portabilized Kinect 3D scanner has been done before, but that was with an absurdly expensive Gumstix board. With a Raspi or BeagleBone Black, this driver has the beginnings of a very cheap 3D scanner that would be much more useful than the current commercial or DIY desktop scanners.

29 thoughts on “The Race Is On To Build A Raspi Kinect 3D Scanner”

  1. This is just the gspca-based Kinect kernel driver that has existed since 2011, with minor modifications to return the depth stream instead of the RGB “web cam” stream.

    The interesting part would have been to see what a severely underpowered device is supposed to do with the data, and how well getting a depth frame that isn’t missing parts actually works on a Raspi…

    1. And in case anyone is wondering why it includes a copy of the gspca code: the original code only returns the RGB stream because the gspca framework is meant for webcams, where only the first USB endpoint is used. The Kinect, however, has two endpoints, one for the RGB camera and one for the depth camera. That’s why this modified version needs a patched gspca, and it’s also why there is no code for the depth stream in the mainline kernel.
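       You can see the two endpoints for yourself from userspace. A quick sketch with pyusb, assuming the Kinect camera enumerates with the usual Xbox 360 USB ID 045e:02ae (run it with sufficient permissions):

         # Minimal sketch: list the Kinect camera's endpoints with pyusb.
         # The USB ID 045e:02ae is the usual Xbox 360 Kinect camera; adjust if yours differs.
         import usb.core
         import usb.util

         dev = usb.core.find(idVendor=0x045E, idProduct=0x02AE)
         if dev is None:
             raise SystemExit("Kinect camera not found")

         for cfg in dev:          # configurations
             for intf in cfg:     # interfaces
                 for ep in intf:  # endpoints
                     kind = usb.util.endpoint_type(ep.bmAttributes)
                     iso = "isochronous" if kind == usb.util.ENDPOINT_TYPE_ISO else "other"
                     print("endpoint 0x%02x: %s" % (ep.bEndpointAddress, iso))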

  2. Does anyone know what is going on with the Kinect 2 from the new Xbox One? When will it get released? I only noticed that there was a sign-up period for 500 developers to receive early access. But when will tinkerers get to tinker with it?

  3. The RasPi probably won’t be powerful enough to generate a mesh, and the Kinect is terrible when used as a handheld device. In my experience it loses its bearings too quickly.

      1. It’s not outputting the graphics that’s the problem. I think the problem is feeding the Pi a 2D image of your subject, overlaid with a laser grid, and expecting it to turn that into even a point cloud, and from there into a polygonal representation (there’s a sketch of the back-projection step at the end of this thread).

        That all needs a lot of power, and I don’t think even the Xbox 360 bothers doing that; a lot of the time it just generates skeletons from the centres of mass of the large moving things in the image, i.e. people.

        There’s a LOT of work in getting a real representation of the world from the view a Kinect gives, even if you can move it about.

        The Ras Pi’s 3D graphics don’t matter at all in this. They’re not reprogrammable in some CUDA-like way to help with the maths. It’s the main CPU that would do all the work, and it’s underpowered for real high-end stuff. Not that it hasn’t brought comparatively “large” amounts of MIPS to places that never had them before, but it’s not a powerful number cruncher.

        It’s also, as you kind of prove yourself, pretty irrelevant to compare CPU power of a few years ago to now, especially when you’re talking about 3D graphics. The Ras Pi blows the late 1990s away, but so do most phones.

        1. With the v3d stuff that was released a while back, the 3D cores could be programmed to aid in this task,

          but it currently lacks any standard CUDA-like API, so you would have to do all that coding directly in v3d assembly.

      2. Yes, well done. My washing machine ‘blows away’ the performance of a decrepit SGI too, but neither that nor the Pi are any match for a modern GPU, and neither is close to enough power to run a 3D scan.

        You may well be able to record a scan and leave it to process over many hours, but to get the best results you need that real-time feedback on whether it is maintaining its lock on its surroundings.

      3. I don’t think the comparison to old SGIs matters. The software I’ve seen for working with PrimeSense-based devices often struggles to run well on a brand-new quad-core i7. I’m not seeing how the RasPi can compete with that, unless there’s some special “raspberry magic” going on.
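        To make the point-cloud step above concrete: back-projecting a depth frame is just the pinhole camera model applied per pixel, and it’s cheap next to the meshing and tracking that follow. A minimal NumPy sketch, assuming a 640x480 depth frame in millimetres and rough, commonly quoted Kinect intrinsics (real values vary per unit and want calibration):

          # Minimal sketch: back-project a 640x480 depth frame (in mm) into an
          # Nx3 point cloud with the pinhole model. The intrinsics are rough,
          # commonly quoted Kinect values; a real build needs calibration.
          import numpy as np

          FX, FY = 594.2, 591.0   # assumed focal lengths in pixels
          CX, CY = 320.0, 240.0   # assumed principal point

          def depth_to_points(depth_mm):
              h, w = depth_mm.shape
              u, v = np.meshgrid(np.arange(w), np.arange(h))
              z = depth_mm.astype(np.float32) / 1000.0   # mm -> metres
              x = (u - CX) * z / FX
              y = (v - CY) * z / FY
              pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
              return pts[pts[:, 2] > 0]                  # drop pixels with no reading

          # Synthetic frame for illustration; a real one would come from the driver.
          cloud = depth_to_points(np.full((480, 640), 1500, dtype=np.uint16))
          print(cloud.shape)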

  4. Has anyone fixed the Raspberry Pi’s isochronous USB transfers yet? Or is it still an unsolvable hardware problem? I could not get USB cams, USB audio, or Bluetooth A2DP to work stably enough because of this (i.e. working without lost frames, crashing drivers, or jerky audio).

      1. That’s a rather broad metaphor for the very specific problem the chap is mentioning. Do you know much about the Pi’s problems with isochronous USB transfers? Are you saying this mode is impossible because of a lack of CPU speed? If that’s what you’re saying, it would be more helpful to say so specifically, since it would eliminate other possible causes.

      2. It’s not a CPU thing; I used the same Logitech USB 2.0 webcam on other, slower platforms before (a Philips TriMedia processor). The Raspberry Pi’s isochronous USB transfer problem was known from the start. Some blame the LAN9512 Ethernet/USB hub chip for taking priority and interrupting the data stream when it thinks it sees some Ethernet data. But the strange thing is that it also happens on the Model A. The problem becomes more apparent when multiple USB devices are connected, for example a keyboard and a webcam.
        From the forums it looks like they tried to bypass this problem in software, but it is actually a USB hardware stack issue on the CPU chip.

        It also looks like the BeagleBone Black does not have this problem and can work with USB webcams properly. So my opinion is: don’t do this Kinect stuff on the Raspberry Pi. Use a BeagleBone Black or something else, but not the Raspberry Pi.
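        If you want to gauge the problem on your own board, a crude test is to count the frames you actually receive against the rate the camera advertises. A minimal sketch with OpenCV (device index 0 is an assumption):

          # Minimal sketch: crude throughput test for a V4L2 camera, useful for
          # comparing delivered frame rate on a Raspberry Pi vs. another board.
          import time
          import cv2

          cap = cv2.VideoCapture(0)          # assumed device index
          if not cap.isOpened():
              raise RuntimeError("could not open camera")

          frames, start = 0, time.time()
          while time.time() - start < 10.0:  # sample for ten seconds
              ok, _ = cap.read()
              if ok:
                  frames += 1

          elapsed = time.time() - start
          print("%d frames in %.1fs = %.1f fps" % (frames, elapsed, frames / elapsed))
          cap.release()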

  5. Does it have to use a Kinect, or can I just use a webcam plus software to construct a 3D mesh from 2D images? It would be a significantly cheaper build.
