3D Scanning Entire Rooms With A Kinect

Almost by definition, the coolest technology and bleeding-edge research is locked away in universities. While this is great for post-docs and their grant-writing abilities, it's not the best system for people who actually want to use this technology. A few years ago, and many times since then, we've seen a bit of research that turned a Kinect into a 3D mapping camera for extremely large areas. This is the future of VR, but a proper distribution has been held up by licenses and a general IP-rights rigamarole. Now, the source code for this technology, Kintinuous and ElasticFusion, is available on GitHub, free for everyone to use (non-commercially).

We’ve seen Kintinuous a few times before – first in 2012, when the possibilities of mapping large areas with a Kinect were shown off, then in an improvement that mapped a 300-meter-long path through a building. With the introduction of the Oculus Rift, inhabiting these virtual scanned spaces became even cooler. If there’s a future in virtual reality, we’ll need a way to capture real life and make it digital. So far, this is the only software stack that does it on a large scale.
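The trick that lets Kintinuous cover paths that long, as described in its papers, is letting a fixed-size dense reconstruction volume follow the camera: voxel slabs that fall off the back of the moving volume get meshed and stored, and fresh space opens up ahead. Here's a minimal numpy sketch of that rolling-volume idea; the grid size, voxel size, and function names are all illustrative, not Kintinuous's actual internals.

```python
import numpy as np

# Sketch of Kintinuous-style volume shifting: a fixed-size TSDF voxel
# grid follows the camera, and slabs that leave the volume are handed
# off (the real system meshes them; here we just collect them).
VOXELS = 64            # voxels per axis (real systems use far more)
VOXEL_SIZE = 0.05      # metres per voxel -> a 3.2 m cube

tsdf = np.ones((VOXELS, VOXELS, VOXELS), dtype=np.float32)  # 1.0 = empty
volume_origin = np.zeros(3)          # world position of voxel (0, 0, 0)
archived_slabs = []                  # stand-in for extracted mesh pieces

def shift_volume(camera_pos):
    """Shift the volume so it stays centred on the camera (x axis only)."""
    centre = volume_origin[0] + VOXELS * VOXEL_SIZE / 2
    shift = int((camera_pos[0] - centre) / VOXEL_SIZE)
    if shift <= 0:
        return
    # The slab leaving the volume gets archived (Kintinuous meshes it).
    archived_slabs.append(tsdf[:shift].copy())
    # Roll the grid and clear the slab that wrapped around to the front.
    tsdf[:] = np.roll(tsdf, -shift, axis=0)
    tsdf[-shift:] = 1.0
    volume_origin[0] += shift * VOXEL_SIZE

# Walk the camera forward; the volume follows, archiving what it leaves.
for step in range(100):
    shift_volume(camera_pos=np.array([step * 0.1, 0.0, 0.0]))
print(len(archived_slabs), "slabs handed off for meshing")
```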

If you’re thinking about using a Raspberry Pi to take Kintinuous on the road, you might want to look at the hardware requirements. A very fast Nvidia GPU and a fast CPU are required for good results. You also won’t be able to use it with robots running ROS; these bits of software simply don’t work together. Still, we now have the source for Kintinuous and ElasticFusion, and I’m sure more than a few people are interested in improving the code and bringing it to other systems.
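Before sinking time into a build, it's worth confirming a CUDA-capable NVIDIA card is even visible. Here's a quick sanity check, assuming the NVIDIA driver and its `nvidia-smi` tool are installed; passing it only means a card is present, not that it's fast enough for good results.

```python
import shutil
import subprocess

# ElasticFusion/Kintinuous want a recent NVIDIA GPU with CUDA. This
# just queries nvidia-smi for installed cards and their memory.
if shutil.which("nvidia-smi") is None:
    raise SystemExit("No nvidia-smi found: no NVIDIA driver, so no CUDA.")

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.strip().splitlines():
    name, mem = [field.strip() for field in line.split(",")]
    print(f"Found GPU: {name} ({mem})")
```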

You can check out a few videos of ElasticFusion and Kintinuous below.

25 thoughts on “3D Scanning Entire Rooms With A Kinect”

  1. A commercially available rig, or an easily put-together DIY one, could do wonders for first-person shooter games.

    Imagine mapping your workplace or school, surreptitiously scanning the entire building complex and grounds piece by piece, day by day, then getting the computer to stitch all the parts together into a map where all your co-workers / schoolfriends can roam around having great big battles.

    1. Yeah, I can see the school/work thing turning out badly, and not just in the US. But that would probably work really well for SWAT teams, who might be able to load up a building or some other structure and practice a breach on it before actually doing the real thing.

  2. The state of the art now uses normal cameras rather than Kinects. Unless you have a suitable Kinect, it might be worth just waiting a few months for someone to release some of the “depth from a single image” work.

      1. https://vision.in.tum.de/research/vslam/lsdslam – monocular, large-scale, real-time. Slightly less dense than Kinect. Code available.

        But yes, Kinect does have certain advantages. Monocular methods need textural detail to work well, so they can only guess at large, flat, uniform surfaces. Depth cameras are the best for, say, building detailed 3D models of things, whereas monocular methods are better for mapping and distance vision.

        1. This is great, thanks for that. I’m going to go out on a limb here (and probably fall off it, given my shaky grasp of SLAM) and say that humans rarely make mistakes about 3D structure if they can move around, so it’s likely that monocular will be practical for most uses within a year.

        2. http://vision.in.tum.de/data/software/dvo – Kinect, large-scale, real-time. Of course, still less dense than the two in the article, but if you need a dense map, you can always do a second pass with the known positions from the first run and fill in the “holes” with another algorithm. ETH Zurich had a paper showing that some time ago.

          I think this one needs texture in the images as well, since it is still doing photometric error minimization, but it uses the known depth to transform (move) pixels from the previous image to where they would be in the new image, roughly the warping step sketched below.
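          A minimal numpy sketch of that warping step: back-project each pixel of the previous frame using its known depth, move it by a candidate camera motion, re-project it into the new frame, and score the photometric error. The intrinsics are made-up Kinect-ish values, and a real tracker like DVO would minimize this error over a 6-DoF motion rather than just evaluating it once.

          ```python
          import numpy as np

          # Warping step of direct RGB-D tracking: back-project pixels with
          # known depth, move them by a candidate motion (R, t), re-project,
          # and compare intensities. Intrinsics below are illustrative.
          H, W = 480, 640
          fx = fy = 525.0           # focal lengths in pixels
          cx, cy = W / 2, H / 2     # principal point

          def photometric_error(img_prev, depth_prev, img_new, R, t):
              v, u = np.mgrid[0:H, 0:W].astype(np.float64)
              z = depth_prev
              # Back-project pixel (u, v) with depth z into 3D points.
              pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)
              # Move the points into the new camera frame and re-project.
              pts = pts @ R.T + t
              u2 = np.round(fx * pts[..., 0] / pts[..., 2] + cx).astype(int)
              v2 = np.round(fy * pts[..., 1] / pts[..., 2] + cy).astype(int)
              valid = (pts[..., 2] > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
              # Photometric residual: old intensity vs intensity where it landed.
              diff = img_prev[valid] - img_new[v2[valid], u2[valid]]
              return np.mean(diff ** 2)

          # Identity motion on a static scene gives zero error:
          img = np.random.rand(H, W)
          depth = np.full((H, W), 2.0)   # flat wall two metres away
          print(photometric_error(img, depth, img, np.eye(3), np.zeros(3)))
          ```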

      1. Active sensing is prone to failure in some environments. Example: my team were in a robot boat competition a year ago. The best teams had vision-based sensing to augment the depth stuff, because around midday the heat fuzzed out the $20k IR laser depth-scanners.

  3. Locked up in universities, and when the professor retires, all that good stuff gets tossed out. Look up the Calumet CAD-LAB: they did a ton of neat stuff for CNC machining, but unless someone, somewhere, has an archive of their FTP server, everything they did is gone for good, since the professor who ran that project in the ’80s and ’90s retired a couple of years ago.

    Around the turn of the century, they developed a CAD/CAM program for the ProLIGHT PLM2000 CNC milling machine. It didn’t directly operate the mill, but it did generate G-code specifically for that machine. The software was cross-platform for Windows, Linux, and Mac (which would have been OS 8 or 9, not OS X).

    I would love to get a copy of that software but it’s likely impossible now.

      1. The expensive commercial software can, or one can buy (also expensive) add-ons that do. On the open-source side, there’s very little CNC software to speak of outside of 3D printing. For CNC mills/routers, there’s BlenderCAM and not much else, which is a shame; a CNC mill/router is, in my experience, more widely applicable than a 3D printer.
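         To make concrete what this kind of CAM software actually produces, here's a toy sketch that emits G-code for a straight slot cut in several depth passes. The feeds, depths, and dialect are generic examples, not what the lost PLM2000 software (or BlenderCAM) generates.

         ```python
         # Toy CAM post-processor: emit G-code for a straight slot milled
         # in multiple depth passes. Values here are illustrative only.
         def slot_gcode(x0, y0, x1, y1, depth, step, feed=200.0, safe_z=5.0):
             lines = [
                 "G21 (millimetre units)",
                 "G90 (absolute coordinates)",
                 f"G0 Z{safe_z:.3f}",
                 f"G0 X{x0:.3f} Y{y0:.3f}",
             ]
             z = 0.0
             at_start = True
             while z > -depth:
                 z = max(z - step, -depth)
                 lines.append(f"G1 Z{z:.3f} F{feed / 2:.1f} (plunge)")
                 # Alternate direction each pass, zigzagging along the slot.
                 tx, ty = (x1, y1) if at_start else (x0, y0)
                 lines.append(f"G1 X{tx:.3f} Y{ty:.3f} F{feed:.1f}")
                 at_start = not at_start
             lines += [f"G0 Z{safe_z:.3f}", "M2 (end of program)"]
             return "\n".join(lines)

         print(slot_gcode(x0=0, y0=0, x1=40, y1=0, depth=3.0, step=1.0))
         ```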

    1. The Kinect 2 is not designed for running multiple sensors at the same time; they may interfere with each other. Microsoft has said this is nondeterministic and that they don’t have any way to synchronize the “phase” of multiple sensors, so it’s a crapshoot whether it will work or not. People have gotten around this with the Kinect 1 with a remarkable hack: vibrating one of the sensors. This blurs its projection with respect to the other sensor but allows it to still work normally. http://www.precisionmicrodrives.com/tech-blog/2012/08/28/using-vibration-motors-with-microsoft-kinect

  4. Hi, I’ve been trying some scanning. I’m a complete amateur starting from zero.
    I was planning on using Scenect, but I’m having problems, as my MacBook Pro only has USB 3.0 and Scenect will not pick up the Kinect.
    I got it working with Skanect in OS X, but the results were not great.
    Anyway, my question is: if I install Ubuntu on my MacBook Pro Retina, will I be able to get this project running?
    Thanks
    Jack
