3D mapping of huge areas with a Kinect

The picture you see above isn’t a doll house, a noclipped video game, or any other artificially created virtual environment. That bathroom exists in real life, but was digitized into a 3D object with a Kinect and Kintinuous, an awesome piece of software that allows for the creation of huge 3D environments in real time.

Kintinuous is an extension of the Kinect Fusion and ReconstructMe projects. Where Fusion and ReconstructMe were limited to mapping small areas in 3D – a tabletop, for example – Kintinuous allows a Kinect to be moved from room to room, mapping an entire environment in 3D.

The paper for Kintinuous is available, going over how the authors are able to capture point cloud data and overlay the color video to create textured 3D meshes. After the break are two videos showing off what Kintinuous can do. It’s jaw-dropping, and the implications are amazing. We can’t find the binaries or source for Kintinuous, but if anyone finds a link, drop us a line and we’ll update this post.
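In broad strokes, the first step the paper describes – turning each depth frame plus the registered color frame into a colored point cloud – is just pinhole back-projection. Here is a minimal numpy sketch of that idea; the intrinsics (`FX`, `FY`, `CX`, `CY`) are illustrative stand-ins, not the authors’ calibration values:

```python
import numpy as np

# Hypothetical Kinect-like intrinsics; real values come from
# device calibration and are only illustrative here.
FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0

def colored_point_cloud(depth_m, rgb):
    """Back-project a depth image (in metres) through a pinhole
    camera model and attach the registered color image, giving
    an (N, 6) array of x, y, z, r, g, b for each valid pixel."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    valid = z > 0  # the Kinect reports 0 where depth is unknown
    xyz = np.stack([x[valid], y[valid], z[valid]], axis=1)
    return np.hstack([xyz, rgb[valid].astype(float)])
```

Fusing many such clouds from a moving camera into one consistent textured mesh is the hard part, and that is what Kintinuous adds over a single-frame conversion like this.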


  1. warspigot says:

    It’s spelled “Kintinuous”; you left out one u. That might make it easier to find.

    • Fixed. It’s a very hard-to-read name.

      • Matt says:

        … It’s “Continuous” but with “Kin” instead of “Con”. Seems pretty simple to me. I have a friend who is bad at spelling, but he doesn’t blame the words :-)

        Cool project. I’ll have to try some of the newer stuff like this with my Kinect… I wonder how far this can be pushed (e.g., what maximum real-world area can be captured).

      • charles says:

        @Matt. Different people have different strengths, thought patterns, and so on. Engineers and electricians are not well known for enjoying puns and crossword puzzles.

        My friend is a graphic designer and he DOES blame the words of an ad campaign or name if the intended audience doesn’t ‘get it’.

      • Matt says:

        I’m pretty sure the intended audience here is the CS research community.

        OTOH, HackADay is not exactly known for the spelling and grammatical prowess of it’s editors (quite the opposite, in fact). I think it’s very fair for me to poke fun at spelling mistakes that occur because the editor didn’t take the time to double check their work.

      • AP² says:

        OTOH, HackADay is not exactly known for the spelling and grammatical prowess of it’s editors

        Oh, the irony!

      • Matt says:


        Oops. You got me!
        I still make that mistake once in a while, though I’m rather surprised I made it here. It normally only pops up in my text messages.

        (For those that don’t understand: http://theoatmeal.com/comics/apostrophe)

        I’m pretty sure there’s an exceptionally wide gap between the majority of the grammatical errors on this site and a slip-up of “it’s” vs. “its”, but yup, you got me… :-)

  2. elmusa says:

    Genius to make CS maps

  3. Homelypoet says:

    Better than The Dark Knight’s solely sonar based “bat-vision”.

    Now to pack it in a cellular phone…

  4. pall.e says:

    This is cool; what is even cooler is how quickly this has come about. I remember seeing my first 3D scanner here on Hackaday in 2007.
    To go from that to this in under five years is pretty amazing.

    • Jarel says:

      Yeah, except Kintinuous doesn’t replace those types of scanners for individual objects.

      • anon says:

        Why not? Just put the object on a turntable and move the Kinect around it. You’ll be able to fill in far more detail than a laser scanner could in the same amount of time, and get color too.

      • Jarel says:

        I beg to differ. Why would a laser be less precise than the Kinect? My experience with the Kinect is that it’s too “noisy” to be trusted with small objects. I think this is because of the lens used on the Kinect to create the infrared dots. Too close or too far, and I get janky weirdness.

        My preferred method for 3D scanning (if you’re going to be cheap about it) is using OpenCV’s Canny edge-detection, a good webcam, a laser, and a turntable. Of course, you’re going to want to use the videoInput library for handling the webcam and crank that resolution up.

        Better still, if you’re going to automate it, is to sync the turntable’s rotation with the frame-rate of the camera so that you don’t get any blurry captures.

        If you can get a thin enough laser line (I’m using a nice corrugated lens I got off eBay), and a nice enough webcam resolution you might be surprised at how good the results are.

        Again, just in my experience with my little experimentation, I’ve had better results than the Kinect.

      • Jarel says:

        for small objects, I mean.
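
The laser-line-and-turntable recipe in the thread above boils down to simple triangulation: find the laser line in each frame, convert its pixel offset to depth, and rotate the resulting points by the turntable angle. A minimal numpy sketch of one scan step – with a brightest-pixel search standing in for the Canny extraction, and purely illustrative camera numbers (`f`, `baseline`, `cx`, `cy`):

```python
import numpy as np

def laser_profile(frame):
    # Brightest pixel per row: a crude stand-in for the
    # Canny-based line extraction described in the comment.
    return frame.argmax(axis=1).astype(float)

def scan_step(frame, angle_deg, f=800.0, baseline=0.2, cx=320.0, cy=240.0):
    """One turntable step: detect the laser line in `frame` and
    triangulate it into 3D points in the object's frame.
    All camera numbers here are hypothetical, not calibrated."""
    cols = laser_profile(frame)
    rows = np.arange(frame.shape[0], dtype=float)
    # Pinhole triangulation: a laser plane offset `baseline` metres
    # from the lens appears displaced from the image centre by an
    # amount inversely proportional to depth.
    z = f * baseline / np.maximum(cols - cx, 1e-6)
    y = (rows - cy) * z / f
    x = np.zeros_like(z)  # points lie in the laser plane (simplified)
    pts = np.stack([x, y, z], axis=1)
    a = np.radians(angle_deg)  # undo the turntable rotation
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return pts @ rot.T
```

Accumulating the profiles from a full revolution gives the object’s point cloud; a real rig would calibrate `f` and `baseline` rather than assume them, and would threshold or filter the frame before the line search.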

  5. crjeea says:

    If robots have memories, I bet this is what they would look like.
    I saw a project a while back that altered the resolution of images depending on how prominent they were. If that technique were applied to this mapping software to save data in uninteresting areas, it might be possible (with a few GPUs to do the number crunching) to implement this. I have been working on a processor design for a while now which maps multiple variables over one another and uses the correlations to predict future events. This would be a great data format to store my arrays in (:
    Thanks for posting this HaD (:

  6. Doktor Jeep says:

    Mindboggling awesome stuff.

  7. Chris C. says:

    Absolutely mind-blowing! I can’t believe how well that works, and in real-time too. Hope the creators will release this in some open-source fashion soon.

  8. joe says:

    This has already been added recently to KinFu, part of the pointclouds project. It’s already available at pointclouds.org

    Plus, it’s all open source and public.

  9. The paper has an email address. I’d bet that would be a great starting point for getting your hands on the implementation.

  10. Mrthekod says:

    That’s a butt load of vertexseses! Brightly coloured, too! The big jump will be when someone teaches a computer to identify what it’s seeing. At that point, I will be first in line for pledging allegiance to our robot overlords.

  11. rasz says:

    Looks good zoomed out, but there are huge gaps in the geometry when zoomed in; they should integrate more frames,
    or at least implement some geometry interpolation.
    Other than that, it’s sexy.

    I can imagine a product where you would bundle a Kinect with some small USB router board. Put it in a slick 3D-printed enclosure and make it just record both video streams to a USB hard drive/pendrive.
    Then process the recording “in the cloud” and produce a model ready to be embedded on a page. Instant hit for Realtors?

  12. ixbidie says:

    Just asked. They haven’t released their source code yet, because there is still much work to do.

    Anyway, he assured me that they are “definitely planning to get it out to the public in the future.”

  13. Random Man says:

    We use 3D scanners in large construction to scan existing places that we are coordinating using building information modeling (BIM). They are incredibly expensive to buy, and almost $10K a day to rent. I was thinking of making something like this as a cheaper alternative. This image is great compared to some of the point clouds that come out of the professional equipment. Professional equipment spits out amazing detail, but it’s still a point cloud, not a mesh. They will even pick up spray paint sprayed on a board.
    The smooth mesh this creates would be awesome for coordination with programs like Revit and/or Navisworks.

    • Matt says:

      I wonder how difficult it would be to pre-populate the 3D data with a construction laser scanner and then use the Kinect to just map the color data to it.

      • Random Man says:

        The existing conditions (duct/conduit/structure, etc.) are more important than the colors. You can add the colors in Navisworks.

      • Random Man says:

        I do 3D models of buildings for electrical systems, so I don’t care about the colors as much as the smoothness of the vector images. Normally I’m trying to miss an existing piece of equipment with my new systems to ensure smooth installations and proofs of concept.

      • Matt says:

        I didn’t say one would do this for use in Revit. ;)
        I just thought it would be interesting to combine a 3D laser rangefinder (for precision and speed of acquisition) with something like the Kinect for building accurate, color 3D models.

    • A different Matt says:

      I work at a construction management company and have been investigating using the Kinect as a laser scanner. We have had very good results modeling objects with it (20x30x20 ft) and are currently trying to do exactly this: scan entire rooms/buildings for as-builts that aren’t on the original plans (but should be).

      If we didn’t have to stitch the scans together like this is showing, it would be an invaluable tool!

  14. Darren says:

    This looks like it’d be wonderful for mapping during urban exploration, or spelunking in wilder caves.

  15. Mojo says:

    Google should update their streetview car to quadcopter mounted kinect swarms.

  16. kevin mcguigan says:

    Oh the humanity!

  17. charles says:

    There is a similar but modular version of Kintinuous called ‘KinectFusion Large Scale’ on the PCL website: http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php

  18. FAWAD PMI says:

    Can I get the details of this paper? I would be grateful if you could attach some details about this project.

  19. Woof2255 says:

    Can you combine two or more “clips” of the same image to obtain more angles and improve resolution?
