3D mapping of huge areas with a Kinect

The picture you see above isn’t a doll house, noclipped video game, or any other artificially created virtual environment. That bathroom exists in real life, but it was digitized into a 3D object with a Kinect and Kintinuous, an awesome piece of software that allows for the creation of huge 3D environments in real time.

Kintinuous is an extension of the Kinect Fusion and ReconstructMe projects. Where Fusion and ReconstructMe were limited to mapping small areas in 3D (a tabletop, for example), Kintinuous allows a Kinect to be moved from room to room, mapping an entire environment in 3D.

The paper for Kintinuous is available, going over how the authors are able to capture point cloud data and overlay the color video to create textured 3D meshes. After the break are two videos showing off what Kintinuous can do. It’s jaw-dropping, and the implications are amazing. We can’t find the binaries or source for Kintinuous, but if anyone finds a link, drop us a line and we’ll update this post.
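Conceptually, systems like Kintinuous grow a global map by estimating the camera pose for each incoming depth frame and merging that frame’s points into world coordinates. A minimal NumPy sketch of the accumulation step, assuming the pose has already been estimated by some tracking front end (the hard part the paper actually covers):

```python
import numpy as np

def accumulate(global_cloud, frame_points, pose):
    """Merge one depth frame's points into the global map.

    frame_points: (N, 3) array in camera coordinates.
    pose: 4x4 camera-to-world transform. Here it is assumed to be
    already known; real systems estimate it by aligning each new
    frame against the existing model.
    """
    # Lift points to homogeneous coordinates, apply the pose,
    # and drop back to (N, 3) world coordinates.
    homogeneous = np.hstack([frame_points, np.ones((len(frame_points), 1))])
    world = (pose @ homogeneous.T).T[:, :3]
    if len(global_cloud) == 0:
        return world
    return np.vstack([global_cloud, world])
```

This only shows the bookkeeping; the novelty in Kintinuous is keeping this tractable over room-sized spans rather than a single fixed volume.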

41 thoughts on “3D mapping of huge areas with a Kinect”

      1. … It’s “Continuous” but with “Kin” instead of “Con”. Seems pretty simple to me. I have a friend who is bad at spelling, but he doesn’t blame the words :-)

        Cool project. I’ll have to try some of the newer stuff like this with my Kinect… I wonder how far this can be pushed (e.g., what maximum real-world area can be captured).

      2. @Matt. Different people have different strengths, thought patterns and so on. Engineers and electricians are not exactly known for enjoying puns and crossword puzzles.

        My friend is a graphic designer and he DOES blame the words of an ad campaign or name if the intended audience doesn’t ‘get it’.

      3. @charles:
        I’m pretty sure the intended audience here is the CS research community.

        OTOH, HackADay is not exactly known for the spelling and grammatical prowess of it’s editors (quite the opposite, in fact). I think it’s very fair for me to poke fun at spelling mistakes that occur because the editor didn’t take the time to double check their work.

      4. OTOH, HackADay is not exactly known for the spelling and grammatical prowess of it’s editors

        Oh, the irony!

      5. @AP²

        Oops. You got me!
        I still make that mistake once in a while, though I’m rather surprised I made it here. It normally only pops up in my text messages.

        (For those that don’t understand: http://theoatmeal.com/comics/apostrophe)

        I’m pretty sure there’s an exceptionally wide gap between the majority of the grammatical errors on this site and a slip-up of “it’s” vs. “its”, but yup, you got me… :-)

      1. That’s exactly what I thought about when I saw that. Would be kind of hot playing CS on your UC campus :D

      2. Yeah, that ruined a lot. At one point it was so much fun to recreate local structures with custom textures and all, once digital cameras became affordable. It isn’t well thought of anymore.

        So much for the land of the brave and home of the free.

  1. Better than The Dark Knight’s solely sonar-based “bat-vision”.

    Now to pack it in a cellular phone…

      1. Why not? Just put the object on a turntable and move the Kinect around it. You’ll be able to fill in far more detail than a laser scanner could in the same amount of time, and get color too.

      2. I beg to differ. Why would a laser be less precise than the Kinect? My experience with the Kinect is that it’s too “noisy” to be trusted with small objects. I think this is because of the lens used on the Kinect to create the infrared dots. Too close or too far, and I get janky weirdness.

        My preferred method for 3D scanning (if you’re going to be cheap about it) is using OpenCV’s Canny edge-detection, a good webcam, a laser, and a turntable. Of course, you’re going to want to use the videoInput library for handling the webcam and crank that resolution up.

        Better still, if you’re going to automate it, is to sync the turntable’s rotation with the frame-rate of the camera so that you don’t get any blurry captures.

        If you can get a thin enough laser line (I’m using a nice corrugated lens I got off eBay), and a nice enough webcam resolution you might be surprised at how good the results are.

        Again, just in my experience with my little experiments, I’ve had better results than with the Kinect.
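        The turntable method described above boils down to two steps per frame: find the laser line’s position in the image, then triangulate a depth from it and rotate the result by the turntable angle. A hedged NumPy sketch of those two steps (the commenter uses Canny edge detection; this simpler version just takes the brightest red pixel per row, and the baseline/focal values are placeholders, not a calibrated rig):

```python
import numpy as np

def laser_profile(frame, threshold=200):
    """Find the laser line's column in each image row.

    Assumes a red line laser on a BGR frame (OpenCV convention).
    Returns, per row, the column of the brightest red pixel above
    `threshold`, or -1 if the line wasn't found in that row.
    """
    red = frame[:, :, 2].astype(np.float32)
    cols = np.argmax(red, axis=1)
    cols[red[np.arange(red.shape[0]), cols] < threshold] = -1
    return cols

def triangulate(cols, angle_deg, cam_laser_baseline=0.1,
                focal_px=800.0, cx=5.0):
    """Turn line positions into 3D points for one turntable angle.

    Toy pinhole triangulation: depth from the line's lateral offset,
    then rotation into the turntable's frame. The baseline, focal
    length, and principal point are made-up placeholders; a real rig
    needs calibration.
    """
    pts = []
    for row, col in enumerate(cols):
        if col < 0:
            continue  # laser not visible in this row
        disparity = abs(col - cx) + 1e-6
        z = cam_laser_baseline * focal_px / disparity
        x, y = z * (col - cx) / focal_px, row
        a = np.radians(angle_deg)
        # Rotate about the vertical axis by the turntable angle.
        pts.append((x * np.cos(a) - z * np.sin(a),
                    y,
                    x * np.sin(a) + z * np.cos(a)))
    return pts
```

        Syncing the turntable to the frame rate, as suggested, then just means calling these once per step with `angle_deg` advanced by a fixed increment.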

  2. If robots have memories, I bet this is what they would look like.
    I saw a project a while back that altered the resolution of images depending upon how prominent they were. If that technique were applied to this mapping software to save data in uninteresting areas, it might be possible (with a few GPUs to do the number crunching) to implement this. I have been working on a processor design for a while now which maps multiple variables over one another and uses the correlations to predict future events. This would be a great data format to store my arrays in (:
    Thanks for posting this HaD (:

  3. Absolutely mind-blowing! I can’t believe how well that works, and in real-time too. Hope the creators will release this in some open-source fashion soon.

  4. This has already been added recently to KinFu, part of the Point Cloud Library (PCL) project. It’s already available at pointclouds.org.

    Plus, it’s all open source and public.

  5. That’s a butt load of vertexseses! Brightly coloured too! The big jump will be when someone teaches a computer to identify what it’s seeing. At that point, I will be first in line for pledging allegiance to our robot overlords.

  6. Looks good zoomed out, but there are huge gaps in the geometry when zoomed in. They should integrate

    http://www.robots.ox.ac.uk/~gk/PTAM/

    or at least implement some geometry interpolation. Other than that, it’s sexy.

    I can imagine a product where you would bundle a Kinect with some small USB router board. Put it in a slick 3D-printed enclosure, and make it just record both video streams to a USB hard drive/pendrive.
    Then compute the recording “in the cloud” and produce a model ready to be embedded on a page. Instant hit for Realtors?

  7. Just asked. They haven’t released their source code yet, because there is still much work to do.

    Anyway, he assured me that they are “definitely planning to get it out to the public in the future.”

  8. We use 3D scanners in large construction to scan existing places that we are coordinating using BIM (building information management). They are incredibly expensive to buy, and almost $10K a day to rent. I was thinking of making something like this as a cheaper alternative. This image is great compared to some of the point clouds that come out of the professional equipment. Professional equipment spits out amazing detail, but it’s still a point cloud, not a mesh. They will even pick up spray paint sprayed on a board.
    The smooth image this creates would be awesome for coordination with programs like Revit and/or Navisworks.

    1. I wonder how difficult it would be to pre-populate the 3d data with a construction laser scanner and then use the kinect to just map the color data to it.

      1. The existing conditions (duct/conduit/structure, etc.) are more important than the colors. You can add the colors in Navisworks.

      2. I do 3D models of buildings for electrical systems, so I don’t care as much about the colors as the smoothness of the vector images. Normally I’m trying to miss an existing piece of equipment with my new systems to ensure smooth installations and proofs of concept.

      3. I didn’t say one would do this for use in Revit. ;)
        I just thought it would be interesting to combine a 3D laser rangefinder (for precision and speed of acquisition) with something like the Kinect for building accurate, color 3D models.
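        Mapping Kinect color onto rangefinder geometry, as suggested here, is essentially a projection problem: transform each 3D point into the color camera’s frame and sample the pixel it lands on. A minimal sketch, where the intrinsics matrix `K` is a placeholder (a real Kinect pipeline also needs the depth-to-color extrinsics from calibration):

```python
import numpy as np

def colorize(points, image, K):
    """Sample a color per 3D point by projecting into an RGB image.

    points: (N, 3) array, assumed already in the color camera's frame.
    K: 3x3 camera intrinsics (placeholder values in the test below).
    Points that project outside the image stay black.
    """
    colors = np.zeros((len(points), 3), dtype=np.uint8)
    # Pinhole projection: pixel = K * point, then divide by depth.
    uv = (K @ points.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    h, w = image.shape[:2]
    for i, (u, v) in enumerate(uv):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h:
            colors[i] = image[vi, ui]
    return colors
```

        The precision geometry would then come entirely from the laser scan, with the Kinect contributing only the texture.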

    2. I work at a Construction Management company and have been investigating using the Kinect as a laser scanner. We have had very good results modeling objects with it (20x30x20 ft) and are currently trying to do exactly this: scan entire rooms/buildings for as-builts that aren’t on the original plans (but should be).

      If we didn’t have to stitch the scans together, like this is showing, it would be an invaluable tool!

    1. Cool idea.
      I don’t do urbex as much as I used to, but I’d love to see if this can map a large enough area to handle a whole factory floor or something similar.

      1. That’s what I want to know. But I’m sure you can stitch multiple scans together, worst case. That’s the scale I need.
