3-Sweep: Turning 2D images into 3D models

As 3D printing continues to grow, people are developing more and more ways to create 3D models. From hardware-based scanners like the Microsoft Kinect to software-based tools like 123D Catch, there are plenty of ways to build a 3D model from a series of images. But what if you could make a 3D model out of a single image? Sound crazy? Maybe not. A team of researchers has created 3-Sweep, an interactive technique for turning objects in 2D images into 3D models that can be manipulated.

To be clear, recognizing 3D components within a single image is still out of reach for computer algorithms alone. But by combining the cognitive abilities of a person with the computational accuracy of a computer, the researchers have built a very simple tool for extracting 3D models. The user outlines the shape much as one might model it in a CAD package; once the outline is complete, the algorithm takes over and creates the model.
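The core idea behind tools like this is to model each outlined part as a cross-section swept along an axis. As a rough illustration of the simplest such case (a surface of revolution, with a hypothetical hard-coded profile standing in for the radii a user's strokes would recover from the image), a minimal sketch might look like:

```python
import math

def revolve_profile(profile, segments=16):
    """Revolve a 2D profile, a list of (radius, height) pairs, around the
    z-axis to produce a triangle mesh as (vertices, faces).

    This mimics the simplest sweep: a circular cross-section whose radius
    varies along the sweep axis.
    """
    verts = []
    for r, z in profile:
        # Place one ring of points at this height.
        for s in range(segments):
            theta = 2 * math.pi * s / segments
            verts.append((r * math.cos(theta), r * math.sin(theta), z))

    faces = []
    # Stitch adjacent rings together with two triangles per quad.
    for ring in range(len(profile) - 1):
        for s in range(segments):
            a = ring * segments + s
            b = ring * segments + (s + 1) % segments
            c = a + segments
            d = b + segments
            faces.append((a, b, d))
            faces.append((a, d, c))
    return verts, faces

# Hypothetical vase-like profile, as might be traced from a silhouette.
profile = [(1.0, 0.0), (1.4, 1.0), (0.8, 2.0), (1.1, 3.0)]
verts, faces = revolve_profile(profile, segments=24)
```

The real system handles far more than this, of course: non-circular cross-sections, curved sweep axes, and snapping the sweep to edges detected in the photograph.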

The software debuted at SIGGRAPH Asia 2013 and has caused quite a stir on the internet. Watch the fascinating video demonstrating the process after the break!

[via Reddit]

34 thoughts on “3-Sweep: Turning 2D images into 3D models”

  1. BKP Horn outlined the techniques for “shape from shading” in the seminal book “Robot Vision” published in 1986.

    Cognex Corp. in the mid-1980s had systems (PDP-11 based!) that were recovering object shapes from single 2-D images.

    1. Yet the tech still isn’t on the market. That suggests the people holding it don’t care about it.

      Refining algorithms for this would literally revolutionize engineering and modeling.

    1. There’s a big difference between a siggraph demo and a “product” usable by end users.

      Most SIGGRAPH demo projects turn out to be coding nightmares and are abandoned in favor of purpose-written software.

      Trust me, you don’t want to run experimental software that has been mightily hacked to make the demo look good.

    1. There never has been reason to trust photography; more often than not it is a tool used for representation, not copying.
      Photographers have been manipulating images since the medium was invented.

  2. I think I’m most impressed by the “ways in which our algorithm can fail” part. Very cool all the way around. It’s one thing to talk about techniques 30 years ago, and another thing altogether to actually make them work.

    1. Yeah, because these people just faked a fucking SIGGRAPH presentation. You know, SIGGRAPH, the biggest annual symposium on computer graphics that’s been going on for well over three decades at this point? Are you *high*?

  3. Looks like Canoma (a long-dead, great piece of software captured and buried by Adobe) on steroids.
    Would love to see the capabilities of both combined…
    Let’s hope they decide to release some code…

  4. Isn’t (or wasn’t?) there a similar function in Photoshop, but without the edge detection and automatic content-aware fill: you draw a basic shape around an object and can then modify it in 3D?
    This is a nice eye-catcher though, and I wouldn’t be surprised if it is included in a coming PSCC update.

  5. That software seems very capably written; it reminds me of SketchUp, but with an invisible robot friend assisting you and automatic texture grabbing.
    I am impressed. Kudos to the programmer.

    Oh, and I’m also impressed that the video includes failure scenarios. Where do you find such honesty anymore, huh?
