As 3D printing continues to grow, people are developing more and more ways to get 3D models. From hardware-based scanners like the Microsoft Kinect to software-based tools like 123D Catch, there are a lot of ways to create a 3D model from a series of images. But what if you could make a 3D model out of a single image? Sound crazy? Maybe not. A team of researchers has created 3-Sweep, an interactive technique for turning objects in 2D images into 3D models that can be manipulated.
To be clear, the recognition of 3D components within a single image is a bit out of reach for computer algorithms alone. But by combining the cognitive abilities of a person with the computational accuracy of a computer, the researchers have been able to create a very simple tool for extracting 3D models. This is done by outlining the shape, similar to how one might model in a CAD package; once the outline is complete, the algorithm takes over and creates a model.
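To give a rough sense of what the algorithm has to compute once the outline is drawn, the core geometric step in this kind of tool is sweeping a 2D cross-section along the user's axis stroke to form a generalized cylinder. The sketch below illustrates that idea in Python/NumPy under the simplifying assumption of a circular profile; the function name and parameters are invented for illustration and are not taken from the 3-Sweep code.

```python
# Minimal sketch of the "sweep" idea: a 2D cross-section is swept along a
# user-drawn axis curve to produce the rings of a 3D tube-like mesh.
# Illustration only, not the 3-Sweep authors' code; all names are made up.
import numpy as np

def sweep_profile(axis_points, radii, segments=32):
    """Generate ring vertices of a swept (generalized cylinder) surface.

    axis_points : (N, 3) points along the user-drawn axis stroke.
    radii       : (N,)  profile radius at each axis point (in 3-Sweep these
                  would be derived from the two profile strokes).
    """
    axis_points = np.asarray(axis_points, dtype=float)
    radii = np.asarray(radii, dtype=float)
    vertices = []
    for i, (p, r) in enumerate(zip(axis_points, radii)):
        # Tangent of the axis at this point (forward/backward difference).
        if i < len(axis_points) - 1:
            t = axis_points[i + 1] - p
        else:
            t = p - axis_points[i - 1]
        t = t / np.linalg.norm(t)
        # Build an orthonormal frame (u, v) perpendicular to the tangent.
        helper = np.array([0.0, 0.0, 1.0]) if abs(t[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
        u = np.cross(t, helper)
        u /= np.linalg.norm(u)
        v = np.cross(t, u)
        # Place a circular cross-section of radius r in that frame.
        for k in range(segments):
            a = 2.0 * np.pi * k / segments
            vertices.append(p + r * (np.cos(a) * u + np.sin(a) * v))
    return np.array(vertices)

# Example: a slightly tapered cylinder along the z-axis.
axis = np.stack([np.zeros(20), np.zeros(20), np.linspace(0.0, 1.0, 20)], axis=1)
radii = np.linspace(0.5, 0.3, 20)
verts = sweep_profile(axis, radii)
print(verts.shape)  # (20 * 32, 3) ring vertices, ready to be triangulated into a mesh
```

In the real system the hard part is, of course, fitting the axis and profile to the image edges the user traces; the sweep itself is the easy geometry at the end of that pipeline.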
The software debuted at SIGGRAPH Asia 2013 and has caused quite a stir on the internet. Watch the fascinating video demonstrating the process after the break!
[via Reddit]
B.K.P. Horn outlined the techniques for “shape from shading” in his seminal book “Robot Vision,” published in 1986.
Cognex Corp. had systems in the mid-1980s (PDP-11 based!) that were recovering object shapes from single 2-D images.
Yet the tech still isn’t on the market. This means the people holding it don’t care about it.
Refining algorithms for this would literally revolutionize engineering and modeling.
What an amazing hack
Speechless.
Just use SketchUp… easy to use.
doing CAD in SketchUp is like riding a hobo.
Yeah sketchup is shit, but then CATIA ain’t free.
i….thi…b…damn…that’s awesome!
soo…where can i get this software?
There’s a big difference between a siggraph demo and a “product” usable by end users.
Most SIGGRAPH demo projects turn out to be coding nightmares and are abandoned in favor of purpose-written software.
Trust me, you don’t want to run experimental software that’s been mightily hacked to make the demo look good.
First, left hand thumb up, then the second one comes up!
DOUBLE THUMBS UP!
Watched the vid skipping through, but confused how it generates what was behind the object when it’s moved >.>
One of the many typical methods for background inpainting.
implied symmetry, and the narrator said “copyfill”
Ever heard of the “clone brush” in GIMP?
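For anyone wondering about the “copyfill” mentioned above: filling in the background behind a removed object is standard image inpainting, where the object’s footprint is masked out and plausible background is synthesized from the surrounding pixels. Below is a minimal sketch using OpenCV’s stock inpainting; this only illustrates the general technique, not necessarily the method the 3-Sweep authors use, and the file names are placeholders.

```python
# Rough illustration of background fill-in after an object is lifted out of a
# photo: mask the object's footprint and let an inpainting algorithm
# synthesize the missing background from its surroundings.
# Uses OpenCV's built-in inpainting; "photo.jpg" and "object_mask.png" are
# placeholder file names, not files from the 3-Sweep project.
import cv2

image = cv2.imread("photo.jpg")                              # original photograph
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)   # white = removed object

# Telea's fast-marching inpainting; the radius (here 5 px) is the neighborhood
# considered when filling each missing pixel.
filled = cv2.inpaint(image, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("background_filled.jpg", filled)
```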
Now that is bad ass!
Wow! Now watch a product from Autodesk come out that uses it. :P
Well that’s kinda amazing. There’s now no reason whatsoever to trust a photograph!
Had that feeling 5 years ago with video:
https://www.youtube.com/watch?v=FuTZZfS5LZg
There never has been reason to trust photography; more often than not it is a tool used for representation, not copying.
Photographers have been manipulating images since photography was first invented.
Amazing, can’t wait for the software to be released!
I think I’m most impressed by the “ways in which our algorithm can fail” part. Very cool all the way around. It’s one thing to talk about techniques 30 years ago, and another thing altogether to actually make them work.
Yeah I was going to say the same thing. Refreshing. The whole thing was really well put together.
Software link or it didn’t exist.
Yeah, because these people just faked a fucking SIGGRAPH presentation. You know, SIGGRAPH, the biggest annual symposium on computer graphics that’s been going on for well over three decades at this point? Are you *high*?
+1
*Level up*
“Yeah, because these people just faked a fucking SIGGRAPH presentation”
SIGGRAPH is all a CG fake! Incredible!
Have a look at http://www.agisoft.ru which doesn’t send all your data to the company unless you [s]bribe[/s] pay them to let you keep it private.
What? It doesn’t send your data anywhere
Looks like Canoma (long-dead great software, captured and buried by Adobe) on steroids.
Would love to see capabilities of both combined…
Let’s hope they decide to release some code…
Isn’t (or wasn’t?) there a similar function in Photoshop, but without the edge detection and automatic content-aware fill? You draw a basic shape around an object and can modify it in 3D.
This is a nice eye-catcher, though, and I wouldn’t be surprised if it were included in a coming PSCC update.
That software seems very capably written; it does indeed remind me of SketchUp, but with an invisible robot friend assisting you and automatic texture grabbing.
I am impressed. Kudos to the programmer.
Oh, and I’m also impressed that the video includes failure scenarios. Where do you find such honesty anymore, huh?
Think I’ve done my share of it using Archipelis :p