Robots can easily make their way across a factory floor; with painted guide lines, a factory is an ideal environment for a robot to navigate. A much more difficult test of computer vision lies in your living room. Finding a way around a coffee table without knocking over a lamp presents a huge challenge for any autonomous robot. Researchers at the Royal Institute of Technology in Sweden are working on this problem, but they need your help.
[Alper Aydemir], [Rasmus Göransson] and Prof. [Patric Jensfelt] at the Centre for Autonomous Systems in Stockholm created Kinect@Home. The idea is simple: by modeling hundreds of living rooms in 3D, the computer vision and robotics researchers will have a fantastic library to train their algorithms.
To help out the Kinect@Home team, all that is needed is a Kinect, just like the one lying disused in your cupboard. After signing up on the Kinect@Home site, you’re able to create a 3D model of your living room, den, or office right in your browser. This 3D model is then added to the Kinect@Home library for CV researchers around the world.
So this is how Google Street View finally gets into our homes!
Seriously though, this is an excellent piece of work, especially given that it is browser-based. Now where did I put that Kinect?
The Kinect is just one way to get 3D models, so why limit it to just this? You can capture 3D with a single webcam:
http://www.robots.ox.ac.uk/~gk/PTAM/
123D Catch
Vi3Dim
Does PTAM output 3D objects?
Because the Kinect approach usually works; I’ve seen the other methods fail so far. But there are a lot of these projects out there.
As the owner of a 3D printer, I’m very interested in all these solutions, as it would be pretty cool to scan and print small objects. But so far only the Kinect has produced good results, and it is limited to “human-sized” objects.
PTAM does not output dense meshes; AFAIK it does extremely good camera localization and keypoint tracking. DTAM is cool and does a very good job, but is impractical for large workspaces. Purely 2D image-based reconstruction from a single camera still has a long way to go before it can match RGB+depth (aka Kinect/Xtion) reconstructions.
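For anyone curious why the depth sensor makes such a difference: each Kinect frame can be back-projected directly into a metric point cloud with simple pinhole-camera math, with no feature matching needed. A minimal NumPy sketch, assuming typical published Kinect depth-camera intrinsics (not values from Kinect@Home):

```python
import numpy as np

# Assumed pinhole intrinsics for a Kinect-class depth camera (pixels):
# focal lengths and principal point. These are illustrative, not official.
FX, FY = 525.0, 525.0
CX, CY = 319.5, 239.5

def depth_to_points(depth):
    """Back-project a depth image (meters, shape H x W) into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - CX) * z / FX   # pinhole model: X = (u - cx) * Z / fx
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a flat "wall" 2 m away filling a 640x480 frame
cloud = depth_to_points(np.full((480, 640), 2.0))
print(cloud.shape)  # one 3D point per valid pixel
```

Reconstruction systems then align successive clouds (e.g. with ICP) and fuse them into one model; the hard part a single webcam faces, recovering depth at all, is simply handed to you by the sensor.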
http://www.youtube.com/watch?v=Df9WhgibCQA