We know that the arrival of the Kinect 3D camera hardware, and the open source driver hacking conquest that followed, was a game-changer that brought the real world into much closer contact with the virtual one. But it still amazes us when we see a concept like this turntable-based 3D object scanner work so incredibly well.
The concept is extremely simple. A box made from foam board rests atop a turntable. At its center sits the object you wish to scan, well lit by a small LED light source at each upper corner of the box. Fire up some code and capture data about the sides and top of the object as it spins. To put the shoe back together in the virtual world, he used a modified version of RGBDemo v0.6.0, a Kinect-focused project written by Nicolas Burrus.
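If you're curious what the capture side amounts to, it can be as simple as polling depth frames while the table turns. Here's a minimal sketch using the libfreenect Python bindings rather than RGBDemo's own capture path; the frame count and delay are illustrative guesses, not [A.J]'s actual settings:

    import time
    import numpy as np
    import freenect  # libfreenect Python bindings

    NUM_FRAMES = 60   # assumed: roughly one frame per 6 degrees of rotation
    DELAY_S = 0.5     # assumed: tuned to the turntable's speed

    frames = []
    for i in range(NUM_FRAMES):
        # sync_get_depth() returns (depth, timestamp); depth is a 640x480
        # array of raw 11-bit Kinect depth values
        depth, _ = freenect.sync_get_depth()
        frames.append(np.array(depth))
        time.sleep(DELAY_S)

    np.save('turntable_depth_frames.npy', np.stack(frames))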
[A.J] says that the scan comes out pretty well after just one pass, but that’s not stopping him from setting his sights on making this work with three or four Kinects at once. Don’t forget to check out his video demonstration which is embedded after the break.
[youtube=http://www.youtube.com/watch?v=V7LthXRoESw&w=470]
This looks really good to me!
I think that this technology will, in the future, revolutionise game creation. If models of that sort of quality can be created in just a few minutes then – just wow.
Wow, this is epic.
Very impressive. I hope he releases the changes he made to the code.
I will have to try this if he releases the code.
Imagine if the box were 3D printed. It would “just” fit into the build area of your 3D printer, and it would provide 3D models.
Or wait… unlimited low-poly rubble items for the gaming industry. They could put actual rubble in the box and just scan it instead of modeling it virtually.
Thanks for the niceness all.
The real greatness is in the code, which I did not write. It’s available for free here, compliments of Nicolas Burrus:
http://nicolas.burrus.name/index.php/Research/KinectRgbDemoV6?from=Research.KinectRgbDemoV3
The modifications I’d mentioned were to do with the 3D point-grabbing frequency; however, the software works perfectly well as it comes, just as you see in the video.
Go get it! :D
BTW, RGBDemo does the point cloud accumulation using OpenNI to access the Kinect. MeshLab is used for the stitching and clean-up later on.
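For anyone wondering what “point cloud accumulation” amounts to: each depth frame gets back-projected into 3D and transformed into a common coordinate frame before everything is merged. The sketch below shows the turntable version of that idea, picking up the frames from the capture sketch above and assuming a known rotation per frame. RGBDemo’s actual alignment is more sophisticated, and the focal length and raw-depth conversion here are commonly cited Kinect approximations, not values from this project:

    import numpy as np

    FX = FY = 594.0          # approximate Kinect depth-camera focal length (pixels)
    CX, CY = 320.0, 240.0    # principal point for a 640x480 frame

    def raw_to_metres(raw):
        # Common empirical formula for raw 11-bit Kinect depth;
        # a raw value of 2047 means "no reading"
        m = 0.1236 * np.tan(raw / 2842.5 + 1.1863)
        m[raw >= 2047] = 0.0
        return m

    def depth_to_points(depth_m):
        # Back-project a 640x480 metric depth image into an Nx3 point cloud
        v, u = np.indices(depth_m.shape)
        x = (u - CX) * depth_m / FX
        y = (v - CY) * depth_m / FY
        pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]        # drop invalid zero-depth pixels

    def rotate_y(points, angle_rad):
        # Undo the turntable rotation so all frames share one coordinate frame
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        return points @ R.T

    raw_frames = np.load('turntable_depth_frames.npy')
    cloud = []
    for i, raw in enumerate(raw_frames):
        angle = np.deg2rad(i * 6.0)      # assumed: 6 degrees per frame
        cloud.append(rotate_y(depth_to_points(raw_to_metres(raw)), -angle))
    cloud = np.vstack(cloud)   # export this for MeshLab stitching and clean-up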
Thanks for the write-up HAD!
Looks kinda like a NextEngine scanner, a $3000 or so commercial version. Not sure how the quality levels compare though.
NextEngine scanners are much more accurate and higher resolution, but they are SLOW! We have one at work, and it scans one section at a time (up to 9) with a single moving vertical laser line. It then attempts to stitch them all together (usually very well and with minimal effort).
The great thing about the Kinect’s laser projection grid tech is that it captures an entire scene/side in 1/15th of a second (approximately). I can’t wait until the next version is released, which will no doubt be of higher resolution.
Another interesting “scanning” software solution is 3DSOM. Look it up if you’d like. They have a 14-day free trial. It’s silhouette and pixel comparison based, but not real-time by any means.
I tried 3DSOM a few years ago, but it has a major flaw: because it is silhouette-based, it can’t deal with concave areas.
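That limitation is inherent to any silhouette-based approach: a dent never changes the object’s outline from any angle, so no amount of carving can recover it. A toy 2D visual-hull carving example makes the point (purely illustrative, not 3DSOM’s actual algorithm):

    import numpy as np

    # A 2D "object" with a concavity: a square with a notch cut into one face
    N = 64
    obj = np.zeros((N, N), dtype=bool)
    obj[16:48, 16:48] = True       # solid square
    obj[28:36, 40:48] = False      # concave notch in the right-hand face

    def project(xs, ys, angle_deg):
        # 1D orthographic projection of points onto the view axis, as bin indices
        a = np.deg2rad(angle_deg)
        return np.round(xs * np.cos(a) + ys * np.sin(a)).astype(int) + 2 * N

    carved = np.ones((N, N), dtype=bool)   # start with everything "solid"
    for angle in range(0, 360, 10):
        # Silhouette: which projected bins the real object occupies from this view
        oy, ox = np.nonzero(obj)
        sil = np.zeros(4 * N, dtype=bool)
        sil[project(ox, oy, angle)] = True
        # Carve away any cell that falls outside the silhouette
        cy, cx = np.nonzero(carved)
        outside = ~sil[project(cx, cy, angle)]
        carved[cy[outside], cx[outside]] = False

    # The notch stays filled: its cells project inside the outline from every angle
    print("notch carved out?", not carved[28:36, 40:48].any())   # -> False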
Ah crap…now I have to buy something from Microsoft!
Very nice and thanks for sharing!
KinectFusion was demoed yesterday by Microsoft too. It had 1mm accuracy!
It’s interesting: MS has a system where you just move the Kinect and wave it about, and it accumulates accuracy, resolution, and views from various angles. So it’s like this on steroids without any hassle, and it makes me interested again in getting a Kinect sensor device.
A related link with video: http://techcrunch.com/2011/08/10/video-free-moving-kinect-used-to-map-room-and-objects-in-detailed-3d/
Anybody else thinking:
this + 3D printer = replicator?
Whatnot, they do have a cool thing there. Nicolas’ code does the same thing basically. I just chose not to use it that way for this.
Are you sure it also does an accumulative increase in the resolution of objects? I thought that was sort of new; at least, I never saw it happen in any video I happened to watch (I didn’t see all that many of them, though). Accumulating scene info, yes, but enhancing the same object I hadn’t noticed before.
BTW, I do like your version too; it’s actually available, and not one of the many enticing videos MS puts out which then, so annoyingly, never pan out as you’d hope.
Ah I see what you’re saying. No, from what I’ve seen it just adds point cloud data where it’s missing but doesn’t reevaluate and refine the detail accuracy. That is definitely cool! And definitely secretive and unreleased! :)
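For contrast, here’s the sort of thing that “accumulating accuracy” behaviour implies: instead of keeping the first depth reading per location, every new observation is folded into a weighted running average, so noise shrinks with repeated views. This is a toy per-pixel version, not Microsoft’s (unreleased) implementation, which works in a volumetric grid:

    import numpy as np

    fused = np.zeros((480, 640))    # running depth estimate per pixel
    weight = np.zeros((480, 640))   # number of observations per pixel

    def integrate(depth_frame):
        # Fold one noisy depth frame into the running weighted average
        valid = depth_frame > 0                 # zero = no reading at that pixel
        w_new = weight[valid] + 1.0
        fused[valid] = (fused[valid] * weight[valid] + depth_frame[valid]) / w_new
        weight[valid] = w_new

    # Example: fuse 30 noisy observations of a flat 1-metre scene.
    # Per-pixel noise falls roughly as 1/sqrt(N) -- hole filling only adds
    # points once, while fusion keeps refining them.
    rng = np.random.default_rng(0)
    for _ in range(30):
        integrate(1.0 + 0.01 * rng.standard_normal((480, 640)))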
I can’t wait until Nyko releases the zoom lens for Kinect. It allows 40% closer usage for small spaces, which, to me, means potentially getting 40% closer to objects I’d like to scan! (I think.)
Could be interesting when combined with 3D printing of clothes
http://hackaday.com/2011/06/14/bikinis-of-the-future/
3DSOM does concave areas now via vision-based pattern recognition. :)
So, six months later, anything new to report? One suggestion ripped from experience with optics and the David scan kit illustrations: would it be a Good Idea to substitute numbered target bulls, possibly in a color that your software can render invisible later, for the random pieces of tape in the barrel, and to make the Kinect move rather than the object? That gives you a better idea of where you are at any time.