With the introduction of the Kinect, obtaining a 3D representation of a room or object became much easier than it had been in the past. If you lack the cash for one, however, you have to get creative. Both the techniques and the technology behind 3D scanning are somewhat complicated, though certainly still within reach, as maker [Shikai Chen] shows us (Google Translation).
He wanted to create 3D scanned images, but he didn’t have the resources to purchase a Kinect. Instead, he built his own scanner for about 1/6th the cost. Interestingly enough, the scanner resembles what you might imagine a very early Kinect prototype looked like, though it functions just a little bit differently than Microsoft’s creation. The scanner lacks any sort of IR emitter/camera combo, opting to use a laser and a USB VGA camera instead. While scanning, the laser shines across the target surface, and the reflected light is then picked up by the camera.
So how does this $25 DIY laser scanner measure up? Great, to be honest. Check out the video below to see how well his scanner works, and be sure to take a look through his second writeup (Google Translation) as well for more details on the project.
[via Seeedstudio]
[youtube=http://www.youtube.com/watch?v=jLZ-s9KRzG8&feature=player_embedded&w=470]
you sir, are brilliant.
have you tried it in other environments?
how does it cope with noise or different reflectivity?
I’m the maker of this project.
This scanner works well in any indoor environment. Since an IR filter lens is fitted, ambient light noise can be effectively removed.
Outdoors, this device cannot work well in direct sunlight, as the IR noise is too high.
What's the 3D mapping software used?
Could Blender handle this kind of data?
Probably ‘meshlab’ to “mesh” the “point cloud” data (just a bunch of points, which meshlab assumes to be “nodes” on a “solid” object connected by “edges”). meshlab exports a “solid model” in a variety of formats, several of which Blender recognizes.
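To get scan data into meshlab in the first place, one common route is to dump the points to an ASCII PLY file, which both MeshLab and Blender can open. Here's a minimal sketch (the function name and sample points are made up for illustration):

```python
def write_ply(points, path):
    """Write a list of (x, y, z) points to an ASCII PLY file.
    MeshLab can open this directly and mesh it (e.g. via Poisson
    surface reconstruction), then export in a format Blender reads."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Two example points, one meter or so in front of the scanner:
write_ply([(0.0, 0.0, 1.0), (0.1, 0.0, 1.02)], "scan.ply")
```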
The kinect is *not* a Microsoft invention. Kinect’s technology was developed by PrimeSense http://en.wikipedia.org/wiki/PrimeSense .
Seems to me that this low-cost Kinect alternative has better spatial resolution than the Kinect, with the drawback of higher depth acquisition times.
The depth estimation process seems to be similar to the one used by the Neato XV-11 vacuum cleaner. The white paper describing the process is here:
http://www.robotshop.com/content/PDF/revolds-whitepaper.pdf
If the guy replaces the head-body physical wiring with an RF link, he can then spin the head a full 360° at higher speed and increase the acquisition rate.
This is very interesting. Do you have any idea what could be done to improve the accuracy?
What I have in mind is from a bit different perspective.
Namely, I’m thinking about a device that could map a floor or wall surface to indicate where material should be removed or added to make the surface flat and level.
I.e., what (and where) are the highest and lowest points, what is the average difference between heights, is there tilt in any direction and what is its value, etc.
To improve accuracy, you just need a higher-resolution CCD. In the paper, they use a 752-pixel CCD and get range errors below 3 cm at 6 m. So, if you use an HD camera (with an effectively smaller pixel pitch) you can decrease the depth estimation error.
“Namely, I’m thinking about a device that could map a floor or wall surface to indicate where material should be removed or added to make the surface flat and level.”
Sounds like you want some kind of adaptive sampling: increase the resolution (and thus acquisition time) in high-frequency areas (corners, depth boundaries) while doing the opposite in low-frequency areas (smooth planes at the same depth).
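The adaptive idea above can be sketched in a few lines: do a coarse sweep first, then take extra measurements wherever adjacent depth readings jump. Everything here (names, the threshold value, the `measure` callback) is hypothetical, not part of the actual project:

```python
def refine_scan(samples, measure, threshold=0.05):
    """samples: list of (angle, depth) pairs from a coarse sweep.
    Wherever two adjacent depths differ by more than `threshold`
    (a likely corner or depth discontinuity), take one extra
    measurement at the midpoint angle via measure(angle) -> depth."""
    out = []
    for (a0, d0), (a1, d1) in zip(samples, samples[1:]):
        out.append((a0, d0))
        if abs(d1 - d0) > threshold:
            mid = (a0 + a1) / 2
            out.append((mid, measure(mid)))
    out.append(samples[-1])
    return out
```

In practice you would iterate this until no adjacent pair exceeds the threshold, trading scan time for detail only where the scene needs it.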
Wow, that is… wow.
You should see how I can let smoke escape from leds when I am not paying attention, it’s… not nearly as cool as this.
Excellent project!
I’m confused as to how they get a vertical line from the laser; the other issue is all that IR being flashed about.
For the vertical line, just pull a thin hair from your head and put it in the path of the laser beam. The hair should be horizontal so that the diffracted light spreads into a vertical line.
Then, with the camera CCD positioned perpendicular to the laser line, you just measure the distance in pixels from each point on the laser line to the vertical line crossing the CCD center.
See the whitepaper I cited in a comment above for more details.
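The triangulation described above can be sketched numerically. The baseline, focal length, and offset values below are made-up examples for illustration, not the project's actual parameters:

```python
def depth_from_pixel_offset(pixel_offset, baseline_m=0.1, focal_px=800.0):
    """Classic laser-camera triangulation: the laser and camera are
    separated by baseline_m, and a point's depth is inversely
    proportional to how far its laser dot lands from the image
    center (pixel_offset, in pixels, scaled by the focal length
    in pixels). Parameter values here are illustrative only."""
    if pixel_offset <= 0:
        raise ValueError("pixel offset must be positive")
    return baseline_m * focal_px / pixel_offset

# With these example numbers, a 16-pixel offset puts the point at 5 m;
# a one-pixel change there moves the estimate by roughly 0.3 m, which
# is why a higher-resolution CCD directly improves depth accuracy.
```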
OK, it is with the LENS…..
I had a design for a very low cost high resolution 3D laser scanner, but my one sticking point was getting a vertical line.
You can get a line from a laser by shining it through an acrylic rod or a wine glass stem.
That line will be extremely not straight if you use a wineglass stem, unless of course it is a very good wine glass! :P
I shared it \o/
The Kinect is Microsoft’s product, but NOT its creation. They bought it. Chen’s work is stellar and opens the floodgates for others to get into the game.
Very impressive… A+ hack my friend.
Bob, the laser that was used here has a lens attached that produces a line. The laser was aligned vertically so only a horizontal scan (or sweep) was needed to collect the point cloud data.
I’d like to see the complete process of collecting the point cloud and importing it into MeshLab. I’m also curious if resolution could be increased if a stepper motor is used as opposed to the servo motor.
Now just take a picture and apply as a skin to the model.
I’ll give someone $50 to make this for me.
For $50 they will take your money and send you back a box of wood shavings.
We’ve gotten rid of the shavings to make more room for the 50s.
You can get a line by extracting the rotating scanning mirror and motor from an old laser printer. Or buy a “laser level” which has the optics built in to make a decent line.
Where can I get 3rd party software? ;)
After you leave the first party and have your fun, if you’re still sober, it just magically appears from nowhere!!
;)
Great hack! Point-cloud 3D scanners are always welcome; I’d love to see a multi-scanner setup.
This looks a bit like the DAVID project (also laser and camera).
Great project, respect!
This is a great find… now I finally understand how this form of 3D scanning works. I came up with a unique way of 3D scanning without using any programming software. Before, I didn’t connect the dots between the triangle setup and what I already understood, but now it’s clear. Now it’s just a matter of merging what I’ve created with what I’ve learned.
I’m looking for something cheap like this to lay out the interiors of larger buildings in 3D using SketchUp or something similar. I see that this only has a range of 6 m. What would I have to modify to accurately lay out something like an empty floor of a warehouse? We are talking something in the neighborhood of 75′ by 150′. Would I just need a more powerful laser, and is something with that kind of range even available to a layperson? Could I build more than one of these, set them up at multiple points along the floor, and run them in tandem or something? Help?
Did you manage to do this? I need to do something similar with SketchUp, but I really only need the scanner to operate on a single plane, 2D rather than 3D, then import to SketchUp to create accurate floor plans.
Does anyone know what 3D mapping software is used here?
Could Blender handle this kind of data?