39 Raspberry Pi 3D Scanner

[Richard] just posted an Instructable on his ridiculously cool 39 Pi 3D Scanner! That’s right. 39 individual Raspberry Pies with camera modules.

But why? Well, [Richard] loves 3D printing, Arduinos, Raspberry Pies, and his kids. He wanted to make some 3D models of his kids (because pictures are so last century), so he started looking into 3D scanners. Unfortunately, almost all of the designs he found require the subject to sit still for a while, something his 2-year-old is not a fan of. So he started pondering a way to take all the pictures in one go, letting him generate 3D models on the fly, without the wait.

He originally looked at buying 39 cheap digital cameras, but didn’t want the images spread across separate SD cards, as extracting them all would be rather tedious. With the Raspberry Pis, on the other hand, he can grab them all over the network. So he set off to build a very awesome (and somewhat expensive) life-size 3D scanning booth. Full details are available on his blog at www.pi3dscan.com.
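That network-collection step is easy to picture: a small script on a central machine can pull the latest shot from every Pi over SSH. The sketch below is only an illustration; the hostnames, login, and file paths are assumptions, not details from [Richard]’s actual setup.

```python
# Hypothetical collection script: pull the newest capture from each of the 39
# Pis over SCP. Hostnames, user, and paths are made up for illustration.
import os
import subprocess

PI_HOSTS = ["scanpi%02d.local" % n for n in range(1, 40)]  # 39 cameras

def collect_captures(remote_path="/home/pi/capture.jpg", out_dir="captures"):
    os.makedirs(out_dir, exist_ok=True)
    for host in PI_HOSTS:
        # One JPEG per camera, named after the Pi it came from
        subprocess.run(
            ["scp", "pi@%s:%s" % (host, remote_path),
             os.path.join(out_dir, host + ".jpg")],
            check=True,
        )

if __name__ == "__main__":
    collect_captures()
```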

Stick around after the break to see it in action at Maker Faire Groningen 2013!

30 thoughts on “39 Raspberry Pi 3D Scanner”

          1. > “Using heaps of RPIs is ‘ubercool’ and using hubs is ‘boring’ technical solution.”

            Apparently so is being clueless. The webcam ‘solution’ isn’t capable of doing what this person did with 40 RPIs.

        1. Using webcams is like giving birth to a guinea pig through your ear. No, more like good luck knowing which camera is where. And a decent, comparable webcam is in the same price range as the whole setup with the Pi system.

        2. A Raspberry Pi plus camera is about EUR 50. You can get a decent USB camera for that money, but it won’t be 5 Mpixel. And a Raspberry Pi cannot handle more than one 5 Mpixel camera anyway; the USB on the Raspberry Pi is a bit mediocre performance-wise.

          Now, when you have 39 Raspberry Pis that are more or less identical, each taking a picture when commanded, compressing it, and sending it over to a server (Pi or not), you can achieve what this project aims for… (roughly sketched below)

          Actually, I’d consider taking 2 Mpixel videos, synchronizing them by triggering a flash, say, about one second into each video, and then you have the data to extract “moving 3D”…
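          For the still-image version, each Pi would need to do something roughly like this; the raspistill settings and the server address are placeholders, not details from the actual project:

          ```python
          # Hypothetical per-Pi script: capture a compressed JPEG with raspistill,
          # then push it to a central server. The server path is made up.
          import socket
          import subprocess

          SERVER = "pi@scanserver.local:/srv/scans/"   # assumed upload target
          OUTPUT = "/home/pi/capture.jpg"

          def capture_and_send():
              # -q sets JPEG quality (the "compress it" step), -t is the delay before capture
              subprocess.run(
                  ["raspistill", "-t", "1000", "-q", "80", "-o", OUTPUT],
                  check=True,
              )
              # Name the upload after this Pi so the server knows which camera it came from
              subprocess.run(
                  ["scp", OUTPUT, SERVER + socket.gethostname() + ".jpg"],
                  check=True,
              )

          if __name__ == "__main__":
              capture_and_send()
          ```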

  1. I think if he’d tried to use 39 USB cameras, the small delay between each image capture would end up skewing the final 3D image.
    With 39 systems all waiting for a single command to do a capture (one way to do that is sketched below), the skewing would be cut to a minimum and, with good software, could even be calibrated out.
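    One way to issue that single command is a UDP broadcast that every Pi sits waiting on; very roughly (the port and payload are invented for illustration):

    ```python
    # Hypothetical single-shot trigger: the server broadcasts one UDP packet and
    # every Pi blocked in wait_for_trigger() fires its capture at nearly the same
    # instant.
    import socket

    TRIGGER_PORT = 5005

    def broadcast_trigger():
        """Run once on the server: one packet wakes up all listening Pis."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(b"capture", ("255.255.255.255", TRIGGER_PORT))

    def wait_for_trigger():
        """Run on each Pi: block until the trigger packet arrives."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", TRIGGER_PORT))
        data, _ = s.recvfrom(16)
        return data == b"capture"
    ```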

  2. It would be far cooler to have a GoPro that spun around the person in a quick spiral in high-FPS mode; this just seems… wasteful? But it is interesting nonetheless, and those boards can be used for other things down the road.

    1. In addition to what Elrinn said, I think that would introduce motion blur. I suppose it should technically be possible to compensate with image processing, but I doubt it would achieve the same quality.

      1. That’s for stereoscopic movies, where you present different views to each eye but still only from one camera angle. With this you could theoretically record hologram-style 3D video that would be viewable from any angle.

          1. I was thinking of low-fps 3D mesh data, then some way of matching each mesh with the previous one and interpolating to save space (although that might incur some artifacts for fast-moving stuff).

            And I also thought of my Oculus for some interesting applications: capture a scene and then be able to walk through it while it’s playing. It would make a fun detective game (something like spot the pickpocket, or who poisoned the soup?).

            Another application would obviously be NSFW stuff…

  3. This reminds me of that drone they have now that captures an enormous multi-megapixel image live, letting it capture an entire borough or city from high altitude, and it does so by using a boatload of common camera sensors set in an array.
    It sounds simple, but it still requires some work to make it all go smoothly, even when you get army/spook money to develop it. You need to combine all the sensors, combine the data, and then stream whatever region is selected.
