Amazing 3D-Scanner Teardown and Rebuild

Pour yourself a nice hot cup of tea, because [iliasam]’s latest work on a laser rangefinder (in Russian, translated here) is a long and interesting read. The shorter version is that he got his hands on a broken laser security scanner, nearly completely reverse-engineered it, got it working again, put it on a Roomba that was able to map out his apartment, and then re-designed it to become a tripod-mounted, full-room 3D scanner. Wow.

The scanner in question has a spinning mirror and a laser time-of-flight ranger, and is designed to shut down machinery when people enter a “no-go” region. As built, it returns ranges along a horizontal plane — it’s a 2D scanner. The conversion to a 3D scanner meant adding another axis, and to do this with sufficient precision required flipping the rig on its side, salvaging the fantastic bearings from a VHS machine, and driving it all with the surprisingly common A4988 stepper driver and an Arduino. A program on a PC reads in the data, and the stepper moves another 0.36 degrees. The results speak for themselves.
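That 0.36° increment falls out neatly if a standard 1.8° (200-step) motor is geared down 5:1; the gear ratio here is our assumption for illustration, not a figure from the write-up. A minimal sketch of the arithmetic:

```python
MOTOR_STEP_DEG = 1.8   # typical 200-step/rev stepper
MICROSTEP = 1          # A4988 supports 1, 2, 4, 8, or 16 microsteps
GEAR_RATIO = 5.0       # assumed reduction to land on 0.36 deg per step

def scan_axis_step_deg(motor_step=MOTOR_STEP_DEG, microstep=MICROSTEP, ratio=GEAR_RATIO):
    """Effective angle the scan axis moves for one commanded step."""
    return motor_step / microstep / ratio

step = scan_axis_step_deg()          # 0.36 degrees
positions_per_rev = round(360.0 / step)  # 1000 stops per full rotation
```

At 0.36° per stop, a full sweep collects 1000 range profiles per revolution, which matches the resolution visible in the results.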

This isn’t [iliasam]’s first laser-rangefinder project, naturally. We’ve previously featured his homemade parallax-based ranger for use on a mobile robot, which is equally impressive. What amazes us most about these builds is the near-professional quality of the results pulled off on a shoestring budget.

Continue reading “Amazing 3D-Scanner Teardown and Rebuild”

Milk-Based 3D Scanner

3D scanners don’t have to be expensive or high-tech because all of the magic goes on in software. The hardware setup just needs to gather a bunch of cross-sections. In perhaps the lowest-tech of scanners that we’ve seen, [yenfre]’s GotMesh scanner uses milk.

Specifically, the apparatus is a pair of boxes, one with a hole drilled in it. You put the object in the top box and fill it with milk to cover the object. A camera takes pictures of the outline of the object in the milk as it drains out the hole; these cross-sections get stitched together, and voilà.

There are limitations to this method. The object gets soaked in milk, so it won’t work for scanning sand-castles. (It’s optimally suited for chocolate-chip cookies, in our opinion.) If the camera is located directly above, the objects have to get wider as the milk drains out. You can do multiple takes with the object rotated at different angles or use multiple cameras to solve this problem. The edge-detection software will have issues with white objects in milk, so maybe you’ll want to scan that porcelain figurine in coffee, but you get the idea. More seriously, the rate of milk drain will slow down a bit as the amount of milk in the upper box decreases. This could also be handled in software.
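That drain-rate correction is well-understood physics: Torricelli's law says the outflow speed scales with the square root of the milk height, so the square root of the height falls linearly with time rather than the height itself. A sketch of mapping a frame's timestamp to the milk level, assuming an ideal hole with no viscosity effects (the dimensions in the docstring are hypothetical):

```python
import math

def milk_height(t, h0, hole_area, box_area, g=9.81):
    """Milk height (m) above the hole at time t (s), via Torricelli's law:
    dh/dt = -(a/A) * sqrt(2*g*h), so sqrt(h) decreases linearly with time.
    h0 is the starting height, hole_area/box_area in m^2 (e.g. a 1 cm^2
    hole in a 30 cm x 30 cm box)."""
    root = math.sqrt(h0) - (hole_area / box_area) * math.sqrt(g / 2.0) * t
    return max(root, 0.0) ** 2
```

Feeding each camera frame's timestamp through this gives the true slice height, so the stack of outlines can be spaced correctly even as the drain slows.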

In all, we’re not surprised that we don’t see commercial versions of this device, but we love the idea. It’s based on this experiment where they dip a guy in a tank of ink! If you just drank all your milk, but still have a line-laser lying around, maybe this build is more your speed. What’s your cheapest 3D scanner solution?

Polarizing 3D Scanner Gives Amazing Results

What if you could take a cheap 3D sensor like a Kinect and increase its effectiveness by three orders of magnitude? The Kinect is great, of course, but it does have a limited resolution. To augment this, MIT researchers are using polarized measurements to deduce 3D forms.

The Fresnel equations describe how a surface’s orientation changes the polarization of the light it reflects, and the researchers use the measured polarization to infer the shape. The polarization sensor is nothing more than a DSLR camera with a polarizing filter, and scanning resolution is down to 300 microns.

The catch with the Fresnel equations is an inherent ambiguity: a single polarization measurement doesn’t uniquely identify the surface orientation. The novel work here is using coarse depth data from sensors like the Kinect to select among the alternatives.
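The disambiguation step can be sketched simply, assuming the polarization angle has already been extracted per pixel (a simplification of the actual method): the polarization angle constrains the surface azimuth only up to a 180° flip, and the coarse depth-map normal breaks the tie.

```python
def pick_azimuth(phi_measured_deg, coarse_azimuth_deg):
    """The polarization angle gives the surface azimuth only up to 180 deg.
    Choose whichever of the two candidates lies closer to the azimuth
    of the coarse normal estimated from the depth sensor."""
    candidates = (phi_measured_deg % 360.0, (phi_measured_deg + 180.0) % 360.0)

    def ang_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    return min(candidates, key=lambda c: ang_dist(c, coarse_azimuth_deg))
```

The Kinect's normals are far too coarse to give 300-micron detail on their own, but they are plenty good enough to pick the right one of two options.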

Continue reading “Polarizing 3D Scanner Gives Amazing Results”

Making A Bobblehead Of You

Bobbleheads, you remember them: small figures with a spring-mounted, comically large head. They brought joy to millions of car drivers every day, as at least 97.5% of all registered cars in the 1960s had a bobblehead mounted to the dash. Years later, bobblehead popularity has waned, but [Luis] is trying to bring them back, this time not as your iconic sports hero but as YOU!

[Luis] uses software called Skanect along with his Kinect to scan a person’s geometry. There is a free version of Skanect, but it is limited to exporting STL files no larger than 5,000 faces. That means that 3D-printed bobblehead scans of large objects (including people) come out looking noticeably faceted. [Luis] came up with a work-around that results in a much finer-detailed scan: instead of scanning an entire person in one pass, he does four separate scans. Since each individual scan can contain 5,000 faces, the resulting merged model can have up to 20,000. Check out the comparison; the difference between the two scanning methods is quite noticeable. MeshMixer is the software used to merge the STL files of the four separate scans.
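That 5,000-face export cap is easy to verify on the files themselves: a binary STL stores its triangle count as a little-endian uint32 immediately after the 80-byte header. A minimal reader (ours, not part of [Luis]'s workflow):

```python
import struct

def stl_face_count(path):
    """Return the triangle count of a binary STL file
    (80-byte header, then a little-endian uint32 count)."""
    with open(path, "rb") as f:
        f.seek(80)
        (count,) = struct.unpack("<I", f.read(4))
    return count
```

Summing the counts over the four partial scans confirms how the merged MeshMixer model gets past the single-export limit.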

Once the full body is assembled in MeshMixer, it is time to separate the head from the body. A cylindrical hole is then made in the bottom of the head and the top of the body. This hole is just slightly larger than the spring used to support the head. The parts are then printed, painted and assembled. We have to say that the end result looks pretty darn good.

Portabilizing The Kinect

Way back when the Kinect was first released, there was a realization that this device would be the future of everything 3D. It was augmented reality, it was a new computer interface, it was a cool sensor for robotics applications, and it was a 3D scanner. When the first open source driver for the Kinect was released, we were assured that this is how we would get 3D data from real objects into a computer.

Since then, not much has happened. We’re not using the Kinect for a UI, potato gamers were horrified they would be forced to buy the Kinect 2 with the new Xbox, and you’d be hard-pressed to find a Kinect in a robot. 3D scanning is the only field where the Kinect hasn’t been overhyped, and even there it’s still a relatively complex setup.

This doesn’t mean a Kinect 3D scanner isn’t an object of desire for some people, or that it’s impossible to build a portabilized version. [Mario]’s girlfriend works as an archaeologist, and having a tool to scan objects and places in 3D would be great for her. Because of this, [Mario] is building a handheld 3D scanner with a Raspberry Pi 2 and a Kinect.

This isn’t the first time we’ve seen a portabilized Kinect. Way back in 2012, the Kinect was made handheld with the help of a Gumstix board. Since then, a million tiny ARM single-board computers have popped up, and battery packs are readily available. It was only a matter of time until someone stepped up to the plate, and [Mario] was the guy.

The problem facing [Mario] isn’t hardware. Anyone can pick up a Kinect at Gamestop, the Raspberry Pi 2 should be more than capable of reading the depth sensor on the Kinect, and these parts can be tied together with 3D-printed parts. The real problem is the software, and so far [Mario] has libfreenect compiling without a problem on the Pi 2. The project still requires a number of additional libraries, including some OpenCV components, but [Mario] has everything working so far.
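For turning the Kinect's raw readings into distances, a commonly cited community calibration (not [Mario]'s code; a tangent fit attributed to Stéphane Magnenat on the OpenKinect wiki) converts the sensor's 11-bit disparity values to meters:

```python
import math

def raw_to_meters(raw11):
    """Convert a Kinect v1 11-bit raw disparity value to depth in meters,
    using a community calibration (Magnenat tangent fit). A raw value of
    2047 is the sensor's 'no reading' sentinel."""
    if raw11 >= 2047:
        return float("nan")
    return 0.1236 * math.tan(raw11 / 2842.5 + 1.1863)
```

Individual units vary, so per-device calibration gives better absolute accuracy, but this fit is close enough for stitching scans together.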

You can check out his video of the proof of concept below.

Continue reading “Portabilizing The Kinect”

3D Scanning Rig And DIY Turntable

It seems almost every day 3D scanning is becoming more accessible to the general DIYer. The hardware required is minimal, and there are several scanning software packages and workflows to choose from. However, if you have ever slowly walked around a subject while holding a Kinect and trying to get a good scan, you know this is not an easy task. A quick internet search will turn up several DIY scanning setups that have been cobbled together and lack substantial documentation… until now! [aldricnegrier] is fighting back and has designed and documented a rotary table that spins at a constant speed while a subject is 3D scanned, making person-scanning just that much easier.

The project starts off with a plywood base with a Lazy Susan bearing assembly attached to the top. The Lazy Susan supports the rotating platform for the subject person to stand on, but it’s not just a platform, it’s also a huge gear! The platform teeth mesh with a much smaller 3D printed gear mounted on the shaft of a DC motor and reduction gearbox assembly.

Another goal of the project was to make the rotary table autonomous. There is an ultrasonic sensor mounted to the base aimed above the rotating platform. The ultrasonic sensor is connected to an Arduino and if the system senses someone or something on the platform for 3 seconds, the Arduino will command a DC motor driver to start spinning the platform.
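The "occupied for 3 seconds" rule is just a small debounce state machine. Here is a sketch of the logic the Arduino would run (our illustration, not [aldricnegrier]'s actual code), firing once when something has been continuously detected for the hold time:

```python
def make_presence_trigger(hold_time=3.0):
    """Return update(present, now) -> True exactly once when the
    ultrasonic sensor has reported something continuously for
    hold_time seconds (the article's 3-second rule)."""
    state = {"since": None, "fired": False}

    def update(present, now):
        if not present:                    # platform empty: reset
            state["since"] = None
            state["fired"] = False
            return False
        if state["since"] is None:         # first detection: start timer
            state["since"] = now
        if not state["fired"] and now - state["since"] >= hold_time:
            state["fired"] = True          # held long enough: fire once
            return True
        return False

    return update
```

Requiring a continuous hold keeps a person briefly stepping past the rig, or a noisy ultrasonic echo, from spinning up the motor.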

As cool as this project is so far, [aldricnegrier] wanted to make it even cooler: he added speech recognition. Using Microsoft’s Speech Toolkit, saying the words “Start Skanect” will start the scanning process on the PC. Now, a single person can scan themselves easily and reliably.

[aldricnegrier] has made all of his CAD files, STL files and Arduino code available so anyone wanting to build this clearly capable setup can do so!

Take a Spin on this Voice-Controlled 3D Scanning Rig

[Aldric Negrier] wanted to make 3D-scanning a person streamlined and simple. To that end, he created this voice-controlled 3D-scanning rig.

[Aldric] used a variety of hacking skills to make this project, and his thorough Instructable illustrates this nicely. Everything from CNC milling to Arduino programming to 3D printing was incorporated into the making of this rig. Plywood was used to construct the base and the large toothed gear. A 12″ Lazy Susan bearing was attached to this gear to allow smooth rotation. In order to automate the rig, a 12V DC geared motor was attached to a smaller 3D-printed gear and positioned on the base. When the motor is on, the smaller gear’s teeth take the larger gear for a spin. He used a custom dual H-bridge motor driver made by a friend, which is connected to an Arduino Nano. The Nano is also connected to a Bluetooth module and an ultrasonic range finder. When an object between 1 and 35 cm away is detected on the rig for 3 seconds, the motor starts to spin, stopping when the object is no longer detected. A typical scan takes about 60 seconds.
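The platform speed follows directly from the tooth ratio between the small 3D-printed pinion and the big plywood gear. The tooth counts below are illustrative, not from the build, chosen so the platform completes one turn over the roughly 60-second scan:

```python
def platform_rpm(motor_rpm, motor_teeth, platform_teeth):
    """Rotation speed of the large gear when driven by the pinion.
    Meshed gears share tooth speed, so the ratio is teeth_in/teeth_out."""
    return motor_rpm * motor_teeth / platform_teeth

# Hypothetical numbers: a 15 RPM geared motor with a 12-tooth pinion
# driving a 180-tooth platform gear gives 1 RPM, i.e. one 60 s revolution.
rpm = platform_rpm(motor_rpm=15, motor_teeth=12, platform_teeth=180)
```

The big reduction also multiplies torque, which is what lets a small geared DC motor smoothly rotate a platform with a person standing on it.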

This alone would have been a great project, but [Aldric] did not stop there. He wanted to be able to step on the rig and issue commands while being scanned. It makes sense if you want to scan yourself – get on the rig, assume the desired position, and then initiate the scan. He used the Windows speech recognition SDK to develop an application that issues commands via Bluetooth to Skanect, a 3D-scanning software. The commands are as simple as saying “Start Skanect.” You can also tell the motor to switch on or off and change its speed or direction without breaking form. [Aldric] used an Asus Xtion for a 3D-scanner, but a Kinect will also work. Afterwards, he smoothed his scans using MeshMixer, a program featured in previous hacks.

Check out the videos of the rig after the break. Voice commands are difficult to hear due to the background music in one of the videos, but if you listen carefully, you can hear them. You can also see more of [Aldric’s] projects here or on this YouTube channel.

Continue reading “Take a Spin on this Voice-Controlled 3D Scanning Rig”