What’s New In 3D Scanning? All-In-One Scanning Is Nice

3D scanning is important because the ability to digitize awkward or troublesome shapes from the real world can really hit the spot. One can reconstruct objects by drawing them up in CAD, but when there isn’t a right angle or a flat plane in sight, calipers and an eyeball just don’t cut it.

Scanning an object can create a digital copy, aid in reverse engineering, or help ensure a custom fit to something. The catch is making sure that scanning fits one’s needs, and isn’t more work than it’s worth.

I’ve previously written about what to expect from 3D scanning and how to work with it. Some things have changed and others have not, but 3D scanning’s possibilities remain only as good as the quality and ease of the scans themselves. Let’s see what’s new in this area.

All-in-One Handheld Scanning

MIRACO all-in-one 3D scanner by Revopoint uses a quad-camera IR structured light sensor to create 1:1 scale scans.

3D scanner manufacturer Revopoint offered to provide me with a test unit of a relatively new scanner, which I accepted since it offered a good way to see what has changed in this area.

The MIRACO is a self-contained handheld 3D scanner that, unlike most other hobby and prosumer options, has no need to be tethered to a computer. The computer is essentially embedded in the scanner, making a single unit with a touchscreen. Scans can be previewed and processed right on the device.

Being completely untethered is useful in more ways than one. Most tethered scanners require bringing the object to the scanner, but a completely self-contained unit like the MIRACO makes it easier to bring the scanner to the subject. Scanning becomes more convenient and flexible, and because it processes scans on-board, one can review and adjust or re-scan right on the spot. This is more than just convenience. Taking good 3D scans is a skill, and rapid feedback makes practice and experimentation more accessible.


A Soft Thumb-Sized Vision-Based Touch Sensor

A team from the Max Planck Institute for Intelligent Systems in Germany have developed a novel thumb-shaped touch sensor capable of resolving the force of a contact, as well as its direction, over the whole surface of the structure. Intended for dexterous manipulation systems, the system is constructed from easily sourced components, so it should scale up to larger assemblies without breaking the bank. The first step is to place a soft and compliant outer skin over a rigid metallic skeleton, which is then illuminated internally using structured light techniques. From there, machine learning can be used to estimate the shear and normal force components of the contact with the skin, over the entire surface, by observing how the internal envelope distorts the structured illumination.

The novelty here is the way they combine both photometric stereo processing with other structured light techniques, using only a single camera. The camera image is fed straight into a pre-trained machine learning system (details on this part of the system are unfortunately a bit scarce) which directly outputs an estimate of the contact shape and force distribution, with spatial accuracy reported to be better than 1 mm and force resolution down to 30 millinewtons. By directly estimating normal and shear force components, the direction of the contact could be resolved to within 5 degrees. The system is so sensitive that it can reportedly detect its own posture by observing the deformation of the skin due to its own weight alone!
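The write-up is light on specifics of the learning system, but the overall shape of the pipeline is easy to picture: one camera frame of the illuminated skin interior goes in, a dense map of contact forces comes out. Here is a toy PyTorch sketch of that mapping; the architecture, grid size, and channel layout are all illustrative assumptions, not the team’s actual network:

```python
import torch
import torch.nn as nn

class ForceNet(nn.Module):
    """Toy stand-in for an image-to-force regression: one camera frame
    of the internally illuminated skin in, a 3-channel force map
    (shear x, shear y, normal) over a coarse surface grid out."""
    def __init__(self, grid=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),
            nn.Conv2d(64, 3, 1),   # per-cell (fx, fy, fz) estimates
        )

    def forward(self, frame):      # frame: (batch, 3, H, W)
        return self.net(frame)     # -> (batch, 3, grid, grid)

model = ForceNet()
frame = torch.rand(1, 3, 240, 320)   # stand-in for one camera image
force_map = model(frame)             # (1, 3, 16, 16) force estimates
```

In the real sensor, the training pairs would come from a probe pressing on known locations with known forces; the structured illumination is what makes the tiny skin deformations visible to the camera in the first place.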

We’ve not covered all that many optical sensing projects, but here’s one using a linear CIS sensor to turn any TV into a touch screen. And whilst we’re talking about using cameras as sensors, here’s a neat way to use optical fibers to read multiple light-gates with a single camera and OpenCV.


Extremely Precise Positional Tracking


A few folks over at Carnegie Mellon have come up with a very simple way to do high-speed motion tracking (PDF) with little more than a flashlight. It’s called Lumitrack, and while it looks like a Wiimote on the surface, it is in reality much more accurate and precise.

The system works by projecting structured light onto two linear optical sensors. The pattern of the light is an m-sequence – basically a barcode in which every fixed-length window of bits appears exactly once. By shining this light onto a linear sensor, Lumitrack can calculate where the light is coming from, and thus the position of whatever is holding the light.
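That window-uniqueness property is what makes single-shot absolute positioning possible: read any short stretch of the pattern, and you know exactly where you are along it. A minimal Python sketch of the idea, using a Fibonacci LFSR to generate the sequence (the length and feedback taps here are arbitrary illustrative choices, not Lumitrack’s actual pattern):

```python
def msequence(nbits=7, taps=(7, 6)):
    """One period (2**nbits - 1 bits) of a maximal-length sequence
    from a Fibonacci LFSR with a primitive feedback polynomial."""
    state, out = 1, []
    for _ in range((1 << nbits) - 1):
        out.append(state & 1)                        # emit the low bit
        fb = 0
        for t in taps:                               # XOR the tapped bits
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))   # shift in the feedback
    return out

n, seq = 7, msequence()

# Every length-7 window is unique, so one window pins down a position:
index = {tuple(seq[i:i + n]): i for i in range(len(seq) - n + 1)}
assert len(index) == len(seq) - n + 1                # no collisions
print(index[tuple(seq[42:49])])                      # -> 42
```

The linear sensor reads a handful of consecutive pattern bits, and a single table lookup turns them into an absolute position along the projected stripe.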

Even though the entire system consists of only an ARM microcontroller (in the form of a Maple Mini board), two linear optical sensors, and a flashlight with an m-sequence gel, it’s very accurate and very, very fast. The team is able to read the position at over 1000 frames/second, nearly the limit of what can be done with the Maple’s serial connection.
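That “limit” claim is easy to sanity-check with back-of-envelope math, assuming a classic 115,200-baud 8N1 link (the write-up doesn’t state the actual link speed, so treat this purely as an illustration):

```python
# Rough serial-bandwidth budget; the 115,200-baud figure is an assumption.
baud = 115200
bytes_per_sec = baud / 10                # 8N1 framing: 10 bits per byte on the wire
updates_per_sec = 1000
print(bytes_per_sec / updates_per_sec)   # ~11.5 bytes per position update
```

Roughly eleven bytes per update is just enough for a couple of 16-bit coordinates plus framing, so four-digit update rates really do push the link.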

Already there are some interesting applications for this system – game controllers, including swords, flight yokes, and toy cars, and also more artistic endeavors such as a virtual can of spray paint. It’s an interesting piece of tech, and with the right parts, something any of us can build at home.

You can see the Lumitrack demo video below.


3D Printering: Scanning 3D Models

The Makerbot Digitizer was announced this week, giving anyone with $1400 the ability to scan small objects and print out a copy on any 3D printer.

Given the vitriol spewed against Makerbot in the Hackaday comments and other forums on the Internet, it should be very obvious the sets of Hackaday readers and the target demographic Makerbot is developing and marketing towards do not intersect. We’re thinking anyone reading this would rather roll up their sleeves and build a 3D scanner, but where to start? Below are a few options out there for those of you who want a 3D scanner but are none too keen on Makerbot’s offering.


Building A Better Kinect With A… Pager Motor?

Fresh from Microsoft Research is an ingenious way to reduce interference and decrease the error in a Kinect. Bonus: the technique only requires a motor with an offset weight, or just an oversized version of the vibration motor found in a pager.

As one of the first commodity 3D depth sensors, the Kinect’s tracking really isn’t that good. In every Kinect demo we’ve ever seen, there are always errors in the 3D tracking or missing data in the point cloud. The Shake ‘n’ Sense, as Microsoft Research calls it, does away with these problems simply by vibrating the IR projector and camera with a single motor.

In addition to getting high quality point clouds from a Kinect, this technique also allows for multiple Kinects to be used in the same room. In the video (and title pic for this post), you can see a guy walking around a room filled with beach balls in 3D, captured from an array of four Kinects.

This opens up the doors to a whole lot of builds that were impossible with the current iteration of the Kinect, but we’re thinking this is far too easy and too clever not to have been thought of before. We’d love to see some independent verification of this technique, so if you’ve got a Kinect project sitting around, strap a motor onto it, make a video, and send it in.


Building Your Own Portable 3D Camera


[Steven] needed to come up with a project for the Computer Vision course he was taking, so he decided to try building a portable 3D camera. His goal was to build a Kinect-like 3D scanner, though his solution is better suited for very detailed still scenes, while the Kinect performs shallow, less detailed scans of dynamic scenes.

The device uses a TI DLP Pico projector for displaying the structured light patterns, while a cheap VGA camera is tasked with taking snapshots of the scene he is capturing. The data is fed into a BeagleBoard, where OpenCV is used to create point clouds of the objects he is scanning. That data is then handed off to MeshLab, where the point clouds can be combined and tweaked to create the final 3D image.
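The decoding step at the heart of a rig like this is conceptually simple. Here’s a minimal numpy sketch of one common variant, Gray-code patterns, in which each camera pixel recovers the projector column that illuminated it; this is a generic illustration of the technique, not necessarily [Steven]’s exact pattern set:

```python
import numpy as np

def decode_gray(images, inverses):
    """Recover the projector column seen by each camera pixel.

    images, inverses: lists of HxW grayscale arrays, one photo per
    Gray-code bit plane and its inverted twin, most significant bit
    first. Comparing each pattern with its inverse gives a robust
    per-pixel threshold."""
    gray = np.zeros(images[0].shape, dtype=np.uint32)
    for img, inv in zip(images, inverses):
        gray = (gray << 1) | (img > inv)   # pack one bit per pattern pair
    binary = gray.copy()                   # Gray -> binary:
    shift = gray >> 1                      # b = g ^ (g>>1) ^ (g>>2) ^ ...
    while shift.any():
        binary ^= shift
        shift >>= 1
    return binary                          # per-pixel projector column index
```

Once every camera pixel knows its projector column, each correspondence becomes a ray-plane intersection, and those intersections are the point cloud that gets handed downstream.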

As [Steven] points out, the resultant images are pretty impressive considering that his rig is completely portable and only uses an HVGA projector with a VGA camera. He says that someone using higher resolution equipment would certainly be able to generate fantastically detailed 3D images with ease.

Be sure to check out his page for more details on the project, as well as links to the code he uses to put these images together.

Structured Light 3D Scanner

After futzing around with a cheap pico projector, a webcam and a little bit of software, [Jas Strong] built herself a 3d scanner.

Kinect-based scanner projects may get all the attention these days, but we’ve seen structured light 3D scanners before. This method of volumetric scanning projects a series of gradient images onto a subject. A camera captures images of the patterns of light and dark on the model, math happens, and 3D data is spit out of a computer.
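For gradient-style patterns, the “math happens” step is often three-step phase shifting: project the same sinusoidal stripe pattern three times, shifted by 120 degrees each time, and each pixel’s three intensities pin down the phase of the stripe that hit it. A sketch of just that step, using the textbook formula rather than whatever [Jas]’s particular utilities implement:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase from three photos of a sinusoidal
    pattern shifted by 120 degrees between shots; depth follows from
    how the phase is displaced relative to a flat reference surface."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

The recovered phase is only known modulo 2π, which is why practical scanners pair these fine patterns with coarser codes to unwrap it into absolute positions.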

[Jas] found a Microvision SHOWWX laser pico projector on Woot. The laser in the projector plays a large part in the quality of her 3D models: because a laser projector never needs focusing, [Jas] can get very accurate depth information up close. A Logitech webcam modified for a tighter focus handles the video capture responsibilities. The software side of things is a few of these structured light utilities that [Jas] melded into a single Processing sketch.

The results are pretty remarkable for a rig that uses woodworking clamps to hold everything together. [Jas]’ 3D model of her cat’s house looks very good. She’s got a few bugs to work out in her setup, but [Jas] plans on releasing her work into the wild very soon. We’ll update this post whenever that happens. Update: [Jas] has made her code available here. The code requires the ControlP5 and PeasyCam libraries.