[Thomas Sanladerer]’s YouTube Channel Goes In The Toilet

We like [Thomas Sanladerer], so when we say his channel has gone in the toilet, we mean it quite literally. He had a broken toilet and wanted to compare options for effecting a 3D printed repair. The culprit is a wall-mounted flush mechanism with a small broken plastic part. Luckily, he had another identical unit that could provide an unbroken example of the part.

The first approach was to 3D scan the good part. The initial scanner's software turned out to be finicky, and [Thomas] eventually gave up on it. He switched to a handheld scanner, which took about half an hour. The result wasn't perfect, of course, so he also had to do some post-processing.

The next step was to make measurements and draw the part in CAD. That took about the same amount of time as the scan, and it's worth noting that the part had curves and angles; it wasn't just a flat faceplate. The printed results were good, although a measurement error made the CAD-modeled part bind a bit instead of pivoting the way it should. The scan, of course, got it right.

A quick revision of the design solved that problem, but of course it added some time to the process. At the end, he noticed that the scanned "good" part was actually broken too, just in a different way. He added the missing piece to his design, which didn't seem to affect the function. The scanned object required a little trimming, but nothing tremendous.

In the end, the scanning was a bit quicker, partly because it didn’t suffer from the measurement error. However, [Thomas] noted that it was more fun to work in CAD. We thought the results looked better, anyway. [Thomas] thinks the scanners, at least the budget ones, are probably better for just getting reference objects into CAD to guide you when you create the actual objects to print.

It isn’t hard to make a cheap scanner. Some of the open designs are quite sophisticated.

Continue reading “[Thomas Sanladerer]’s YouTube Channel Goes In The Toilet”

Beautifully Rebuilding A VR Headset To Add AR Features

[PyottDesign] recently wrapped up a personal project to create a custom AR/VR headset that could function as an augmented reality platform and make it easier to develop new applications, all in a headset that could do everything he needed. He succeeded wonderfully, and published a video showcase of the finished project.

Getting a headset with the features he wanted wasn't possible off the shelf, so he accomplished his goals with a skillful custom repackaging of a Quest 2 VR headset, integrating a Stereolabs Zed Mini stereo camera (aimed at mixed reality applications) and an Ultraleap IR 170 hand tracking module. These hardware modules have tons of software support and are not very big, but when sticking something onto a human face, every millimeter and gram counts.

Continue reading “Beautifully Rebuilding A VR Headset To Add AR Features”

3D Scanning A Room With A Steam Deck And A Kinect

It may not be obvious, but Valve’s Steam Deck is capable of being more than just a games console. Demonstrating this is [Parker Reed]’s experiment in 3D scanning his kitchen with a Kinect and Steam Deck combo, and viewing the resulting mesh on the Steam Deck.

[Parker] runs the RTAB-Map software package on his Steam Deck, which captures a point cloud and color images while he pans the Kinect around. After that, the Kinect’s job is done and he can convert the data to a mesh textured with the color images. RTAB-Map is typically used in robotic applications, but we’ve seen it power completely self-contained DIY 3D scanners.
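Under the hood, the first step of any RGB-D scan like this is back-projecting each depth pixel into 3D through the pinhole camera model. A minimal numpy sketch of that step (the intrinsics below are illustrative placeholders, not an actual Kinect calibration):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Example: a flat wall 2 m away, seen by a 640x480 depth sensor
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

RTAB-Map does this per frame, then registers the successive clouds against each other as the sensor pans, which is how one sweep of the kitchen becomes a single mesh.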

While logically straightforward, the process does require some finessing and fiddling to get it up and running. Reliability is a bit iffy thanks to the mess of cables and adapters required to get everything hooked up, but it does work. [Parker] shows off the whole touchy process, but you can skip a little past the five minute mark if you just want to see the scanning in action.

The Steam Deck has actual computer chops beneath its games console presentation, and we’ve seen a Steam Deck appear as a USB printer that saves received print jobs as PDFs, and one has even made an appearance in radio signal direction finding.

Continue reading “3D Scanning A Room With A Steam Deck And A Kinect”

Mommy, Where Do Ideas Come From?

We wrote up an astounding old use of technology: François Willème's 3D scanning and modeling apparatus from 1861, over 150 years ago. What's amazing about this technique is that it used the absolutely cutting-edge technology of its time, photography, and captured the essence of a technique still used today in laser-line 3D scanners, or perhaps even more closely related to the "bullet time" effect.

This got me thinking of how Willème could have possibly come up with the idea of taking 24 simultaneous photographs, tracing the outline in wood, and then re-assembling them radially into a 3D model. And all of this in photography’s very infancy.

But Willème was already a sculptor, and had probably seen how photos could stand in for models in the studio, at least to pin down proportions. He was probably also familiar with making cameos, where the subject's profile was illuminated from behind and carved by tracing the shadow. From these two, you can certainly imagine his procedure, but there's still an admirable spark of genius at work.
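Willème's radial assembly of traced profiles is, in modern terms, a visual hull: the shape is approximated by intersecting whatever every silhouette permits. A toy numpy sketch of that idea for a single horizontal slice (the 24 views and the cylindrical profile are illustrative, not Willème's actual data):

```python
import numpy as np

def carve_slice(half_widths, angles, grid_size=101, extent=1.0):
    """Shape-from-silhouette for one horizontal slice: keep a grid point
    only if it lies inside the silhouette seen from every view angle.
    For an orthographic side view at angle theta, the silhouette bounds
    the coordinate perpendicular to the viewing direction."""
    xs = np.linspace(-extent, extent, grid_size)
    x, y = np.meshgrid(xs, xs)
    inside = np.ones_like(x, dtype=bool)
    for hw, theta in zip(half_widths, angles):
        # coordinate of each grid point perpendicular to this view
        perp = x * np.cos(theta) + y * np.sin(theta)
        inside &= np.abs(perp) <= hw
    return inside, xs

# 24 views of a cylinder of radius 0.5: every profile is 0.5 half-wide
angles = np.linspace(0, np.pi, 24, endpoint=False)
hull, xs = carve_slice([0.5] * 24, angles)
```

With 24 silhouettes the carved slice converges on the true circular cross-section, which is exactly why Willème's two dozen traced profiles were enough to produce a convincing bust.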

Could you have had that spark without the existence of photography? Not really. Tracing shadows in the round is impractical unless you can fix them. The existence of photography enabled this idea, and countless others, to come into existence.

That’s what I think is neat about technology, and the sharing of new technological ideas. Oftentimes they are fantastic in and of themselves, like photography indubitably was. But just as often, the new idea is a seed for more new ideas that radiate outward like ripples in a pond.

In A Way, 3D Scanning Is Over A Century Old

In France during the mid-to-late 1800s, one could go into François Willème’s studio, sit for a photo session consisting of 24 cameras arranged in a circle around the subject, and in a matter of days obtain a photosculpture. A photosculpture was essentially a sculpture representing, with a high degree of exactitude, the photographed subject. The kicker was that it was both much faster and far cheaper than traditional sculpting, and the process was remarkably similar in principle to 3D scanning. Not bad for well over a century ago.

This article takes a look at François’ method for using the technology and materials of the time to create 3D reproductions of photographed subjects. The article draws a connection between photosculpture and 3D printing, but we think the commonality with 3D scanning is much clearer.

Continue reading “In A Way, 3D Scanning Is Over A Century Old”

3D Scanning Trouble? This Guide Has You Covered

When it comes to 3D scanning, a perfect surface looks a lot like the image above: thousands of distinct and random features, high contrast, no blurry areas, and no shiny spots. While most objects don’t look quite that good, it’s possible to get usable results anyway, and that’s what [Thomas] aims to help people do with his tips on how to create a perfect, accurate 3D scan with photogrammetry.

3D scanning in general is pretty far from being as simple as "point box, press button", but there are tools available to make things easier. Good lighting is critical, polarizers can help, and products like chalk spray can temporarily add matte features to otherwise troublesome shiny or featureless objects. [Thomas] provides visuals of each of these, so one can get an idea of exactly what each element brings to the table. There's even a handy troubleshooting flowchart to help improve tricky scan situations.

[Thomas] knows his stuff when it comes to 3D scanning, seeing as he's behind the OpenScan project. The last time we featured OpenScan was back in 2020, and things have clearly moved forward since then with a new design, the OpenScan Mini. Interested in an open-source scanning solution? Be sure to give it a look.

NeRF: Shoot Photos, Not Foam Darts, To See Around Corners

Readers are likely familiar with photogrammetry, a method of creating 3D geometry from a series of 2D photos taken of an object or scene. To pull it off you need a lot of pictures, hundreds or even thousands, all taken from slightly different perspectives. Unfortunately the technique suffers where there are significant occlusions caused by overlapping elements, and shiny or reflective surfaces that appear to be different colors in each photo can also cause problems.

But new research from NVIDIA marries photogrammetry with artificial intelligence to create what the developers are calling an Instant Neural Radiance Field (NeRF). Not only does their method require far fewer images (as few as a few dozen, according to NVIDIA), but the AI is better able to cope with the pain points of traditional photogrammetry: filling in gaps in occluded areas and leveraging reflections to create more realistic 3D scenes that reconstruct how shiny materials looked in their original environment.
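At the core of any NeRF sits classic volume rendering: the network predicts a density and color at sample points along each camera ray, and those samples are alpha-composited front to back. A minimal numpy sketch of that compositing step (the sample values are made up for illustration):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray, front to back, using the
    standard NeRF quadrature:
      alpha_i = 1 - exp(-sigma_i * delta_i)
      T_i     = prod_{j<i} (1 - alpha_j)   (transmittance so far)
      C       = sum_i T_i * alpha_i * c_i
    densities: (N,) non-negative volume densities (sigma)
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) spacing between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy ray: two samples of empty space, then a nearly opaque red surface
densities = np.array([0.0, 0.0, 50.0])
colors = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
rgb = composite_ray(densities, colors, deltas=np.full(3, 0.1))
```

Instant-NeRF's contribution is making the density-and-color lookup fast enough (via hash-grid encodings and fused CUDA kernels) that this rendering loop runs in real time during training.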

If you’ve got a CUDA-compatible NVIDIA graphics card in your machine, you can give the technique a shot right now. The tutorial video after the break will walk you through setup and some of the basics, showing how the 3D reconstruction is progressively refined over just a couple of minutes and then can be explored like a scene in a game engine. The Instant-NeRF tools include camera-path keyframing for exporting animations with higher quality results than the real-time previews. The technique seems better suited for outputting views and animations than models for 3D printing, though both are possible.

Don’t have the latest and greatest NVIDIA silicon? Don’t worry, you can still create some impressive 3D scans using “old school” photogrammetry — all you really need is a camera and a motorized turntable.

Continue reading “NeRF: Shoot Photos, Not Foam Darts, To See Around Corners”