Beautifully Rebuilding A VR Headset To Add AR Features

[PyottDesign] recently wrapped up a personal project to build himself a custom AR/VR headset that could serve as an AR (augmented reality) development platform, making it easier to develop new applications on hardware that does everything he needs. He succeeded wonderfully, and published a video showcase of the finished project.

No off-the-shelf headset offered the features he wanted, so he accomplished his goals with a skillful custom repackaging of a Quest 2 VR headset, integrating a Stereolabs Zed Mini stereo camera (aimed at mixed reality applications) and an Ultraleap IR 170 hand tracking module. These hardware modules have tons of software support and are not very big, but when sticking something onto a human face, every millimeter and gram counts.

Continue reading “Beautifully Rebuilding A VR Headset To Add AR Features”

3D Scanning A Room With A Steam Deck And A Kinect

It may not be obvious, but Valve’s Steam Deck is capable of being more than just a games console. Demonstrating this is [Parker Reed]’s experiment in 3D scanning his kitchen with a Kinect and Steam Deck combo, and viewing the resulting mesh on the Steam Deck.

The two pieces of hardware end up needing a lot of adapters and cables.

[Parker] runs the RTAB-Map software package on his Steam Deck, which captures a point cloud and color images while he pans the Kinect around. After that, the Kinect’s job is done and he can convert the data to a mesh textured with the color images. RTAB-Map is typically used in robotic applications, but we’ve seen it power completely self-contained DIY 3D scanners.
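
For the curious, the “point cloud in, mesh out” step is easy to experiment with on its own. Below is a minimal Python sketch using the Open3D library (our illustration, not part of [Parker]’s actual workflow; the file names are placeholders) that loads an exported point cloud, estimates normals, and runs Poisson surface reconstruction:

```python
# Illustrative only: turn an exported point cloud into a triangle mesh.
# RTAB-Map has its own export tools; "kitchen_scan.ply" is a placeholder.
import numpy as np
import open3d as o3d

# Load the point cloud produced by the scanning step
pcd = o3d.io.read_point_cloud("kitchen_scan.ply")

# Poisson reconstruction needs oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)

# Build a mesh from the points; higher depth means more detail and more memory
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Trim low-density vertices, which tend to be artifacts far from real data
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))

o3d.io.write_triangle_mesh("kitchen_mesh.ply", mesh)
```

Texturing the result with the captured color images is a separate step from the geometry reconstruction shown here.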

While logically straightforward, the process does require some finessing and fiddling to get it up and running. Reliability is a bit iffy thanks to the mess of cables and adapters required to get everything hooked up, but it does work. [Parker] shows off the whole touchy process, but you can skip a little past the five minute mark if you just want to see the scanning in action.

The Steam Deck has real computer chops beneath its games-console presentation: we’ve seen one appear as a USB printer that saves received print jobs as PDFs, and another has even made an appearance in radio signal direction finding.

Continue reading “3D Scanning A Room With A Steam Deck And A Kinect”

Mommy, Where Do Ideas Come From?

We wrote up an astounding old use of technology: François Willème’s 3D scanning and modeling apparatus from 1861, over 150 years ago. What’s amazing about this technique is that it used the absolute cutting-edge technology of its time, photography, and embodied the essence of a technique still used today in laser-line 3D scanners, or maybe even more closely, the “bullet time” effect.

This got me thinking about how Willème could possibly have come up with the idea of taking 24 simultaneous photographs, tracing the outlines in wood, and then reassembling them radially into a 3D model. And all of this in photography’s very infancy.

But Willème was already a sculptor, and had probably seen how he could use photos to replace still models in the studio, at least to solidify proportions. He was probably also familiar with making cameos, where the profile was illuminated from behind and carved, often by tracing the shadow. From these two, you could certainly imagine his procedure, but there’s still an admirable spark of genius at work.

Could you have had that spark without the existence of photography? Not really. Tracing shadows in the round is impractical unless you can fix them. The existence of photography enabled this idea, and countless others, to come into existence.

That’s what I think is neat about technology, and the sharing of new technological ideas. Oftentimes they are fantastic in and of themselves, like photography indubitably was. But just as often, the new idea is a seed for more new ideas that radiate outward like ripples in a pond.

In A Way, 3D Scanning Is Over A Century Old

In France during the mid-to-late 1800s, one could go into François Willème’s studio, sit for a photo session consisting of 24 cameras arranged in a circle around the subject, and in a matter of days obtain a photosculpture. A photosculpture was essentially a sculpture representing, with a high degree of exactitude, the photographed subject. The kicker was that it was both much faster and far cheaper than traditional sculpting, and the process was remarkably similar in principle to 3D scanning. Not bad for well over a century ago.

This article takes a look at François’ method for using the technology and materials of the time to create 3D reproductions of photographed subjects. The article draws a connection between photosculpture and 3D printing, but we think the commonality with 3D scanning is much clearer.

Continue reading “In A Way, 3D Scanning Is Over A Century Old”

3D Scanning Trouble? This Guide Has You Covered

When it comes to 3D scanning, a perfect surface looks a lot like the image above: thousands of distinct and random features, high contrast, no blurry areas, and no shiny spots. While most objects don’t look quite that good, it’s possible to get usable results anyway, and that’s what [Thomas] aims to help people do with his tips on how to create a perfect, accurate 3D scan with photogrammetry.

3D scanning in general is pretty far from being as simple as “point box, press button”, but there are tools available to make things easier. Good lighting is critical, polarizers can help, and products like chalk spray can temporarily add matte features to otherwise troublesome shiny or featureless objects. [Thomas] provides visuals for each of these, so one can see exactly what each element brings to the table. There’s even a handy flowchart to help troubleshoot and improve tricky scan situations.

[Thomas] knows his stuff when it comes to 3D scanning, seeing as he’s behind the OpenScan project. The last time we featured OpenScan was back in 2020, and things have clearly moved forward since then with a new design, the OpenScan Mini. Interested in an open-source scanning solution? Be sure to give it a look.

NeRF: Shoot Photos, Not Foam Darts, To See Around Corners

Readers are likely familiar with photogrammetry, a method of creating 3D geometry from a series of 2D photos taken of an object or scene. To pull it off you need a lot of pictures, hundreds or even thousands, all taken from slightly different perspectives. Unfortunately, the technique struggles where overlapping elements create significant occlusions, and shiny or reflective surfaces that appear to be a different color in each photo can also cause problems.

But new research from NVIDIA marries photogrammetry with artificial intelligence to create what the developers are calling an Instant Neural Radiance Field (NeRF). Not only does their method require far fewer images (just a few dozen, according to NVIDIA), but the AI is able to better cope with the pain points of traditional photogrammetry: it fills in the gaps in occluded areas and leverages reflections to create more realistic 3D scenes that reconstruct how shiny materials looked in their original environment.
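
To get a feel for what a “radiance field” actually is, here’s a heavily simplified Python sketch of the classic NeRF rendering step: sample points along a camera ray, ask a model for density and color at each point, then alpha-composite the samples into a pixel. The `radiance_field` function below is a dummy stand-in for the trained network, and none of this is NVIDIA’s actual implementation; it’s just the core idea in a few lines.

```python
# Toy NeRF-style volume rendering along a single camera ray.
# "radiance_field" is a fake placeholder for the trained neural network;
# Instant-NGP's real code adds hash-grid encodings, CUDA kernels, etc.
import numpy as np

def radiance_field(points):
    """Placeholder: return (density, rgb) for each 3D sample point."""
    density = np.exp(-np.linalg.norm(points, axis=-1))   # made-up density
    rgb = 0.5 + 0.5 * np.tanh(points)                    # made-up color
    return density, rgb

def render_ray(origin, direction, near=0.1, far=4.0, n_samples=64):
    # Evaluate the field at evenly spaced depths along the ray
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, rgb = radiance_field(points)

    # Alpha compositing: opacity of each segment, attenuated by what sits in front
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * transmittance
    return (weights[:, None] * rgb).sum(axis=0)           # final pixel color

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

Training a NeRF amounts to adjusting the network so that pixels rendered this way match the input photographs; NVIDIA’s contribution is making that optimization fast enough to watch converge in real time.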


If you’ve got a CUDA-compatible NVIDIA graphics card in your machine, you can give the technique a shot right now. The tutorial video after the break will walk you through setup and some of the basics, showing how the 3D reconstruction is progressively refined over just a couple of minutes and then can be explored like a scene in a game engine. The Instant-NeRF tools include camera-path keyframing for exporting animations with higher quality results than the real-time previews. The technique seems better suited for outputting views and animations than models for 3D printing, though both are possible.

Don’t have the latest and greatest NVIDIA silicon? Don’t worry, you can still create some impressive 3D scans using “old school” photogrammetry — all you really need is a camera and a motorized turntable.

Continue reading “NeRF: Shoot Photos, Not Foam Darts, To See Around Corners”

3D Objects Without Scanning

There are many scanners, both commercial and homemade, that can take a variety of scans or images of a 3D object and convert it into something like a 3D-printable file. When the process works, it works well, but the results can be finicky at best and often require a lot of manual tuning. According to [Samuel Garbett], you might as well just draw your own model using Blender. He shows you how with a Red Bull can, which, granted, isn’t exactly the most complicated thing ever, but it isn’t the simplest either.

He does take one photo of the can, so there is a camera involved at some point. He also takes measurements using calipers, something you probably already have lying around.

Since it is just a can, it doesn’t require as many pictures or measurements as, say, a starship model would. Once you have the measurements, of course, you could use the tool of your choice; since we aren’t very adept with Blender, we might have used something we think is easier, like FreeCAD or OpenSCAD. However, Blender has a lot of power, so we suspect making the jump from a can to the USS Enterprise might be more realistic for a Blender user.
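
If you’d rather drive Blender from code, the basic shape can even be blocked out from Blender’s built-in Python console. The snippet below is just our sketch with assumed dimensions (roughly those of a 250 ml slim can), not [Samuel]’s actual workflow, and it has to run inside Blender since it uses the bpy module:

```python
# Rough block-out of a slim drink can using Blender's Python API (bpy).
# Dimensions are assumptions, not from [Samuel Garbett]'s write-up;
# swap in your own caliper measurements.
import bpy

RADIUS = 0.0265   # meters, about half of a ~53 mm diameter
HEIGHT = 0.134    # meters, about 134 mm tall

# Cylinder for the body of the can, sitting on the Z = 0 plane
bpy.ops.mesh.primitive_cylinder_add(
    vertices=64,
    radius=RADIUS,
    depth=HEIGHT,
    location=(0.0, 0.0, HEIGHT / 2),
)
can = bpy.context.active_object
can.name = "Can"

# A small bevel keeps the rim edges from looking razor sharp
bevel = can.modifiers.new(name="EdgeBevel", type='BEVEL')
bevel.width = 0.002
bevel.segments = 3

# Smooth shading for the curved wall
bpy.ops.object.shade_smooth()
```

From there it’s ordinary Blender modeling: the tapered top, the rim, and whatever detail your measurements and reference photo justify.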

Besides, it is good to see how other tools work and we were surprised that Blender could be relatively simple to use. Every time we see [Jared’s] channel, we think we should learn more about Blender. But if you have your heart set on a real scanner, there are plenty of open source designs you can print.