Get Great 3D Scans with Open Photogrammetry

Not long ago, photogrammetry — the process of stitching multiple photographs taken from different angles into a 3D whole — was hard stuff. Nowadays, it’s easy. [Mikolas Zuza] over at Prusa Printers has a guide showing off cutting-edge open-source software that’s not only more powerful, but also easier to use. They’ve also produced a video, which we’ve embedded below.

Basically, this is a guide to using Meshroom, which is based on the AliceVision photogrammetry framework. AliceVision is a research platform, so it’s got tremendous capability but doesn’t necessarily focus on the user experience. Enter Meshroom, which makes that power accessible.

Meshroom does all sorts of cool tricks, like showing you how the 3D reconstruction looks as you add more images to the dataset, so that you’ll know where to take the next photo to fill in incomplete patches. It can also reconstruct from video, say if you just walked around the object with a camera running.

The final render is computationally intensive, but AliceVision makes good use of CUDA on Nvidia graphics cards, so you can cut your overnight renders down to a few hours if you’ve got the right hardware. But even if you have to wait for the results, they’re truly impressive. And best of all, you can get started building up your 3D model library using nothing more than that phone in your pocket.

If you want to know how to use the models that come out of photogrammetry, check out [Eric Strebel]’s video. And if all of this high-tech software foolery is too much for you, try a milk-based 3D scanner.

Continue reading “Get Great 3D Scans with Open Photogrammetry”

Three Dimensions: What Does That Really Mean?

The holy grail of display technology is to replicate what you see in the real world. This means video playback in 3D — but when it comes to displays, what is 3D anyway?

You don’t need me to tell you how far away we are from succeeding in replicating real life in a video display. Despite all the hype, there are only a couple of different approaches to faking those three dimensions. Let’s take a look at what they are, and why they can call it 3D, but they’re not fooling us into believing we’re seeing real life… yet.

Continue reading “Three Dimensions: What Does That Really Mean?”

Use Nodes to Code Loads of G-code for 3D CNC Carving

Most CNC workflows start with a 3D model, which is then passed to CAM software to be converted into the G-code language that CNC machines love and understand. G-code, however, is simple enough that rudimentary coding skills are all you need to start writing your very own programmatic CNC tool paths. Any language that can output plain text is fully capable of enabling you to directly control powerful motors and rapidly spinning blades.
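To make that claim concrete, here’s a minimal sketch of programmatic G-code (ours, not [siemenc]’s): a short C++ program that prints the toolpath for a 40 mm square outline. The units, feed rate, depth, and dimensions are made-up values for illustration only.

```cpp
// Toy example of programmatic G-code: any language that can print plain text will do.
// The numbers below (depth, feed rate, square size) are arbitrary illustration values.
#include <cstdio>

int main() {
    const double depth = -3.0;   // cutting depth in mm (assumed)
    const double feed  = 600.0;  // feed rate in mm/min (assumed)

    std::printf("G21\n");                          // units in millimetres
    std::printf("G90\n");                          // absolute coordinates
    std::printf("G0 Z5\n");                        // lift the tool clear of the work
    std::printf("G0 X0 Y0\n");                     // rapid to the starting corner
    std::printf("G1 Z%.2f F%.0f\n", depth, feed);  // plunge to cutting depth

    // trace a 40 mm square, one feed move per side
    std::printf("G1 X40 Y0\n");
    std::printf("G1 X40 Y40\n");
    std::printf("G1 X0 Y40\n");
    std::printf("G1 X0 Y0\n");

    std::printf("G0 Z5\n");                        // retract
    std::printf("M2\n");                           // end of program
    return 0;
}
```

Loop that over a list of coordinates, or vary the depth as you go, and you have the beginnings of exactly the kind of pattern generation the article describes.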

[siemenc] shows us how to use Grasshopper – a visual node-based programming system for Rhino 3D – to output G-code that makes some interesting patterns and shapes in wood when fed to a ShopBot. Though the Rhino software is a bit expensive and thus is not too widely available, [siemenc] walks through some background, theory, and procedures that could be useful and inspirational no matter what software or programming language you’re using to create your bespoke G-code.

For links to code and related blog posts, plus more lovely pictures of intricately carved plywood, check out [siemenc]’s personal site as well.

[via Bantam Tools]

Robot Maps Rooms with Help From iPhone

The Unity engine has been around since Apple started using Intel chips, and has made quite a splash in the gaming world. Unity allows developers to create 2D and 3D games, but there are some other interesting applications of this gaming engine as well. For example, [matthewhallberg] used it to build a robot that can map rooms in 3D.

The impetus for this project was a robotics company that used a series of robots around their business. The robots navigate using computer vision, but couldn’t map the rooms from scratch. They hired [matthewhallberg] to tackle this problem, and this robot is a preliminary result. Using the Unity engine and an iPhone, the robot can perform in one of three modes. The first is a user-controlled mode, the second is object following, and the third is 3D mapping.

The robot seems fairly easy to construct and only carries an iPhone, a NodeMCU, some motors, and a battery. Most of the computational work is done remotely, with the robot simply receiving its movement commands from another computer. There’s a lot going on here, software-wise, with a lot of toolkits and software packages to install and get talking to one another, but the video below does a good job of showing what you’ll need and how it all fits together. If that’s all too much, there are other robots with a form of computer vision that can get you started in the world of vision and mapping.
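The article doesn’t spell out the wire protocol, but the general idea of the split is simple: the robot does nothing except listen for movement commands on the network and toggle its motor pins. Here’s a rough sketch in ESP8266 Arduino code; the Wi-Fi credentials, port, pin assignments, and single-letter command set are all assumptions for illustration, not [matthewhallberg]’s actual implementation.

```cpp
// Minimal NodeMCU sketch: listen for single-character drive commands over UDP.
// Illustrates the "dumb robot, smart remote computer" split only; the pins,
// port number, and command letters are assumptions, not the project's code.
#include <ESP8266WiFi.h>
#include <WiFiUdp.h>

const char* SSID = "my-network";   // placeholder credentials
const char* PASS = "my-password";
const int LEFT_MOTOR  = D1;        // motor driver inputs (assumed wiring)
const int RIGHT_MOTOR = D2;

WiFiUDP udp;

void setup() {
  pinMode(LEFT_MOTOR, OUTPUT);
  pinMode(RIGHT_MOTOR, OUTPUT);
  WiFi.begin(SSID, PASS);
  while (WiFi.status() != WL_CONNECTED) delay(100);
  udp.begin(4210);                 // listen on an arbitrary UDP port
}

void loop() {
  if (udp.parsePacket()) {
    char cmd = udp.read();         // one byte per command
    switch (cmd) {
      case 'f':                    // forward: both motors on
        digitalWrite(LEFT_MOTOR, HIGH);
        digitalWrite(RIGHT_MOTOR, HIGH);
        break;
      case 'l':                    // turn left: right motor only
        digitalWrite(LEFT_MOTOR, LOW);
        digitalWrite(RIGHT_MOTOR, HIGH);
        break;
      case 'r':                    // turn right: left motor only
        digitalWrite(LEFT_MOTOR, HIGH);
        digitalWrite(RIGHT_MOTOR, LOW);
        break;
      default:                     // anything else: stop
        digitalWrite(LEFT_MOTOR, LOW);
        digitalWrite(RIGHT_MOTOR, LOW);
    }
  }
}
```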

Continue reading “Robot Maps Rooms with Help From iPhone”

3D Printed Tourniquets are Not a Cinch

Saying that something is a cinch is a way of saying that it is easy. Modeling a thin handle with a hole through the middle seems like a simple task, accomplishable in a single afternoon, including the time to print a copy or two. We are here to tell you that is only the first task when making tourniquets for gunshot victims. Content warning: there are real pictures of severe trauma. Below is a video of a training session with the tourniquets at the Hayat Center in Gaza, using a simulated wound on a mannequin.

On the first pass, many things are done correctly: the handle is the correct length and diameter, the strap hole fits the strap, and the part is well oriented on the platen. As with many first iterations, it looks good on a screen, but in the real world we all live under Murphy’s law. In practice, some of the strap holes had sharp edges that cut into the strap, and one of the printed buckles broke unexpectedly.

On the whole, the low cost and availability of the open-source tourniquets outweigh the danger of operating without them. Open-source medical devices are not just for use in the field, they can help with training too. This tourniquet is saving people and proving that modeling skills can be a big help in the real world.

Continue reading “3D Printed Tourniquets are Not a Cinch”

Watch Video on an Oscilloscope with an ESP32

[bitluni] got a brand new scope, and he couldn’t be happier. No, really — check the video below; he’s really happy. And to celebrate, he turned his scope into a vector display using an ESP32.

Using a scope in X-Y mode is nothing new, of course. The technique is used to display everything from Lissajous patterns from an SDR to bouncing balls from an analog computer. Taken on more as an exercise in learning his new tool than as a practical project, [bitluni]’s effort starts by using the two DACs on an ESP32 to create simple Lissajous patterns and get a feel for the scope’s controls. Next he built some code to display 3D point clouds, but learned that the native DAC code wasn’t up to the job. A little hacking improved the speed 27-fold, which was enough for great 3D images and live video from an I²S camera module. The latter was accomplished by grabbing frames from the camera and rendering them pixel by pixel, CRT style. The results are pretty clean, and there’s a lot to be learned about both using scopes as X-Y displays and tweaking the ESP32 for maximum performance.
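To give a sense of how little code that first step takes, here’s a bare-bones Lissajous sketch using the ESP32 Arduino core’s dacWrite() on the two built-in DAC pins. This is the slow, naive approach the video starts from, not the later I²S-accelerated version, and the frequency ratio and timing are our own made-up values.

```cpp
// Bare-bones X-Y Lissajous figure on the ESP32's two built-in DACs.
// Connect GPIO25 to the scope's X input and GPIO26 to Y, then set the scope to X-Y mode.
// Frequencies and step size are arbitrary; this is the slow dacWrite() approach,
// not the optimized rendering from the project.
#include <Arduino.h>
#include <math.h>

const int X_PIN = 25;  // DAC channel 1
const int Y_PIN = 26;  // DAC channel 2

void setup() {}

void loop() {
  static float t = 0.0f;
  // a 3:2 frequency ratio traces the classic looping Lissajous figure
  uint8_t x = 128 + 127 * sinf(3.0f * t);
  uint8_t y = 128 + 127 * sinf(2.0f * t);
  dacWrite(X_PIN, x);  // drive the scope's X input
  dacWrite(Y_PIN, y);  // drive the scope's Y input
  t += 0.01f;
}
```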

Need more background on the ESP32? Start by checking out these ESP32 tutorials.

Continue reading “Watch Video on an Oscilloscope with an ESP32”

Add Intuitiveness to OpenSCAD With Encoders

The first time I saw 3D modeling and 3D printing used practically was at a hack day event. We printed simple plastic struts to hold a couple of spring-loaded wires apart. Nothing revolutionary as far as parts go, but it was the moment I realized the value of a printer.

Since then, I have used OpenSCAD because that is what I saw the first time, but the intuitiveness of other programs led me to develop the OpenVectorKB, which allows the ubiquitous vectors in OpenSCAD to be changed at will while keeping, and even leveraging, the parametric qualities of the program.

All three values in a vector, X, Y, and Z, are modified by twisting encoder knobs. The device acts as a keyboard to

  1. select the relevant value
  2. replace it with an updated value
  3. refresh the display
  4. move the cursor back to the starting point

There is no software to install, and it runs off a Teensy-LC, so it can be reprogrammed for any other application where rotary encoders might be useful. Additional modes include a mouse, arrow keys, Audacity editing controls, and VLC time searching.
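As a rough illustration of that four-step keystroke sequence, here is what a stripped-down version might look like on a Teensy-LC using the Encoder library and Teensyduino’s USB keyboard (set USB Type to “Keyboard” in the Arduino Tools menu). This is not the actual OpenVectorKB firmware; the pin numbers, the fixed four-character selection width, and the key codes are assumptions.

```cpp
// Stripped-down sketch of the OpenVectorKB idea: one encoder edits one value
// by sending keystrokes to the editor. Not the real firmware; pins, selection
// width, and key handling are assumptions for illustration.
#include <Encoder.h>

Encoder knob(0, 1);      // one encoder on pins 0 and 1 (assumed wiring)
long value = 0;          // current value of the selected vector component
long lastDetent = 0;

void setup() {}

void loop() {
  long detent = knob.read() / 4;          // four counts per detent is typical
  if (detent != lastDetent) {
    value += detent - lastDetent;
    lastDetent = detent;

    // 1. select the relevant value (crudely: shift-select a fixed width)
    for (int i = 0; i < 4; i++) {
      Keyboard.press(MODIFIERKEY_SHIFT);
      Keyboard.press(KEY_RIGHT);
      Keyboard.releaseAll();
    }
    // 2. replace it with the updated value
    Keyboard.print(value);
    // 3. refresh the display (F5 previews in OpenSCAD)
    Keyboard.press(KEY_F5);
    Keyboard.releaseAll();
    // 4. move the cursor back toward the starting point
    for (int i = 0; i < 4; i++) {
      Keyboard.press(KEY_LEFT);
      Keyboard.releaseAll();
    }
  }
}
```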

Here’s an article in favor of OpenSCAD and here’s one against it. This article does a good job of explaining OpenSCAD.

Continue reading “Add Intuitiveness to OpenSCAD With Encoders”