Debunking Moon Landing Denial With An Arduino And Science

It’s sad that nearly half a century after the achievements of the Apollo program we’re still arguing with a certain subset of people who insist it never happened. Poring through the historical record looking for evidence that proves the missions couldn’t possibly have occurred has become a sad little cottage industry, and debunking the deniers is a distasteful but necessary ongoing effort.

One particularly desperate denier theory holds that fully spacesuited astronauts could never have exited the tiny hatch of the Lunar Excursion Module (LEM). [AstronomyLive] fought back against this tendentious claim in a clever way: with a DIY LIDAR scanner built to measure Apollo artifacts in museums. The hardware is straightforward, with a Garmin LIDAR-Lite V3 rangefinder mounted on a couple of servos to make a quick pan-tilt head. The rig has a decidedly compliant look to it, with the sensor flopping around a bit as the servos move. But for the purpose, it seems perfectly fine.
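
The write-up doesn't spell out the firmware, but turning each pan/tilt/range sample into a 3D point is just a spherical-to-Cartesian conversion. Here's a minimal Python sketch, assuming a hypothetical log file of pan, tilt, and distance triples:

```python
import numpy as np

def sample_to_xyz(pan_deg, tilt_deg, distance):
    """Convert one pan/tilt/range reading into a Cartesian point,
    with the sensor at the origin and tilt measured up from horizontal."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    x = distance * np.cos(tilt) * np.cos(pan)
    y = distance * np.cos(tilt) * np.sin(pan)
    z = distance * np.sin(tilt)
    return x, y, z

# Hypothetical log format: one "pan tilt distance" triple per line
samples = np.loadtxt("scan_log.txt")
points = np.array([sample_to_xyz(p, t, d) for p, t, d in samples])
np.savetxt("point_cloud.xyz", points)  # plain XYZ, readable by most viewers
```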

[AstronomyLive] took the scanner to two separate museum exhibits, one to scan a LEM hatch and one to scan the suit Gene Cernan, the last man to stand on the Moon so far, wore while training for Apollo 17. With the LEM hanging from the rafters, the scanner was stretching its abilities somewhat, so the point clouds he captured were a little on the low-res side. But in the end, a virtual Cernan was able to pass through the virtual LEM hatch, as expected.
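
The fit test itself comes down to comparing the two clouds. As a crude first pass, you could just compare their extents; a numpy sketch, with hypothetical file names standing in for the actual scans:

```python
import numpy as np

# Hypothetical exports: one "x y z" point per line, in the same units
suit = np.loadtxt("cernan_suit.xyz")
hatch = np.loadtxt("lem_hatch.xyz")

# Axis-aligned extents of each cloud: (width, depth, height)
suit_size = suit.max(axis=0) - suit.min(axis=0)
hatch_size = hatch.max(axis=0) - hatch.min(axis=0)
print("suit extents: ", suit_size)
print("hatch extents:", hatch_size)
```

The real demonstration animates the suit model through the hatch model, but even a simple extents check makes the point.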

Sadly, such evidence will only ever be convincing to those who need no convincing; the willfully ignorant will always find ways to justify their position. So let’s just celebrate the achievements of Apollo.

Continue reading “Debunking Moon Landing Denial With An Arduino And Science”

Make Use Of Your Drone Video With WebODM

If you've ever watched the original Star Trek, you know Captain Kirk and crew spend a lot of time mapping new parts of the galaxy. In fact, at least one episode centered on them taking images of some new part of space. It might not be new, but if you have a drone, you have probably accumulated a lot of frames of aerial imagery from around your house (or wherever you fly).

WebODM allows you to create georeferenced maps, point clouds, and textured 3D models from your drone footage. The software is really an integration and workflow manager for OpenDroneMap, which does most of the heavy lifting.
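
WebODM itself is driven from the browser, but the underlying processing node can also be scripted. A minimal sketch using the pyodm client, assuming a NodeODM instance (like the one WebODM manages) listening on localhost:3000 and some hypothetical image names:

```python
from pyodm import Node  # pip install pyodm

# A NodeODM processing node; host and port here are assumptions
node = Node("localhost", 3000)

# Hypothetical drone stills; ODM wants overlapping, ideally geotagged photos
task = node.create_task(
    ["images/DJI_0001.JPG", "images/DJI_0002.JPG", "images/DJI_0003.JPG"],
    {"dsm": True, "orthophoto-resolution": 4},
)
task.wait_for_completion()
task.download_assets("./results")  # orthophoto, point cloud, textured mesh
```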

Continue reading “Make Use Of Your Drone Video With WebODM”

Blending Real Objects With 3D Prints

It’s very subtle, but if you saw [Greg]’s 3D-printed stone-to-Lego adapter while walking down the street, it might just cause you to stop mid-stride.

This modification of real objects begins with [Greg] taking dozens of pictures of the target object from many different angles. These pictures are then imported into Agisoft PhotoScan, which converts them into a very high-resolution, full-color point cloud.
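
PhotoScan is normally driven from its GUI, but the Pro edition also ships a Python API, so the photos-to-point-cloud step can be scripted. A rough sketch, with the caveat that the exact API names vary between PhotoScan releases and the file names here are hypothetical:

```python
import PhotoScan  # only available inside PhotoScan Pro's bundled interpreter

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Hypothetical photo set: dozens of overlapping shots of the stone
chunk.addPhotos(["photos/stone_%02d.jpg" % i for i in range(1, 41)])

# Find matching features across photos, then solve the camera positions
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# Densify the sparse alignment into the full-color point cloud
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
doc.save("stone.psz")
```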

After precisely measuring the real-world dimensions of the object to be modeled, [Greg] imported his point cloud into Blender and got started on the actual 3D modeling task. By reconstructing the original sandstone block in Blender, [Greg] was also able to model the mating Lego parts. After subtracting everything above the Lego parts from the model, [Greg] was left with a bizarre-looking adapter that joins Lego pieces to a real-life stone block.
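
The subtraction itself is a straightforward boolean difference. In Blender's Python API that step might look something like this, with hypothetical object names:

```python
import bpy  # run from Blender's scripting workspace (Blender 2.8+ API)

# Hypothetical object names for the two models in the scene
stone = bpy.data.objects["StoneScan"]
lego = bpy.data.objects["LegoParts"]

# Boolean modifier: carve the Lego geometry out of the stone model
mod = stone.modifiers.new(name="LegoCut", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = lego

# Make the stone active, then bake the modifier into real geometry
bpy.context.view_layer.objects.active = stone
bpy.ops.object.modifier_apply(modifier=mod.name)
```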

It’s a very, very cool project that demonstrates how good [Greg] is at capturing real objects and modeling them inside a computer. After the break you can see a walkthrough of his process, an impressive amount of expertise wrapped up in making the world just a little more strange.

Continue reading “Blending Real Objects With 3D Prints”

Visualizing Water Droplets And Building A CT Scanner

With his nerves recovered from presenting his project to an absurdly large crowd at this year’s SIGGRAPH, [James] is finally ready to share his method of visualizing mixing fluids via optical tomography with a much larger audience: the readership of Hackaday.

[James]’ project focuses on the problem of modeling mixing liquids from a multi-camera setup. The hardware is fairly basic, just 16 consumer-level video cameras arranged in a semicircle around a glass beaker full of water.

When [James] injects a little dye into the water, the diffusing cloud is captured by the array of Sony camcorders. The images from these camcorders are sent through an algorithm that selects one point in the cloud and performs a random walk to find every other point in the cloud of liquid dye.
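
That walk is easiest to picture on a voxel grid: start at a seed voxel known to contain dye, take random steps, and refuse any step into clear water; every voxel the walk visits belongs to the cloud. A toy Python version of the idea (the real reconstruction is far more involved):

```python
import numpy as np

def random_walk_cloud(volume, seed, threshold=0.1, steps=200_000):
    """Collect voxels connected to `seed` by randomly walking through
    cells whose dye density exceeds `threshold`."""
    rng = np.random.default_rng(0)
    moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]])
    pos = np.array(seed)
    visited = {tuple(pos)}
    for _ in range(steps):
        candidate = pos + moves[rng.integers(6)]
        inside = (candidate >= 0).all() and (candidate < volume.shape).all()
        if inside and volume[tuple(candidate)] > threshold:
            pos = candidate  # only ever step into dye-filled cells
            visited.add(tuple(pos))
    return visited

# Toy data: a cube of "dye" in a 32^3 grid of clear water
vol = np.zeros((32, 32, 32))
vol[10:20, 10:20, 10:20] = 1.0
cloud = random_walk_cloud(vol, seed=(15, 15, 15))
print(f"{len(cloud)} voxels reached by the walk")
```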

The result of all this computation is a literal volumetric cloud, allowing [James] to render, slice, and cut the cloud of dye any way he chooses. You can see the videos produced from this very cool build after the break.
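
Once the reconstruction lives in a regular 3D array, slicing really is just array indexing. A short matplotlib sketch, assuming a hypothetical saved density grid:

```python
import numpy as np
import matplotlib.pyplot as plt

vol = np.load("dye_volume.npy")  # hypothetical reconstructed density grid

# Pull the middle slice along the z axis and show it as an image
mid = vol.shape[2] // 2
plt.imshow(vol[:, :, mid], cmap="magma", origin="lower")
plt.colorbar(label="dye density")
plt.title("Central slice through the reconstructed cloud")
plt.show()
```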

Continue reading “Visualizing Water Droplets And Building A CT Scanner”