Even a relatively low-end desktop 3D printer will have no problems running off custom enclosures or parts for your latest project, and for many, that’s more than worth the cost of admission. But if you’re willing to put in the time and effort to become proficient with the necessary CAD tools, even a basic 3D printer is capable of producing complex gadgets and mechanisms that would be extremely time-consuming or difficult to produce with traditional manufacturing techniques.
Once you find yourself at this stage of your 3D printing career, there’s something of a fork in the road. The most common path is to design parts which are printed and then assembled with glue or standard fasteners. This is certainly the easiest way forward, and lets you use printed parts in a way that’s very familiar. It can also be advantageous if you’re looking to meld your own printed parts with existing hardware.
The other option is to fully embrace the unique capabilities of 3D printing. Forget about nuts and bolts, and instead design assemblies which snap-fit together. Start using more organic shapes and curves. Understand that objects are no longer limited to simple solids, and can have their own complex internal geometries. Does a hinge really need to be two separate pieces linked with a pin, or could you achieve the desired action by capturing one printed part inside of another?
If you’re willing to take this path less traveled, you may one day find yourself creating designs such as this fully 3D printed turntable by Brian Brocken. Intended for photographing or 3D scanning small objects without breaking the bank, the design doesn’t use ball bearings, screws, or even glue. Every single component is printed and fits together with either friction or integrated locking features. This is a functional device that can be printed and put to use anywhere, at any time. You could print one of these on the International Space Station and not have to wait on an order from McMaster-Carr to finish it.
With such a clever design, I couldn’t help but take a closer look at how it works, how it prints, and perhaps even some ways it could be adapted or refined going forward.
Having picked out a particularly well-formed starfruit for his project, [Frank] didn’t want to spend an inordinately long time attempting to recreate the organic lumps and bumps in modelling software. Instead, Meshroom was used to create a model through photogrammetry. After several failed attempts, success was achieved by using a textured rotating table as a background, with the starfruit painted in matte grey and given a final dusting of black speckle. This gave the software enough visual cues to accurately model the fruit’s geometry.
With a 3D model to hand, Autodesk’s Slicer for Fusion 360 was then used to generate a model that could be constructed out of flat lasercut pieces. The cutting outlines were then generated and passed to Rhino for final tweaking. With everything ready, parts were cut out of plywood and a small mockup of a potential lamp design was created. [Frank] is currently workshopping the design with the inhabitants of the dining room, prior to the final build.
If you don’t have access to a 3D scanner, you can get a lot done with photogrammetry. Basically, you take a bunch of pictures of an object from different angles, and then stitch them together with software to create a 3D model. For best results, you need consistent, diffuse lighting, an unchanging background, and a steady camera.
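If you’re wondering how many pictures “a bunch” works out to, a little arithmetic helps. As a rough sketch (the 60% overlap figure and the function names here are our own assumptions, not from any particular package), you can estimate the shot count from how much of the object each photo covers and how much overlap you want between neighbours:

```python
import math

# Back-of-the-envelope shot planner for a photogrammetry session.
# The overlap target and helper names are illustrative assumptions.

def shots_per_ring(fov_deg: float, overlap: float) -> int:
    """Photos needed for one full 360-degree ring around the object,
    given the horizontal angle each photo covers and the desired
    overlap fraction between neighbouring shots."""
    new_ground = fov_deg * (1.0 - overlap)  # fresh coverage per shot
    return math.ceil(360.0 / new_ground)

def total_shots(fov_deg: float, overlap: float, rings: int) -> int:
    """Total photos for several camera heights (elevation rings)."""
    return rings * shots_per_ring(fov_deg, overlap)

# e.g. each photo spans ~40 degrees of the object, 60% overlap, 3 heights
print(total_shots(40.0, 0.6, 3))  # 23 shots per ring, 69 in total
```

Note that a 10° turntable step (36 photos per ring) falls right out of this kind of estimate, which is why that number keeps showing up in DIY rigs.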
[Eric] can move the camera up and down the arc of the boom to get all the Z-positions he wants. The platform is marked every 10°, with a fixed pointer to line the marks up against for consistent camera positioning. He was pleasantly surprised by the results, which we agree are outstanding.
We always learn a lot from [Eric]’s videos, and this one’s no exception. Case in point: he makes a cardboard mock-up by laying out the pieces, and uses that to make a pattern for the recycled plywood and melamine version. In the photogrammetry video, he covers spray paint techniques to make objects reflect as little light as possible so the details don’t get lost.
Elliot Williams and Mike Szczys take a look at advances in photogrammetry (building 3D models out of many photographs from a regular camera), a delay pedal that’s both aesthetically and aurally pleasing, and the power of AI to identify garden slugs. Mike interviews Scotty Allen while walking the streets and stores of the Shenzhen electronics markets. We delve into SD card problems with the Raspberry Pi, putting industrial controls on your desk, building a WiFi-connected Geiger counter, and the sad truth about metal 3D printing.
Take a look at the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!
Not long ago, photogrammetry — the process of stitching multiple photographs taken from different angles into a 3D whole — was hard stuff. Nowadays, it’s easy. [Mikolas Zuza] over at Prusa Printers has a guide showing off cutting-edge open-source software that’s not only more powerful, but also easier to use. They’ve also produced a video, which we’ve embedded below.
Basically, this is a guide to using Meshroom, which is based on the AliceVision photogrammetry framework. AliceVision is a research platform, so it’s got tremendous capability but doesn’t necessarily focus on the user experience. Enter Meshroom, which makes that power accessible.
Meshroom does all sorts of cool tricks, like showing you how the 3D reconstruction looks as you add more images to the dataset, so that you’ll know where to take the next photo to fill in incomplete patches. It can also reconstruct from video, say if you just walked around the object with a camera running.
The final render is computationally intensive, but AliceVision makes good use of CUDA on Nvidia graphics cards, so you can cut your overnight renders down to a few hours if you’ve got the right hardware. But even if you have to wait for the results, they’re truly impressive. And best of all, you can get started building up your 3D model library using nothing more than that phone in your pocket.
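For those overnight renders, it helps that Meshroom also ships a headless command-line entry point, so you can queue up a reconstruction without touching the GUI. The sketch below drives it from Python; the `meshroom_batch` name and its flags are based on recent Meshroom releases and worth double-checking against your installed version, and the directory paths are placeholders:

```python
import subprocess

# Sketch of running a Meshroom reconstruction headlessly. The
# meshroom_batch entry point and its --input/--output flags reflect
# recent releases; verify against your own install. Paths are
# placeholders, not from the article.

def build_meshroom_cmd(image_dir: str, output_dir: str) -> list[str]:
    """Assemble the batch-reconstruction command without running it."""
    return ["meshroom_batch", "--input", image_dir, "--output", output_dir]

def reconstruct(image_dir: str, output_dir: str) -> None:
    """Kick off the full pipeline; this is the long, GPU-hungry part."""
    subprocess.run(build_meshroom_cmd(image_dir, output_dir), check=True)

# usage (placeholder directories):
#   reconstruct("./photos", "./model_out")
```

Splitting the command construction from the actual run makes it easy to log or dry-run the invocation before committing your GPU to a few hours of work.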
If you want to know how to use the models that come out of photogrammetry, check out [Eric Strebel]’s video. And if all of this high-tech software foolery is too much for you, try a milk-based 3D scanner.
In its most basic sense, photogrammetry refers to taking measurements from photographs. In the sense being discussed here, it more precisely refers to the method of creating a 3D model from a series of photographs of a physical object. By taking appropriate images of an object, and feeding them through the right software, it’s possible to create a digital representation of the object without requiring any special hardware other than a camera.
[Eric] shares several tips and tricks for getting good results. Surface preparation is key, with the aim being to create a flat finish to avoid reflections causing problems. A grey primer is first sprayed on the object, followed by a dusting of black spots, which helps the software identify the object’s contours. Camera settings are also important, with wide apertures used to create a shallow depth-of-field that helps the object stand out from the background.
With the proper object preparation and camera technique taken care of, the hard work is done. All that’s then required is to feed the photos through the relevant software. [Eric] favors Agisoft Metashape, though there are a variety of packages that offer this functionality.
Those just starting out in 3D printing often believe that their next major purchase after the printer will be a 3D scanner. If you’re going to get something that can print a three-dimensional model, why not get something that can create said models from real-world objects? But the reality is that only a small percentage ever follow through with buying the scanner, primarily because they are notoriously expensive, but also because the scanned models often require a lot of cleanup work to be usable anyway.
The general idea is to place a platform on the stepper motor, and have the Arduino rotate it 10 degrees at a time in front of a camera on a tripod. The camera is triggered by an IR LED on one of the Arduino’s digital pins, so that it takes a picture each time the platform rotates. There are configurable values to give the object time to settle down after rotation, and a delay to give the camera time to take the picture and get ready for the next one.
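The one wrinkle in that scheme is the arithmetic: 10° rarely maps to a whole number of motor steps, so naive per-move rounding slowly drifts over a full revolution. The real firmware runs on the Arduino, but the scheduling trick can be modelled in a few lines of Python (the function name and the 3200 steps-per-revolution figure are our own assumptions for illustration). Rounding against a running cumulative target makes the errors cancel instead of accumulating:

```python
# Model of a drift-free turntable step schedule. The actual project's
# firmware is Arduino code; names and the microstepping figure below
# are illustrative assumptions, not taken from the build.

def step_schedule(steps_per_rev: int, stops: int) -> list[int]:
    """Motor steps to issue at each stop so the platform lands back
    exactly on its starting mark after `stops` moves."""
    schedule, issued = [], 0
    for i in range(1, stops + 1):
        target = round(i * steps_per_rev / stops)  # ideal cumulative position
        schedule.append(target - issued)           # steps for this move
        issued = target
    return schedule

# e.g. a 1.8-degree motor at 16x microstepping, stopping every 10 degrees:
moves = step_schedule(3200, 36)  # 3200/36 is about 88.9 steps per stop
assert sum(moves) == 3200        # a mix of 88- and 89-step moves, no net drift
```

The same idea shows up anywhere a fractional increment has to be spread over integer hardware steps, from stepper scheduling to line-drawing algorithms.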
Once all the pictures have been taken, they are loaded into special software to perform what’s known as photogrammetry. By compiling all of the images together, the software is able to generate a fairly accurate 3D model. It might not have the resolution to make a 1:1 copy of a broken part, but it can help shave some modeling time when working with complex objects.