Late last year, artist [Steve Messam]'s project "Whistle" involved 16 steam engine whistles around Newcastle that would fire at different parts of the day over three months. The goal of the project was to bring back the distinctive sound of the train whistles that used to be a fixture of daily life, and to do so as authentically as possible. [Steve] has shared details on the construction and testing of the whistles, which as it turns out was a far more complex task than one might expect. The installation made use of modern technology like the Raspberry Pi and cellular data networks, but when it came to manufacturing the whistles themselves, the tried and true ways were best: casting in brass before machining on a lathe to finish.
The original whistles are a peek into a different era. The bell type whistle has three major components: a large bell at the top, a cup at the base, and a central column through which steam is piped. These whistles were usually made by apprentices, as they required a range of engineering and manufacturing skills to produce correctly, but were not themselves a critical mechanical component.
In the original whistle shown here, pressurized steam escapes from within the bottom cup and exits through the thin gap (barely visible in the image, it's very narrow) between the cup and the flat, shelf-like section of the central column. That ring-shaped jet of steam is split by the lip of the bell above it, and that is what creates the sound. When it comes to getting the right performance, everything matters. The pressure of the steam, the size of the gap, the sharpness of the bell's lip, the spacing between the bell and the cup, and the shape of the bell itself all play a role. As a result, while the basic design and operation of the whistles were well understood, there was a lot of work to be done to reproduce whistles that not only operated reliably in all types of weather using compressed air instead of steam, but did so while still producing an authentic re-creation of the original sound. As [Steve] points out, "with any project that's not been done before, you really can't do too much testing."
Embedded below is one such test. It’s slow-motion footage of what happens when the whistle fires after filling with rainwater. You may want to turn your speakers down for this one: locomotive whistles really were not known for their lack of volume.
Most people are familiar with the idea that machine learning can be used to detect things like objects or people, but anyone who's not clear on how that process actually works should check out [Kurokesu]'s example project for detecting pedestrians. It goes into detail on exactly what software is used, how it is configured, and how to train it with a dataset.
The application uses a USB camera and the back-end work is done with Darknet, which is an open source framework for neural networks. Running on that framework is the YOLO (You Only Look Once) real-time object detection system. To get useful results, the system must be trained on large amounts of sample data. [Kurokesu] explains that while pre-trained networks can be used, it is still necessary to fine-tune the system by adding a dataset which more closely models the intended application. Training is itself a bit of a balancing act. A system that has been overly trained on a model dataset (or trained on too small a dataset) will suffer from overfitting, a condition in which the system ends up being too picky and unable to usefully generalize. In terms of pedestrian detection, this results in false negatives — pedestrians that don't get flagged because the system has too strict an idea of what a pedestrian should look like.
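For a sense of what the inference side looks like in practice, here's a minimal sketch (not taken from [Kurokesu]'s write-up, which drives Darknet directly) that runs a Darknet-trained YOLO model through OpenCV's DNN module and flags only the "person" class. The file names, the 416-pixel input size, and the 0.5 confidence threshold are illustrative assumptions.

```python
# Sketch: pedestrian detection with a Darknet-trained YOLO model via OpenCV DNN.
# "yolov3.cfg" / "yolov3.weights" and the 0.5 threshold are assumptions.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)                     # USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # YOLO expects a square, normalized input blob
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)

    h, w = frame.shape[:2]
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = scores[class_id]
            # class 0 is "person" in the COCO list used by stock YOLO weights
            if class_id == 0 and confidence > 0.5:
                cx, cy, bw, bh = det[0:4] * np.array([w, h, w, h])
                x, y = int(cx - bw / 2), int(cy - bh / 2)
                cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)), (0, 255, 0), 2)

    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```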
[Kurokesu]’s walkthrough on pedestrian detection is great, but for those interested in taking a step further back and rolling their own projects, this fork of Darknet contains YOLO for Linux and Windows and includes practical notes and guides on installing, using, and training from a more general perspective. Interested in learning more about machine learning basics? Don’t forget Google has a free online crash course to get you up to speed.
[Maarten Tromp] recently took the time to document some of the unusual and creative electronic projects he received as gifts over the years. These gadgets were created in the early 2000s and still work flawlessly today. Two of our favorites are shown here: Hardware Tetris Unit (shown in the image above) and Heap of Electronic Parts.
Heap of Electronic Parts was a kind of hardware puzzle and certainly lives up to its name. It’s a bunch of parts soldered in a mystifying way to the backs of four old EPROMs — the chips with the little window through which UV is used to erase the contents. Assured that the unit really did have a function, [Maarten] eventually figured out that when placed in sunlight, the device ticks, buzzes, and squeals. [Jeroen] had figured out that the EPROMs could act like tiny solar cells when placed in sunlight, and together the four generate just enough power to drive an oscillator connected to a piezo speaker. It still chirps happily away, even today.
Hardware Tetris Unit was a black box intended to be plugged into a serial port. With a terminal opened using the correct serial port settings, a fully-functional Tetris game using ASCII-art graphics could be played. It was even self-powered from the serial port pins.
Inside Hardware Tetris is an AVR microcontroller with some level shifters, and the source code and schematics are available for download. Fourteen years later, computers no longer have hardware serial ports, but [Maarten] says a USB-to-serial converter worked just fine and the device still functions perfectly.
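For anyone curious what talking to a gadget like this looks like from the modern side of a USB-to-serial adapter, here's a minimal pyserial sketch. The port name, the 9600-8-N-1 settings, and the idea of asserting DTR/RTS to help feed a pin-powered device are all assumptions; the write-up doesn't list the actual parameters.

```python
# Sketch: open a USB-to-serial adapter and dump whatever ASCII the gadget sends.
# Port name and 9600-8-N-1 are assumptions; the real settings depend on the firmware.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, bytesize=8,
                   parity=serial.PARITY_NONE, stopbits=1, timeout=1) as port:
    port.dtr = True                    # assumption: keep handshake lines asserted
    port.rts = True                    # so a pin-powered device has something to scavenge
    port.write(b"\r")                  # assumption: nudge the device to redraw
    while True:
        data = port.read(64)           # grab whatever ASCII-art arrives
        if data:
            print(data.decode("ascii", errors="replace"), end="")
```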
There are a couple more devices documented on [Maarten]’s gifts page, including a Zork-inspired mini text adventure and a hardware board that does some trippy demos on an old Nokia color LCD. [Maarten]’s friend [Jeroen Domburg] (aka Sprite_tm) had a hand in creating most of the gadgets, and he’s someone whose brilliant work we have had the good fortune to feature many times in the past.
An ultrasonic knife is a blade that vibrates a tiny amount at a high frequency, giving the knife edge minor superpowers. It gets used much like any other blade, but it becomes far easier to cut through troublesome materials like rubber or hard plastics. I was always curious about them, and recently made my own by modifying another tool. It turns out that an ultrasonic scaling tool intended for dental use can fairly easily be turned into a nimble little ultrasonic cutter for fine detail work.
I originally started thinking about an ultrasonic knife to make removing supports from SLA 3D prints easier. SLA resin prints are made from a smooth, hard plastic and can sometimes require a veritable forest of supports. These supports are normally removed with flush cutters, or torn off if one doesn’t care about appearances, but sometimes the density of supports makes this process awkward, especially on small objects.
I imagined that an ultrasonic blade would make short work of these pesky supports, and for the most part, I was right! It won't effortlessly cut through a forest of support bases like a hot knife through butter, but it certainly makes it easier to remove tricky supports from the model itself. Specifically, it excels at slicing through fine areas while preserving delicate features.
We're not sure what it is, but something about LEGO and music just goes together like milk and cookies when it comes to DIY musical projects. [Paul Wallace]'s Lego Music project is a sequencer that uses the colorful plastic pieces to build and control sound, but there's a twist. The blocks aren't snapped onto anything; the system is entirely visual. A computer running OpenCV uses a webcam to watch the arrangement of blocks, and overlays them onto a virtual grid where the positions of the pieces are used as inputs for the sequencer. The Y axis represents pitch, and the X axis represents time.
Embedded below are two videos. The first demonstrates how the music changes based on which blocks are placed, and where. The second is a view from the software's perspective, and shows how the vision system processes the video by picking out the colored blocks, then using their positions to change different values that affect the composition as a whole.
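As a rough illustration of the vision half of such a system (not [Paul]'s actual code), the sketch below thresholds one brick color in HSV, finds the blobs, and maps each blob's center onto a virtual grid where the column is the time step and the row is the pitch. The color range and the 8×8 grid size are assumptions that would need tuning for real lighting and a real baseplate.

```python
# Sketch: pick out colored bricks with OpenCV and map them onto a sequencer grid.
# HSV range and grid size are illustrative assumptions.
import cv2
import numpy as np

GRID_COLS, GRID_ROWS = 8, 8
LOWER_RED = np.array([0, 120, 80])      # hypothetical range for red bricks
UPPER_RED = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature

    steps = [[] for _ in range(GRID_COLS)]
    for c in contours:
        if cv2.contourArea(c) < 100:        # ignore specks
            continue
        x, y, bw, bh = cv2.boundingRect(c)
        col = min(GRID_COLS - 1, (x + bw // 2) * GRID_COLS // w)   # time step
        row = min(GRID_ROWS - 1, (y + bh // 2) * GRID_ROWS // h)   # vertical position
        steps[col].append(GRID_ROWS - 1 - row)                     # higher block = higher pitch

    print(steps)   # each column's list of pitches would feed the sequencer
cap.release()
```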
[Dan] feels that papier-mâché is an under-utilized and under-rated medium, and he puts out some stunning work on his blog as well as his YouTube channel. What's great to see are his frank descriptions and explanations of what does and doesn't work, and he's not afraid to try new things and explore different ways to approach problems.
Enterprising hackers may not pick papier-mâché as their first choice for creating custom enclosures, but it can be done, and the accessibility and ease of use of the medium are certainly undeniable. One never knows when a tool or technique may come in handy.
Some time ago, [Trammell Hudson] took a shot at creating a tool that unfolds 3D models in STL format and outputs a color-coded 2D pattern that can be cut out using a laser cutter. With a little bending and gluing, the 3D model can be re-created out of paper or cardboard.
There are, of course, other more full-featured tools for unfolding 3D models: Pepakura is used by many, but it is not free and is Windows-only. There is also a Blender extension called Paper Model for exporting 3D shapes as paper models.
What’s interesting about [Trammell]’s project are the things he discovered while making it. The process of unfolding an STL may be conceptually simple, but the actual implementation is a bit tricky in ways that have little to do with number crunching.
For example, in a logical sense it doesn't matter much where the software chooses to start the unfolding process, but in practice some start points yield much tighter groups of shapes that are easier to work with. Also, the software doesn't optimize folding patterns, so it will sometimes split a shape along a perfectly logical (but non-intuitive to a human) line, and it can be difficult to figure out which pieces are supposed to attach where. It turns out that it's actually quite challenging to turn a 3D model into an unfolded shape that still carries visual cues or resemblances to the original, and adding things like glue tabs in sensible places isn't trivial, either. The software remains in beta, but those who are interested can find it hosted on GitHub.
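To make the "conceptually simple" part concrete, here's a small sketch (not [Trammell]'s code) of the step that comes before any flattening: load an STL, build a face-adjacency graph from shared edges, and walk it breadth-first from an arbitrary start face. That start face is exactly the choice that, as noted above, barely matters in theory but changes how tidy the resulting layout is. The numpy-stl library and the file name are assumptions, and the actual 2D flattening and glue tabs are left out.

```python
# Sketch: face-adjacency graph and BFS visit order, the skeleton of an unfolder.
# numpy-stl and "model.stl" are assumptions; no flattening math is shown.
from collections import defaultdict, deque
from stl import mesh  # pip install numpy-stl

m = mesh.Mesh.from_file("model.stl")

# Map each undirected edge (pair of rounded vertices) to the faces sharing it
edge_to_faces = defaultdict(list)
for face_idx, tri in enumerate(m.vectors):
    pts = [tuple(round(float(c), 6) for c in v) for v in tri]
    for a, b in ((0, 1), (1, 2), (2, 0)):
        edge_to_faces[frozenset((pts[a], pts[b]))].append(face_idx)

# Adjacency list: faces connected by a shared edge
adjacent = defaultdict(set)
for faces in edge_to_faces.values():
    if len(faces) == 2:
        a, b = faces
        adjacent[a].add(b)
        adjacent[b].add(a)

# Breadth-first walk from face 0; the visit order is the order in which faces
# would be laid flat next to an already-unfolded neighbour
start = 0
order, seen, queue = [], {start}, deque([start])
while queue:
    face = queue.popleft()
    order.append(face)
    for nxt in adjacent[face]:
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

print(f"{len(order)} of {len(m.vectors)} faces reachable from face {start}")
```

Picking a different `start` face (or a smarter spanning tree than plain BFS) is where the practical differences show up: the graph is the same, but the resulting islands of connected shapes can be much easier or harder to cut, fold, and glue.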