A black, rectangular box is shown, with a number of waterproof screw connectors on the front.

A Ruggedized Raspberry Pi For Sailors

Nautical navigation has a long history of innovation, from the compass and chronometer to today’s computer-driven autopilot systems. Electronics and saltwater don’t get along, however, which has created a need for rugged, waterproof computers, a category to which [Matti Airas] of Hat Labs has contributed with the open-source HALPI2.

Powered by the Raspberry Pi Compute Module 5, the electronics are housed in a heavy-duty aluminium enclosure that doubles as a heat sink and closes with a waterproof seal. It has a wide variety of external connectors, all likewise waterproofed: power, HDMI, NMEA 2000 and NMEA 0183, Ethernet, two USB 3.0 ports, and an external WiFi or Bluetooth antenna. The external ports connect to the carrier board through short extension cables, and the carrier board itself offers even more: two HDMI connectors, two MIPI connectors, four USB ports, and a full GPIO header. The case has plugs for installing additional PG7 or SP13 waterproof connectors, so if the existing external connectors aren’t enough, you can add your own.

Besides physical ruggedness, the design is also resistant to electrical damage. It runs on anything from 10 to 32 volts and is protected by a fuse. A supercapacitor bank rides through brief power glitches, and if an outage lasts more than five seconds, it keeps the system powered for another 30-60 seconds while the operating system shuts down safely. The HALPI2 can also accept power over NMEA 2000, in which case it has the option to limit its current draw to 0.9 amps.
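To get a rough sense of what a hold-up spec like that implies, here’s a quick back-of-the-envelope calculation; the power draw, starting voltage, and cutoff voltage are assumed figures for illustration, not HALPI2 specifications.

```python
# Rough sizing estimate for a supercapacitor hold-up bank.
# All numbers are assumed for illustration, not HALPI2 specs.

P_load = 10.0   # assumed average draw of a CM5 plus peripherals, watts
t_hold = 60.0   # desired hold-up time, seconds
V_start = 10.0  # assumed bank voltage when input power drops, volts
V_min = 5.0     # assumed minimum voltage the regulator can still work from, volts

energy_needed = P_load * t_hold  # joules
# Usable energy in a capacitor between two voltages: E = 1/2 * C * (V1^2 - V2^2)
C_required = 2 * energy_needed / (V_start**2 - V_min**2)

print(f"Energy to ride out {t_hold:.0f} s at {P_load:.0f} W: {energy_needed:.0f} J")
print(f"Capacitance required: {C_required:.0f} F")  # about 16 F with these numbers
```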

The design was originally created to handle navigation, data logging, and other boating tasks, so it’s been configured for and tested with OpenPlotter. Its potential uses are broader than that, however, and it’s also been tested with Raspberry Pi OS for more general projects. Reading through its website, the most striking thing is how thoroughly this is documented: the site describes everything from the LED status indicators to the screws that close the housing – even a template for drilling mounting holes.

Given the quality of this project, it probably won’t surprise you to hear this isn’t [Matti]’s first piece of nautical electronics, having previously made Sailor HATs for the ESP32 and the Raspberry Pi.

A lathe is shown on a tabletop. Instead of a normal lathe workspace, there is an XY positioning platform in front of the chuck, with two toolposts mounted on the platform. Stepper motors are mounted on the platform to drive it. The lathe has no tailstock.

Turning A Milling Machine Into A Lathe

If you’re planning to make a metalworking lathe out of a CNC milling machine, you probably don’t expect getting a position sensor to work to be your biggest challenge. Nevertheless, this was [Anthony Zhang]’s experience. Admittedly, the milling machine’s manufacturer sells a conversion kit, which greatly simplifies the more obviously difficult steps, but getting it to cut threads automatically took a few hacks.

The conversion started with a secondhand Taig MicroMill 2019DSL CNC mill, which was well-priced enough to be bought specifically for the purpose. Taig’s conversion kit includes the spindle, tool posts, mounting hardware, and other necessary parts, and the modifications were simple enough to take only a few hours of disassembly and reassembly. The finished lathe reuses the motors and control electronics from the mill, and the milling motor drives the spindle through a set of pulleys. The Y-axis assembly isn’t used, but the X- and Z-axes hold the tool post in front of the spindle.

The biggest difficulty was getting the spindle indexing sensor working, which is essential for cutting accurate threads. [Anthony] started with Taig’s sensor, but since it was designed for a lathe controller, there was no guarantee it would work with the mill’s motor controller. Rather than plug it in and hope for the best, he disassembled both the sensor and the controller to reverse-engineer the wiring.

He found that it was an inductive sensor which detected a steel insert in the spindle’s pulley, and that a slight modification to the controller would let the two work together. In the end, however, he decided against using it, since it would have taken up the controller’s entire I/O port. Instead, [Anthony] wired his own I/O connector, which interfaces with a commercial inductive sensor and the end-limit switches. As a side benefit, the new sensor’s mounting doesn’t block access to the pulley’s drive belt the way the original’s did.

The end result was a small, versatile CNC lathe with enough accuracy to cut useful threads with some care. If you aren’t lucky enough to get a Taig to convert, there are quite a few people who’ve built their own CNC lathes, ranging from the relatively simple to the extremely advanced.

A thick, rectangular device with rounded corners is shown, with a small screen in the upper half, above a set of selection buttons.

Further Adventures In Colorimeter Hacking

One of the great things about sharing hacks is that sometimes one person’s work inspires someone else to take it even further. A case in point is [Ivor]’s colorimeter hacking (parts two and three), which started with some relatively simple request spoofing to install non-stock firmware, and expanded from there until he had complete control over the hardware.

After reading [Adam Zeloof]’s work on replacing the firmware of a cosmetics spectrophotometer with general-purpose firmware, [Ivor] bought two of these colorimeters, one as a backup. He started with [Adam]’s method of altering the request sent to the update server, but the only serial number he could find belonged to a quality-control unit. Spoofing it installed the quality-control firmware, which threw an error on his device. More searching turned up another serial number that gave him the base firmware, letting him dump and compare the cosmetic, quality-control, and base firmware images.
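For a feel of what this sort of request spoofing looks like, here is a minimal sketch; the update URL, query parameter, and serial number are placeholders, since the details of the vendor’s update protocol that [Adam] and [Ivor] worked out aren’t spelled out here.

```python
# Hypothetical sketch of spoofing a device serial number in a firmware
# update request. The URL, parameter names, and serials are placeholders only.
import requests

UPDATE_URL = "https://updates.example.com/firmware"  # placeholder endpoint
SPOOFED_SERIAL = "QC-0000000"                        # serial of the unit whose firmware you want

def fetch_firmware(serial: str) -> bytes:
    # The update server decides which firmware image to serve based on the
    # serial number it is given, so reporting a different unit's serial
    # returns that unit's firmware instead of your own.
    response = requests.get(UPDATE_URL, params={"serial": serial}, timeout=30)
    response.raise_for_status()
    return response.content

if __name__ == "__main__":
    image = fetch_firmware(SPOOFED_SERIAL)
    with open("spoofed_firmware.bin", "wb") as f:
        f.write(image)
    print(f"Downloaded {len(image)} bytes of firmware")
```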

Continue reading “Further Adventures In Colorimeter Hacking”

A laboratory benchtop is shown. To the left, there is a distillation column above a collecting flask, with a tube leading from the flask to an adapter. The adapter has a frame holding a glass tube with a teflon stopper at one end, into which a smaller glass tube leads. At the other end of the larger tube is a round flask suspended in an oil bath.

Building A Rotary Evaporator For The Home Lab

The rotary evaporator (rotovap) rarely appears outside of well-provisioned chemistry labs: despite being a fundamentally simple device, its cost generally puts it out of reach of amateur chemists. Nevertheless, it makes removing a solvent from a solution much more convenient, so [Markus Bindhammer] designed and built his own.

Rotary evaporators have two flasks, one containing the solution to be evaporated, and one that collects the condensed solvent vapors. A rotary joint holds the evaporating flask partially immersed in a heated oil bath and connects the flask’s neck to a fixed vapor duct. Solvent vapors leave the first flask, travel through the duct, condense in a condenser, and collect in the second flask. A motor rotates the first flask, which spreads a thin layer of the solution across the flask walls, increasing the surface area and causing the liquid to evaporate more quickly.

Possibly the trickiest part of the apparatus is the rotary joint, which in [Markus]’s implementation is made of a ground-glass joint adapter surrounded by a 3D-printed gear adapter and two ball bearings. A Teflon stopper fits into one end of the adapter, the evaporation flask clips onto the other end, and a glass tube runs through the stopper. The ball bearings allow the adapter to rotate within a frame, the gear enables a motor to drive it, the Teflon stopper serves as a lubricated seal, and the non-rotating glass tube directs the solvent vapors into the condenser.

The flasks, condenser, and adapters were relatively inexpensive commercial glassware, and the frame that held them in place was primarily made of aluminium extrusion, with a few other pieces of miscellaneous hardware. In [Markus]’s test, the rotovap had no trouble evaporating isopropyl alcohol from one flask to the other.

This isn’t [Markus]’s first time turning a complex piece of scientific equipment into an amateur-accessible project, or, for that matter, making simpler equipment. He’s also taken on several major industrial chemistry processes.

Image Recognition On 0.35 Watts

Much of the expense of developing AI models, and much of the recent backlash to said models, stems from the massive amount of power they tend to consume. If you’re willing to sacrifice some ability and accuracy, however, you can get surprisingly decent results from minimal hardware – a tradeoff taken by the Grove Vision AI board, which runs image recognition in near real time on only 0.35 watts.

The heart of the board is a WiseEye processor, which combines two ARM Cortex M55 CPUs with an Ethos U55 NPU for AI acceleration. The board connects to a camera module and a host device, such as another microcontroller or a more powerful computer. When the host sends the signal, the Grove board takes a picture, runs image recognition on it, and sends the results back. A library makes signaling over I2C convenient, but in this example [Jaryd] used a UART.
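As a rough illustration of the host side of that exchange, the sketch below polls the board over a serial link and parses one detection result; the port, baud rate, trigger command, and reply format are all assumptions for illustration, not the actual protocol the Grove firmware or its library uses.

```python
# Hypothetical host-side polling loop for an attached vision board over UART.
# Port, baud rate, command, and reply format are assumptions for illustration.
import json
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # placeholder serial port
BAUD = 115200          # assumed baud rate

def poll_detections(link: serial.Serial) -> dict:
    # Ask the board to run one inference and return the result as JSON.
    link.write(b"INFER\n")                    # placeholder trigger command
    reply = link.readline().decode().strip()  # e.g. '{"label": "person", "score": 0.87}'
    return json.loads(reply)

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        result = poll_detections(link)
        print(f"Detected {result['label']} with confidence {result['score']:.2f}")
```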

To let it run on such low-power hardware, the image recognition model needs some limits; it can run YOLOv8, but it can only recognize one object, runs at a reduced resolution of 192×192, and has to be quantized down to INT8. Within those limits, though, the performance is impressive: 20-30 fps, good accuracy, and, as [Jaryd] points out, less power consumption than a single key on a typical RGB-backlit keyboard. If you want another model, there are quite a few available, though apparently of varying quality. If all else fails, you can always train your own.
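Fitting a model into those constraints generally means shrinking and quantizing it at export time. Below is a sketch of what that could look like with the Ultralytics tooling; whether this exact export path matches what the Grove Vision AI workflow expects is an assumption.

```python
# Sketch of exporting a small YOLOv8 model at reduced resolution with INT8
# quantization. Whether this is the exact path the Grove Vision AI toolchain
# expects is an assumption; it illustrates the general idea.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # smallest of the YOLOv8 family

# Export a TFLite model quantized to INT8 with a 192x192 input resolution.
# INT8 quantization needs a small calibration dataset; Ultralytics falls back
# to a default one unless you pass data=... with your own.
model.export(format="tflite", imgsz=192, int8=True)
```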

Continue reading “Image Recognition On 0.35 Watts”

A camera-based microscope is on a stand, looking down towards a slide which is held on a plastic stage. The stage is held in place by three pairs of brass rods, which run to red plastic cranks mounted to three stepper motors. On the opposite side of each crank from the connecting rod is a semicircular array of magnets.

Designing An Open Source Micro-Manipulator

When you think about highly-precise actuators, stepper motors probably aren’t the first devices that come to mind. However, as [Diffraction Limited]’s sub-micron-capable micro-manipulator shows, they can reach extremely fine precision when paired with external feedback.

The micro-manipulator is made of a mobile platform supported by three pairs of parallel linkages, each linkage actuated by a crank mounted on a stepper motor. Rather than attaching to the structure with the more common flexures, these linkages swivel on ball joints. To minimize the effects of friction, the linkage bars are very long compared to the balls, and the wide range of allowed angles lets the manipulator’s stage move 23 mm in each direction.

To have precision as well as range, the stepper motors needed closed-loop control, which a magnetic rotary encoder provides. The encoder can divide a single rotation of a magnet into 100,000 steps, but this wasn’t enough for [Diffraction Limited]; to increase its resolution, he attached an array of alternating-polarity magnets to the rotor and positioned the magnetic encoder near these. As the rotor turns, the encoder’s local magnetic field rotates rapidly, creating a kind of magnetic gear.
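To see how the magnet ring multiplies resolution, a quick back-of-the-envelope calculation helps; the pole-pair count below is an assumed figure for illustration, not the number [Diffraction Limited] actually used.

```python
# Back-of-the-envelope estimate of the "magnetic gearing" effect.
# The pole-pair count is an assumed value, not the actual build's number.

counts_per_rev = 100_000  # encoder steps per revolution of a single magnet
pole_pairs = 10           # assumed number of alternating-polarity magnet pairs on the ring

# Each pole pair sweeps one full field rotation past the encoder per
# mechanical revolution, so the field "spins" pole_pairs times faster
# than the rotor itself.
effective_counts = counts_per_rev * pole_pairs
print(f"Effective resolution: {effective_counts:,} counts per revolution")
print(f"Angular resolution: {360 / effective_counts * 3600:.3f} arcseconds")
```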

A Raspberry Pi Pico 2 and three motor drivers control this creation; even here, the attention to detail is impressive. The motor drivers couldn’t have internal charge pumps or clocked logic units, since these introduce tiny timing errors and motion jitter. The carrier circuit board is double-sided and uses through-hole components for ease of replication; in a nice touch, the lower silkscreen displays pin numbers.

To test the manipulator’s capabilities, [Diffraction Limited] used it to position a chip die under a microscope. To test its accuracy and repeatability, he used it to trace the path a slicer generated for the first layer of a vastly scaled-down Benchy. When run slowly to reduce thermal drift, it could trace the Benchy within a 20-micrometer square, with a resolution of about 50 nanometers.

He’s already used the micro-manipulator to couple an optical fiber with a laser, but [Diffraction Limited] has some other uses in mind, including maskless lithography (perhaps putting the stepper in “wafer stepper”), electrochemical 3D printing, focus stacking, and micromachining. For another promising take on small-scale manufacturing, check out the RepRapMicron.

Continue reading “Designing An Open Source Micro-Manipulator”

In the center of the picture is a colored drawing of a man wearing a kimono, climbing out of a window. To the left and right the sides of two other pictures are just visible.

The Challenges Of Digitizing Paper Films

In the 1930s, as an alternative to celluloid, some Japanese companies printed films on paper (kami firumu), often in color and with synchronized 78 rpm record soundtracks. Unfortunately, between the small number produced, varying paper quality, and the destruction of World War II, few of these still survive. To keep more of these from being lost forever, a team at Bucknell University has been working on a digitization project, overcoming several technical challenges in the process.

The biggest challenge was the varying physical layout of the film. These films were printed in short strips, then glued together by hand, creating minor irregularities every few feet; the width of the film varied enough to throw off most film scanners; even the indexing holes were in inconsistent places, sometimes at the top or bottom of the frame, and above or below the frame border. The team’s solution was the Kyōrinrin scanner, named for a Japanese guardian spirit of lost papers. It uses two spools to run the lightly-tensioned film in front of a Blackmagic cinema camera, taking a video of the continuously-moving film. To avoid damaging the film, the scanner contacts it in as few places as possible.

After taking the video, the team used a program they had written to recognize and extract still images of the individual frames, then aligned the frames and combined them into a watchable film. The team has presented the digitized films at a number of locations, but if you’d like to see a quick sample, several of them are available on YouTube (one of which is embedded below).
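The frame-extraction step lends itself to a fairly conventional computer-vision pipeline. The sketch below shows one plausible approach using OpenCV template matching to locate each frame in the scan video; the team’s actual program may work quite differently, and the frame dimensions and threshold are assumptions.

```python
# Plausible sketch of pulling individual frames out of a continuous scan
# video by template matching; the team's actual tool may differ substantially.
import cv2

FRAME_W, FRAME_H = 640, 480  # assumed size of one film frame in the scan, pixels

def extract_frames(video_path: str, template_path: str, threshold: float = 0.8):
    # The template is assumed to be a grayscale image of a frame border.
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, image = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # Find the best match for the frame border in this video frame.
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (x, y) = cv2.minMaxLoc(result)
        if score >= threshold:
            frames.append(image[y:y + FRAME_H, x:x + FRAME_W])
    capture.release()
    return frames
```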

This piece’s tipster pointed out some similarities to a recent article on another form of paper-based image encoding. If you don’t need to work with paper, we’ve also seen ways to scan film more accurately.

Continue reading “The Challenges Of Digitizing Paper Films”