We’re proud to announce the last round of speakers, as well as the two workshops that we’ll be running at 2025 Hackaday Europe in Berlin on March 15th and 16th — and Friday night the 14th, if you’re already in town.
The last two years that we’ve done Hackaday Europe in Berlin have been awesome, and this year promises to keep up the tradition. We can’t wait to get our hands on the crazy selection of SAO badge add-ons that are going to be in each and every schwag bag.
Tickets for the event itself are going fast, but the workshop tickets that go on sale at 8:00 AM PST sell out even faster. And you’ll need an event ticket to attend a workshop, so get yours now!
Holography is about capturing 3D data from a scene, and being able to reconstruct that scene — preferably in high fidelity. Holography is not a new idea, but engaging in it is not exactly a point-and-shoot affair. One needs coherent light for a start, and it generally only gets touchier from there. But now researchers describe a new kind of holographic camera that can capture a scene better and faster than ever. How much better? The camera goes from scene capture to reconstructed output in under 30 milliseconds, and does it using plain old incoherent light.
The camera and liquid lens are tiny. Together with the computation back end, they can make a holographic capture of a scene in under 30 milliseconds.
The new camera is a two-part affair: acquisition and calculation. Acquisition consists of a camera with a custom electrically-driven liquid lens that captures a focal stack of a scene within 15 ms. The back end is a deep-learning neural network (FS-Net) which accepts the camera data and computes a high-fidelity RGB hologram of the scene in about 13 ms. How good are the results? They beat other methods, and reconstruction of the scene from the data looks really, really good.
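Just to get a feel for the shape of such a two-stage pipeline, here’s a rough PyTorch-flavored sketch. The capture step and the little convolutional network below are placeholders we made up for illustration; they are not the actual liquid-lens driver or the real FS-Net architecture.

```python
import torch
import torch.nn as nn

# Stand-in for the acquisition step: the real camera sweeps an electrically
# driven liquid lens through several focus settings in ~15 ms. Here we just
# fabricate a stack of RGB frames, one per focal plane.
def capture_focal_stack(num_planes=8, height=512, width=512):
    return torch.rand(1, num_planes * 3, height, width)

# Toy stand-in for FS-Net: a small CNN that maps a focal stack to an
# amplitude/phase pair for each of the R, G, and B channels.
class HologramNet(nn.Module):
    def __init__(self, num_planes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_planes * 3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 6, 3, padding=1),
        )

    def forward(self, focal_stack):
        return self.net(focal_stack)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = HologramNet().to(device).eval()

with torch.no_grad():
    stack = capture_focal_stack().to(device)   # acquisition (~15 ms in the real system)
    hologram = model(stack)                    # calculation (~13 ms in the real system)

print(hologram.shape)  # (1, 6, 512, 512): amplitude + phase for R, G, B
```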
One might wonder what makes this different from, say, a 3D scene captured by a stereoscopic camera or an RGB depth camera (like the now-discontinued Intel RealSense). Those methods capture 2D imagery from a single perspective, combined with depth data, to give an understanding of a scene’s physical layout.
Holography, by contrast, captures a scene’s wavefront information, which is to say not just where light is coming from, but how it bends and interferes. This information can be used to optically reconstruct a scene in a way data from other sources cannot, allowing one, for example, to shift perspective and focus.
Being able to capture holographic data in such a way significantly lowers the bar for development and experimentation in holography — something that’s traditionally been tricky to pull off for the home gamer.
Making a multi-band amateur radio transceiver has always been a somewhat challenging project, and making one that also supports multiple modes would for many years have been a task of almost impossible complexity, best left to expensive commercial products. [Bob W7PUA] has tackled both in the form of a portable 10-band multi-mode unit, and we can honestly say he’s done a very good job indeed.
As you might expect in 2025, it’s a software-defined radio (SDR), but to show just how powerful the silicon available today is, it’s all implemented on a microcontroller. A Teensy 4 with an audio codec board does all the signal-processing heavy lifting, while an RF board takes care of the I/Q mixing and the analogue stuff.
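For the curious, the core of that signal-processing heavy lifting is the quadrature (I/Q) mixing itself, which boils down to multiplying the sampled signal by a cosine and sine and low-pass filtering the result. This little NumPy sketch of the math is ours, not [Bob W7PUA]’s firmware, and the sample rate and frequencies are made up:

```python
import numpy as np

fs = 96_000        # sample rate, Hz (assumed)
f_lo = 12_000      # software local oscillator, Hz
f_sig = 12_700     # a test tone 700 Hz above the LO

t = np.arange(0, 0.1, 1 / fs)
rf = np.cos(2 * np.pi * f_sig * t)            # incoming sampled signal

i_mix = rf * np.cos(2 * np.pi * f_lo * t)     # in-phase mixer product
q_mix = rf * -np.sin(2 * np.pi * f_lo * t)    # quadrature mixer product

# A crude moving-average low-pass removes the sum-frequency products,
# leaving a complex baseband signal whose frequency (and its sign, i.e.
# which sideband it came from) is preserved.
kernel = np.ones(64) / 64
baseband = np.convolve(i_mix, kernel, "same") + 1j * np.convolve(q_mix, kernel, "same")

# The phase should advance at roughly 2*pi*700/fs radians per sample.
print(np.diff(np.unwrap(np.angle(baseband[1000:1010]))))
```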
Band switching is handled using a technique from the past: interchangeable plug-in coil and filter units, which do an effective job. The result is a modestly-powered but extremely portable rig that doesn’t look to have broken the bank, and since the write-up goes into detail on the software side, we hope it might inform other SDR projects too. We might have gone for old-school embossed Dymo labels on that brushed aluminium case just for retro appeal, but we can’t fault it.
It’s not the first time we’ve looked at a small multi-band SDR here, but we think this one ups the game somewhat.
What does one do when frustrated at the lack of affordable, open-source portable star trackers? If you’re [OG-star-tech], you design your own and give it modular features that rival commercial offerings while you’re at it.
What’s a star tracker? In spaceflight, it’s a device for determining orientation from the visible stars, but when it comes to astrophotography the term refers to a sort of hardware-assisted camera mount that helps one capture stable long-exposure images. This is done by moving the camera in such a way as to cancel out the effects of the Earth’s rotation, leaving long-exposure photographs without the stars smearing themselves across the image.
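The motion involved is gentle, which is part of what makes this such an approachable build. Here’s a quick back-of-the-envelope sketch; the stepper resolution and gear ratio are made-up example numbers, not [OG-star-tech]’s actual drivetrain:

```python
# How fast does a star tracker have to turn to cancel the Earth's rotation?
SIDEREAL_DAY_S = 86_164.1          # one rotation of the Earth relative to the stars

steps_per_motor_rev = 200 * 16     # 1.8-degree stepper with 16x microstepping (assumed)
gear_ratio = 100                   # worm-gear reduction (assumed)

steps_per_tracker_rev = steps_per_motor_rev * gear_ratio
rate_steps_per_s = steps_per_tracker_rev / SIDEREAL_DAY_S

print(f"{rate_steps_per_s:.2f} steps/s, one step every "
      f"{1000 / rate_steps_per_s:.0f} ms")   # ~3.7 steps/s with these numbers
```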
Interested? Learn more about the design by casting an eye over the bill of materials in the GitHub repository, browsing the 3D-printable parts, and maybe checking out the assembly guide. If you like what you see, [OG-star-tech] says you should be able to build your own very affordably if you don’t mind 3D printing the parts in ASA or ABS. Prefer to buy a kit or an assembled unit? [OG-star-tech] offers those for sale.
Frustration with commercial offerings (or lack thereof) is a powerful motive to design something or contribute to an existing project, and if it leads to more people enjoying taking photos of the night sky and all the wonderful things in it, so much the better.
Every few years or so, a development in computing results in a sea change and a need for specialized workers to take advantage of the new technology. Whether that’s COBOL in the 60s and 70s, HTML in the 90s, or SQL in the past decade or so, there’s always something new to learn in the computing world. The introduction of graphics processing units (GPUs) for general-purpose computing is perhaps the most important recent development, and if you want to pick up some new Python skills to take advantage of it, take a look at this introduction to CUDA, which allows developers to use Nvidia GPUs for general-purpose computing.
Of course, CUDA is a proprietary platform and requires one of Nvidia’s supported graphics cards to run, but assuming that barrier to entry is met, it’s not too much more effort to use it for non-graphics tasks. The guide takes a closer look at the open-source library PyTorch, which allows a Python developer to quickly get up to speed with the features of CUDA that make it so appealing to researchers and developers in artificial intelligence, machine learning, big data, and other frontiers of computer science. The guide describes how threads are created, how they travel within the GPU and work together with other threads, how memory can be managed on both the CPU and GPU, how CUDA kernels are created, and how everything else involved is handled, largely through the lens of Python.
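If you just want a taste before diving into the guide, here’s a minimal PyTorch snippet showing the CPU-to-GPU memory shuffle that everything else builds on. It assumes a CUDA-capable card and a CUDA build of PyTorch, and falls back to the CPU otherwise:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(4096, 4096)     # allocated in host (CPU) memory
b = torch.randn(4096, 4096)

a_gpu = a.to(device)            # copied across the bus into GPU memory
b_gpu = b.to(device)

c_gpu = a_gpu @ b_gpu           # the matrix multiply runs as CUDA kernels
if device.type == "cuda":
    torch.cuda.synchronize()    # kernel launches are asynchronous; wait for them

c = c_gpu.cpu()                 # bring the result back to host memory
print(c.shape, c.device)
```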
Getting started with something like this is almost a requirement to stay relevant in the fast-paced realm of computer science, as machine learning has taken center stage in almost everything related to computers these days. It’s worth noting that, strictly speaking, an Nvidia GPU is not required for GPU programming like this; AMD has a GPU computing platform called ROCm, but despite being open source it still trails Nvidia in adoption and arguably in performance as well. Some other learning tools for GPU programming we’ve seen in the past include this puzzle-based tool which illustrates some of the specific problems GPUs excel at.
[Martin] of [Wintergatan] is on a quest to create the ultimate human-powered, modern marble music machine. His fearless mechanical exploration and engineering work, combined with considerable musical talent, have been an ongoing delight as he continually refines his designs. We’d like to highlight this older video in which he demonstrates how to dynamically regulate the speed of a human-cranked music machine by taking inspiration from gramophones: he uses a flyball governor (or centrifugal governor).
The faster the shaft turns, the harder the disk brake is applied.
These devices are a type of mechanical feedback system that was invented back in the 17th century but really took off once applied to steam engines. Here’s how they work: weights are connected to a shaft with a hinged assembly. The faster the shaft spins, the more the weights move outward due to centrifugal force. This movement is used to trigger some regulatory action, creating a feedback loop. In a steam engine, the regulator adjusts a valve which keeps the engine within a certain speed range. In a gramophone it works a wee bit differently, and this is the system [Wintergatan] uses.
To help keep the speed of his music machine within a narrow range, the flyball governor acts on a large disk brake instead of turning a valve. The faster the shaft spins, the harder the brake is applied. Watch it in action in the video (embedded below), which shows [Wintergatan]’s prototype and demonstrates just how effective it is.
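If you’d like a feel for why this settles at a steady speed rather than running away or stalling, here’s a toy simulation of the feedback loop; every constant in it is invented for illustration and has nothing to do with [Wintergatan]’s actual hardware:

```python
# Toy model: constant cranking torque versus a brake whose torque grows
# roughly with the square of shaft speed (the weights fly out further,
# the brake is pressed harder).
dt = 0.01            # time step, s
inertia = 0.5        # rotational inertia of the crank and flywheel, kg*m^2
drive_torque = 2.0   # steady torque from the person cranking, N*m
brake_gain = 0.02    # braking torque per (rad/s)^2 of shaft speed

omega = 0.0          # shaft speed, rad/s
for _ in range(2000):                      # simulate 20 seconds
    brake_torque = brake_gain * omega**2
    omega += dt * (drive_torque - brake_torque) / inertia

# The speed settles where drive and brake torque balance:
# omega = sqrt(drive_torque / brake_gain) = 10 rad/s with these numbers.
print(f"settled at {omega:.2f} rad/s")
```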
[Wintergatan]’s marble machine started out great and has only gotten better over the years, with [Martin] tirelessly documenting his improvements on everything. After all, when every note is the product of multiple physical processes that must synchronize flawlessly, it makes sense to spend time doing things like designing the best method of dropping balls.
One final note: if you are the type of person to find yourself interested and engaged by these sorts of systems and their relation to obtaining better results and tighter tolerances, we have a great book recommendation for you.
Although a somewhat common feature on cars these days, tire pressure sensors (TPS) are also useful on bicycles. The SKS Airspy range of TPS products is one such example, enabling remote monitoring of air pressure either through a dedicated smartphone app (SKS MYBIKE) or on a Garmin device. Of course, proprietary solutions like this require reverse-engineering to liberate the hardware from nasty proprietary firmware limitations, which is exactly what [bitmeal] did with a custom firmware project.
Rather than the proprietary, closed communication protocol, the goal was to use the open ANT+ protocol, specifically its (non-certified) TPS profile, which is supported by a range of cycling computers. Before this could happen, the Airspy TPS hardware first had to be reverse-engineered so that new firmware could be developed and flashed. These devices use the nRF52832 IC, meaning that development tools are freely available. Flashing the custom firmware requires gaining access to the SWD interface, which will very likely void the warranty on a $160 to $240 device.
The SWD programmer is then attached to the 1.27 mm-spaced SWD holes per the instructions on the GitHub page. After flashing the provided .hex file, you can connect to the TPS as an ANT+ device, and instructions are also provided for developing your own firmware.
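If you’d rather drive the SWD probe from Python than from a vendor tool, pyocd is one option (our suggestion, not necessarily what [bitmeal] uses). Here’s a minimal sketch, assuming a probe pyocd supports, its built-in nrf52 target, and a firmware file named airspy_tps.hex:

```python
# Hypothetical flashing example using pyocd; the probe, target name, and
# file name are all assumptions, not part of [bitmeal]'s instructions.
from pyocd.core.helpers import ConnectHelper
from pyocd.flash.file_programmer import FileProgrammer

with ConnectHelper.session_with_chosen_probe(target_override="nrf52") as session:
    target = session.board.target
    FileProgrammer(session).program("airspy_tps.hex")  # format inferred from extension
    target.reset()                                     # boot into the new firmware
```

Note that if the chip ships with its access port protection enabled, a full chip erase may be needed before SWD access works at all; follow the project’s own instructions if in doubt.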