It may sound like a provocative statement to make, but technology has been on a downward trend for a long time. That’s not a moral or ethical proclamation, but rather an observation about the scale of technology. Where once the height of technology was something like a water-powered mill, whose smallest parts were the size of a human hand and tolerances were measured in inches, today we routinely build machines by etching silicon chips with features measured in nanometers, look inside the smallest of cells and manipulate their innards, and use microscopes that can visualize materials at the atomic level.
The world has gotten much, much smaller lately, and operating on that scale requires thinking about motion in a different way than we’ve been used to. Being able to move things at nanometer resolutions isn’t easy, but it’s not impossible, and it can even be accomplished on a DIYer’s budget — if you know what you’re doing.
To help us sort through the realities of nano-scale positioning, En-Te Hwu, a professor at the Technical University of Denmark who works on micromachines for intelligent drug delivery, has spun up some really interesting low-cost nanopositioning systems. Using old DVD players or off-the-shelf linear slides, he’s able to achieve nanoscale movement and sensing for a variety of purposes. He’ll stop by the Hack Chat to discuss how we can build nanopositioning and sensing into our projects, and to start exploring the world we can’t even see.
When we think of robotics, the first thing that comes to mind for many of us is some sort of industrial arm that’s bolted to the floor, or perhaps a semi-autonomous rover trudging its way across the dusty Martian landscape. While these two environments are about as different as can be, the basic “rules” are pretty much the same. Being on firm ground gives the robot a clear understanding of its position and orientation, which greatly simplifies tasks such as avoiding collisions or interacting with nearby objects.
But what happens when that reference point goes away? How does a robot navigate when it’s flying through open space or hovering in mid-air? That’s just one of the problems that fascinates Nick Rehm, who stopped by to host this week’s Aerial Robotics Hack Chat to talk about his passion for flying robots. He’s currently an aerospace engineer at Johns Hopkins Applied Physics Laboratory, where he works on the unique challenges faced by autonomous flying vehicles such as the detection and avoidance of mid-air collisions, as well as the development of vertical take-off and landing (VTOL) systems. But before he had his Master’s in Aerospace Engineering and Rotorcraft, he got started the same way many of us did, by playing around with DIY projects.
In fact, regular Hackaday readers will likely recall seeing some of his impressive builds. His autonomous ekranoplan, designed to follow a target using computer vision, graced the front page in April. Back in 2020, we took a look at his recreation of SpaceX’s Starship prototype, which used a realistic arrangement of control surfaces and vectored thrust to perform the spacecraft’s signature “Belly Flop” maneuver — albeit with RC motors and propellers instead of rocket engines. But even before that, Nick recalls asking his mother for permission to pull apart a Wii controller so he could use its inertial measurement unit (IMU) in a wooden-framed tricopter he was working on.
Discussing some of these hobby builds leads the Chat towards Nick’s dRehmFlight project, a GPLv3 licensed flight control package that can run on relatively low-cost hardware, namely a Teensy 4.0 microcontroller paired with the GY-521 MPU6050 IMU. The project is designed to let hobbyists easily experiment with VTOL craft, specifically those that transition between vertical and horizontal flight profiles, and has powered the bulk of Nick’s own flying craft.
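For a sense of what that hardware pairing involves at the lowest level, here’s a minimal Arduino-style sketch (not dRehmFlight’s code, just an illustration built on the sensor’s standard register map) that wakes an MPU6050 over I2C and streams its raw accelerometer and gyro readings; wiring, scaling, and calibration are left out.

```cpp
#include <Wire.h>

const int MPU_ADDR = 0x68;        // default MPU6050 I2C address (AD0 pin low)

void setup() {
  Serial.begin(115200);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);               // PWR_MGMT_1 register
  Wire.write(0x00);               // clear the sleep bit to wake the sensor
  Wire.endTransmission(true);
}

void loop() {
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);               // ACCEL_XOUT_H: start of the accel/temp/gyro block
  Wire.endTransmission(false);    // repeated start so the read follows immediately
  Wire.requestFrom(MPU_ADDR, 14, true);

  auto read16 = []() -> int16_t { // combine two register bytes, high byte first
    int16_t hi = Wire.read();
    int16_t lo = Wire.read();
    return (hi << 8) | lo;
  };

  int16_t ax = read16();
  int16_t ay = read16();
  int16_t az = read16();
  read16();                       // discard the temperature reading
  int16_t gx = read16();
  int16_t gy = read16();
  int16_t gz = read16();

  Serial.print(ax); Serial.print(' ');
  Serial.print(ay); Serial.print(' ');
  Serial.print(az); Serial.print("   ");
  Serial.print(gx); Serial.print(' ');
  Serial.print(gy); Serial.print(' ');
  Serial.println(gz);
  delay(10);
}
```

Real flight code layers calibration, filtering, and a control loop on top of raw reads like these, which is exactly the grunt work a package like dRehmFlight takes care of for you.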
Moving on to more technical questions, Nick says one of the most difficult aspects of designing an autonomous flying vehicle is getting your constraints nailed down. What he means by that is having a clear goal of what the craft needs to do, and critically, how long it needs to do it. How far does the craft need to be able to fly? How fast? Does it need to loiter at the target location, and if so, for how long? The answers to these questions will largely dictate the form of the final vehicle, and are key to determining whether it’s worth implementing the complexity of transitioning from VTOL to fixed-wing horizontal flight.
But according to Nick, the biggest challenge in aerial robotics is onboard state estimation. That is, the ability for the craft to know its position and orientation relative to the ground. While high-performance computers have gotten lighter and sensors have improved, he says there’s still no substitute for having a ground-based tracking system. He mentions that those fancy demonstrations you’ve seen with drones flying in formation and working collaboratively towards a task will almost certainly have an array of motion capture cameras tucked off to the side. This makes for an impressive show, but greatly limits the practical application of these drone swarms.
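Full position estimation without outside references is the hard part, but the attitude half of the problem gives a flavor of how onboard estimators work. A common hobbyist-grade approach is a complementary filter: integrate the gyro for smooth short-term tracking, and let the accelerometer’s gravity vector slowly pull the estimate back to cancel the drift. The sketch below is purely illustrative, with our own names and gain, and isn’t taken from any of Nick’s code:

```cpp
#include <cmath>

// Complementary filter: blend integrated gyro rates (smooth, but they drift)
// with tilt angles derived from the accelerometer's gravity vector (noisy, but drift-free).
struct AttitudeFilter {
  float roll_deg  = 0.0f;
  float pitch_deg = 0.0f;
  float alpha     = 0.98f;  // how much to trust the gyro on each update

  void update(float gx_dps, float gy_dps,           // gyro rates, deg/s
              float ax_g, float ay_g, float az_g,   // accelerometer readings, in g
              float dt_s) {                         // time since the last update, s
    const float rad2deg = 180.0f / 3.14159265f;

    // Tilt implied by the gravity vector alone
    float roll_acc  = atan2f(ay_g, az_g) * rad2deg;
    float pitch_acc = atan2f(-ax_g, sqrtf(ay_g * ay_g + az_g * az_g)) * rad2deg;

    // Integrate the gyro, then nudge the result toward the accelerometer estimate
    roll_deg  = alpha * (roll_deg  + gx_dps * dt_s) + (1.0f - alpha) * roll_acc;
    pitch_deg = alpha * (pitch_deg + gy_dps * dt_s) + (1.0f - alpha) * pitch_acc;
  }
};
```

Yaw and absolute position are where things get ugly: gravity gives you no heading reference at all, which is why real systems reach for magnetometers, GPS, cameras, or, as Nick notes, a room full of motion capture gear.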
So what does the future of aerial robotics look like? Nick says open source projects like ArduPilot and PX4 are still great choices for hobbyists, but sees promise in newer platforms which pair the traditional autopilot with more onboard computing power, such as Auterion’s Skynode. More powerful flight controllers can enable techniques such as simultaneous localization and mapping (SLAM), which uses 3D scans of the environment to help the robot orient itself. He’s also very interested in technologies that enable autonomous flight in GPS-denied environments, which is critical for robotic craft that need to operate indoors or in situations where satellite navigation is unavailable or unreliable. In light of the incredible success of NASA’s Ingenuity helicopter, we imagine these techniques will also play an invaluable role in the future airborne exploration of Mars.
We want to thank Nick for hosting this week’s Aerial Robotics Hack Chat, which turned out to be one of the fastest hours in recent memory. His experience as both an avid hobbyist and a professional in the field provided exactly the sort of insight the Hackaday community looks for, and his gracious offer to keep in touch with several of those who attended the Chat to further discuss their projects speaks to how passionate he is about this topic. We expect to see great things from Nick going forward, and would love to have him join us again in the future to see what he’s been up to.
The Hack Chat is a weekly online chat session hosted by leading experts from all corners of the hardware hacking universe. It’s a great way for hackers to connect in a fun and informal way, but if you can’t make it live, these overview posts as well as the transcripts posted to Hackaday.io make sure you don’t miss out.
When it comes to robots, especially ones that need to achieve some degree of autonomy, the more constrained the environment they work in, the easier it is for them to deal with the world. An industrial arm tethered next to a production line, for example, only has to worry about positioning its tool within its work envelope. The problems mount up for something like an autonomous car, though, which needs to deal with the world in two — or perhaps two and a half — dimensions.
But what about adding a third dimension? That’s the realm that aerial robots have to live and work in, and it’s where the problems get really interesting. Not only are there hardly any constraints on movement, but you’ve also got to deal with the problems of aerodynamic forces, navigation in space, and control systems that need to respond to the slightest of perturbations without overcompensating.
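That last point, reacting firmly to small disturbances without overshooting into oscillation, is classic feedback-control territory, and on most small flying machines it comes down to something like a PID loop running on each axis. A bare-bones sketch of the idea, with purely illustrative names and gains:

```cpp
// Minimal PID controller; a flight controller typically runs one of these per axis.
struct Pid {
  float kp, ki, kd;         // tuning gains (the values below are illustrative only)
  float integral   = 0.0f;
  float prev_error = 0.0f;

  float update(float setpoint, float measured, float dt) {
    float error = setpoint - measured;
    integral += error * dt;                        // works off steady-state error
    float derivative = (error - prev_error) / dt;  // damps the response, fights overshoot
    prev_error = error;
    return kp * error + ki * integral + kd * derivative;
  }
};

// Example use, one axis:
//   Pid roll_pid{1.2f, 0.05f, 0.02f};
//   float correction = roll_pid.update(desired_roll_deg, estimated_roll_deg, 0.002f);
```

Tuning those three gains for a particular airframe is where much of the real work lies.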
The atmosphere is a tough place to make a living, and dealing with the problems of aerial robotics has kept Nick Rehm occupied for many years as a hobbyist, and more recently as an aerospace engineer at Johns Hopkins Applied Physics Laboratory. Nick has spent his time away from the office solving the problems of autonomous flight, including detection and avoidance of mid-air collisions, development of vertical take-off and landing (VTOL) and fixed-wing aircraft, and even ground-effect aircraft. He’ll drop by the Hack Chat to discuss the problems of aerial robots and the challenges of unconventional aviation, and help us figure out how to deal with the third dimension.
Despite the fact that we’ve been doing them for years now, it’s still hard to predict how a Hack Chat will go. There’s no question it will be an hour of interesting discussion; that much is a given. But the dynamics of the conversation can range from a rigid Q&A, which isn’t exactly unexpected when you’ve only got a limited amount of time with a subject matter expert, to a freewheeling hangout with a group of people who all happen to be interested in the same thing.
This week’s Vintage Pro Audio Hack Chat with Frank Olson definitely took the latter approach. The allotted hour flew by in a blink, with so many anecdotes and ideas flying back and forth that at times it was tricky to follow. But no worries, with the Chat transcript to pore over, we can make sure none of that accrued first-hand knowledge goes to waste.
So what did we learn during this Chat? Well, it probably won’t come as much of a surprise to find that those who have an opinion on audio gear tend to have a strong opinion on it. Folks were painting with some fairly broad brushes, with particular manufacturers and even whole fields of technology receiving a bit of good-natured ribbing. If your favorite brand or piece of gear gets a specific shout-out, try not to take it too personally — at the end of the day, most in the Chat seemed to agree that sound is so subjective that the right choice is more often than not whatever sounds best to you at the moment.
Which leads directly into Frank’s work with custom microphones. As a musician, he knows the sound he’s looking for better than anyone, so rather than spend the money on big-name gear, he prefers to build it himself. But the real hook here is their unique construction, with designs that reimagine concepts from mid-century commercial equipment using unexpected materials such as thin pieces of walnut cut with a vinyl cutter. Frank explains that the structure of the microphone isn’t as critical these days thanks to the availability of powerful neodymium magnets, which gives the builder more freedom in terms of materials and tools. He says the goal is to inspire others to try building gear from what’s available to them rather than assuming it won’t work because it’s unconventional.
We appreciate Frank, and everyone else, stopping by this week for such a lively and friendly discussion. Let’s be honest, a Chat specifically for folks who want to discuss concepts as personal and nebulous as how they perceive the warmth of sound could have gotten a little heated. But the fact that everyone was able to express their opinions or ask for advice constructively is a real credit to the community.
The Hack Chat is a weekly online chat session hosted by leading experts from all corners of the hardware hacking universe. It’s a great way for hackers to connect in a fun and informal way, but if you can’t make it live, these overview posts as well as the transcripts posted to Hackaday.io make sure you don’t miss out.
There was a time, and not all that long ago on the cosmic scale, that if you wanted to hear music, you either needed to make it yourself or hire someone to do it for you. For most of history, music was very much a here and now thing, and when the song was over, that was it.
Thankfully, those days are long gone, and for better or worse, we have instant access to whatever music we’re in the mood for. The Spotify client in your pocket is a far cry from the iPod of a few years back, or the Walkman of the 80s, or even a mid-century transistor radio. But no matter how you listen to your music, it all starts with getting the live music recorded, and that’s where we’ll be going with this Hack Chat.
Hooking up the preamps, mixers, mics, and recorders that make modern music possible is what Frank Olson is all about. You’ll probably recognize Frank’s name from his unique niche as a maker of wooden microphones, but dig a little deeper and you’ll find he’s got a lot of experience with vintage pro audio gear. As both a musician and an audio engineer, Frank brings an enthusiast’s passion for recording gear to the Hack Chat, and we’re looking forward to picking his brain on the unique ways he’s found to turn sounds into music and to get it all down on tape.
We start this week with news from Mars, because, let’s face it, the news from this planet isn’t all that much fun lately. But a couple of milestones were reached on the Red Planet, the first being the arrival of Perseverance at the ancient river delta it was sent there to explore. The rover certainly took the scenic route to get there, having covered 10.6 km over the last 424 sols to move to a position only about 3.5 km straight-line distance from where it landed. Granted, a lot of that extra driving was in support of the unexpectedly successful Ingenuity demonstration, plus taking time for a lot of pit stops along the way at interesting features. But the rover is now in place to examine sedimentary rocks most likely to harbor the fossil remains of ancient aquatic life — as opposed to the mainly igneous rocks it has studied along the crater floor so far. We’re looking forward to seeing what happens.
It’s a fair bet that anyone regularly reading Hackaday has a voltmeter within arm’s reach, and there’s a good chance an oscilloscope isn’t far behind. But beyond that, things get a little murky. We’re sure some of you have access to a proper lab full of high-end test gear, even if only during business hours, but most of us have to make do with the essentials due to cost and space constraints.
The ideal solution is a magical little box that could be whatever piece of instrumentation you needed at the time: some days it’s an oscilloscope, while others it’s a spectrum analyzer, or perhaps even a generic data logger. To simplify things, the device wouldn’t have a physical display or controls of its own; instead, you’d plug it into your computer and control it through software. This would not only make the unit smaller and cheaper, but also allow custom user interfaces to be created that precisely match what the user is trying to accomplish.
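In that daydream, “control it through software” needn’t look any more exotic than a script firing text commands at the box, much as bench gear with a SCPI interface already works over a network. The example below is generic and hypothetical (a made-up instrument address, and the raw-socket port 5025 many instruments use by convention), not any particular vendor’s API:

```cpp
// Generic SCPI-over-TCP query using POSIX sockets; not any vendor's real API.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) { perror("socket"); return 1; }

  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_port   = htons(5025);                       // common SCPI raw-socket port
  inet_pton(AF_INET, "192.168.1.50", &addr.sin_addr);  // hypothetical instrument address
  if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
    perror("connect");
    return 1;
  }

  // Ask the instrument to identify itself, then print whatever comes back.
  const char* cmd = "*IDN?\n";
  write(fd, cmd, strlen(cmd));

  char buf[256] = {0};
  ssize_t n = read(fd, buf, sizeof(buf) - 1);
  if (n > 0) printf("Instrument says: %s", buf);

  close(fd);
  return 0;
}
```

Wrap queries like that in a friendly GUI and you can imagine how the rest of such an instrument’s “front panel” would take shape.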
Wishful thinking? Not quite. As guest host Ben Nizette explained during the Software Defined Instrumentation Hack Chat, the dream of replacing a rack of test equipment with a cheap pocket-sized unit is much closer to reality than you may realize. While software defined instruments might not be suitable for all applications, the argument could be made that any capability the average student or hobbyist is likely to need or desire could be met by hardware that’s already on the market.
Ben is the Product Manager at Liquid Instruments, the company that produces the Moku line of multi-instruments. Specifically, he’s responsible for the Moku:Go, an entry-level device geared toward the education and maker markets. The slim device doesn’t cost much more than a basic digital oscilloscope, but thanks to the magic of software defined instrumentation (SDI), it can stand in for eleven instruments — all more than performant enough for their target users.
So what’s the catch? As you might expect, that’s the first thing folks in the Chat wanted to know. According to Ben, the biggest drawback is that all of your instrumentation has to share the same analog front-end. To remain affordable, that means everything the unit can do is bound by the same fundamental “Speed Limit” — which on the Moku:Go is 30 MHz. Even on the company’s higher-end professional models, the maximum bandwidth is measured in hundreds of megahertz.
Additionally, SDI has traditionally been limited by the speed of the computer it’s attached to, but the Moku hardware manages to sidestep this particular gotcha by running the software side of things on an internal FPGA. The downside is that some of the device’s functions, such as the data logger, can’t actually live stream data to the connected computer. Users will have to wait until the measurements are complete before they pull the results off, though Ben says there’s enough internal memory to store months’ worth of high-resolution data.
Of course, as soon as this community hears there’s an FPGA on board, they want to know if they can get their hands on it. To that end, Ben says the Moku:Go will be supported by their “Cloud Compile” service in June. Already available for the Moku:Pro, the browser-based application allows you to upload your HDL to the Liquid Instruments servers so it can be built and optimized. This gives power users complete access to the Moku hardware so they can build and deploy their own custom features and tools that precisely match their needs without a separate development kit. Understanding that obsolescence is always a problem with a cloud solution, Ben says they’re also working with Xilinx to allow users to do builds on their own computers while still implementing the proprietary “secret sauce” that makes it a Moku.
It’s hard not to get excited about the promise of software defined instrumentation, especially with companies like Liquid Instruments and Red Pitaya bringing the cost of the hardware down to the point where students and hackers can afford it. We’d like to thank Ben Nizette for taking the time to talk with the community about what he’s been working on, especially given the considerable time difference between the Hackaday Command Center and Liquid’s Australian headquarters. Anyone who’s willing to jump online and chat about FPGAs and phasemeters before the sun comes up is AOK in our book.
The Hack Chat is a weekly online chat session hosted by leading experts from all corners of the hardware hacking universe. It’s a great way for hackers to connect in a fun and informal way, but if you can’t make it live, these overview posts as well as the transcripts posted to Hackaday.io make sure you don’t miss out.