You would think there are only so many ways for a robotic mouse to run a maze, but in the nearly 50-year history of Micromouse events, competitors have repeatedly proven this assumption false. In the video after the break, [Veritasium] takes us on a fascinating journey through the development of Micromouse competition robots.
The goal of Micromouse is simple: Get to the destination square (center) of a maze in the shortest time. Competitors are not allowed to update the programming of their vehicles once the layout is revealed at the start of an event. Over the years, there have been several innovations that might seem obvious now but were groundbreaking at the time.
The most obvious first challenge is finding the maze’s center. The simple wall-following of the first event in 1977 has since developed into variations of the “flood fill” algorithm. Initially, all robots stopped before turning a corner, until someone realized that you could cut corners at 45° and move diagonally if the robot is narrow enough. The shortest path is not always the fastest, since cornering loses a lot of speed, so it’s sometimes possible to improve the time by picking a slightly longer route with fewer corners.
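Curious how that works? Here’s a minimal Python sketch of the flood-fill idea: flood the maze with step counts radiating out from the goal, then always move to a neighbor with a lower count. The 16×16 grid and the wall bookkeeping are purely illustrative, not any competitor’s actual firmware.

```python
from collections import deque

N = 16
GOALS = {(7, 7), (7, 8), (8, 7), (8, 8)}   # the four center cells
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
# open_dirs[y][x] holds the passable directions the mouse has seen so far
open_dirs = [[set() for _ in range(N)] for _ in range(N)]

def flood_fill():
    """Breadth-first flood outward from the goal: each cell gets its step count."""
    dist = [[None] * N for _ in range(N)]
    queue = deque(GOALS)
    for (x, y) in GOALS:
        dist[y][x] = 0
    while queue:
        x, y = queue.popleft()
        for d, (dx, dy) in MOVES.items():
            nx, ny = x + dx, y + dy
            if (d in open_dirs[y][x] and 0 <= nx < N and 0 <= ny < N
                    and dist[ny][nx] is None):
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

def next_cell(x, y, dist):
    """Move to any open neighbor with a lower flood value (None if boxed in)."""
    options = [(dist[y + dy][x + dx], (x + dx, y + dy))
               for d, (dx, dy) in MOVES.items()
               if d in open_dirs[y][x]
               and 0 <= x + dx < N and 0 <= y + dy < N
               and dist[y + dy][x + dx] is not None]
    return min(options)[1] if options else None
```

As the mouse discovers new walls it simply re-runs the flood, so the distance map always reflects its best current knowledge of the maze.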
More speed is only good if you can keep control, so many robots now incorporate fans to suck them down, increasing traction. This has led to speeds as high as 7 meters per second and cornering forces of up to 6 G. Even specks of dust can cause loss of control, so all competitors use tape to clean their wheels before a run. Many winning runs are now under 10 seconds, a feat that requires many design iterations to increase controllable speed and reduce weight.
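Some back-of-the-envelope numbers show why the fans are essential; the friction coefficient below is an assumption for illustration, not a measurement from any particular robot.

```python
# Rough cornering numbers (illustrative assumptions only).
G = 9.81          # gravity, m/s^2
v = 7.0           # top speed quoted above, m/s
lateral_g = 6.0   # cornering force quoted above, in G

# Circular motion: a = v^2 / r, so the radius needed to pull 6 G at 7 m/s is
radius = v**2 / (lateral_g * G)
print(f"turn radius: {radius:.2f} m")   # about 0.83 m

# Without downforce, lateral grip tops out at mu * weight, so a tire with
# mu ~= 1.2 manages roughly 1.2 G. Reaching 6 G means the fan has to add
# several times the robot's weight in suction.
mu = 1.2   # assumed tire friction coefficient
print(f"fan suction needed: {lateral_g / mu - 1:.1f}x robot weight")
```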
All these innovations started as experiments, and the beauty of Micromouse lies in its accessibility. It doesn’t require much of a budget to get started, and the technical barrier to entry is lower than ever. We’ve looked at another Micromouse design before. Even if they aren’t micromice, we can’t get enough of tiny robots.
When we think of robotics, the first thing that usually comes to mind for many of us is some sort of industrial arm that’s bolted to the floor, or perhaps a semi-autonomous rover trudging its way across the dusty Martian landscape. While these two environments are about as different as can be, the basic “rules” are pretty much the same. Being on firm ground gives the robot a clear understanding of its position and orientation, which greatly simplifies tasks such as avoiding collisions or interacting with nearby objects.
But what happens when that reference point goes away? How does a robot navigate when it’s flying through open space or hovering in mid-air? That’s just one of the problems that fascinates Nick Rehm, who stopped by to host this week’s Aerial Robotics Hack Chat to talk about his passion for flying robots. He’s currently an aerospace engineer at Johns Hopkins Applied Physics Laboratory, where he works on the unique challenges faced by autonomous flying vehicles such as the detection and avoidance of mid-air collisions, as well as the development of vertical take-off and landing (VTOL) systems. But before he had his Master’s in Aerospace Engineering and Rotorcraft, he got started the same way many of us did, by playing around with DIY projects.
In fact, regular Hackaday readers will likely recall seeing some of his impressive builds. His autonomous ekranoplan designed to follow a target using computer vision graced the front page in April. Back in 2020, we took a look at his recreation of SpaceX’s Starship prototype, which used a realistic arrangement of control surfaces and vectored thrust to perform the spacecraft’s signature “Belly Flop” maneuver — albeit with RC motors and propellers instead of rocket engines. But even before that, Nick recalls asking his mother for permission to pull apart a Wii controller so he could use its inertial measurement unit (IMU) in a wooden-framed tricopter he was working on.
Discussing some of these hobby builds leads the Chat towards Nick’s dRehmFlight project, a GPLv3 licensed flight control package that can run on relatively low-cost hardware, namely a Teensy 4.0 microcontroller paired with the GY-521 MPU6050 IMU. The project is designed to let hobbyists easily experiment with VTOL craft, specifically those that transition between vertical and horizontal flight profiles, and has powered the bulk of Nick’s own flying craft.
Moving on to more technical questions, Nick says one of the most difficult aspects of designing an autonomous flying vehicle is getting your constraints nailed down. What he means by that is having a clear goal of what the craft needs to do and, critically, how long it needs to do it. How far does the craft need to be able to fly? How fast? Does it need to loiter at the target location, and if so, for how long? The answers to these questions will largely dictate the form of the final vehicle, and are key to determining whether it’s worth implementing the complexity of transitioning from VTOL to fixed-wing horizontal flight.
But according to Nick, the biggest challenge in aerial robotics is onboard state estimation. That is, the ability for the craft to know its position and orientation relative to the ground. While high-performance computers have gotten lighter and sensors have improved, he says there’s still no substitute for having a ground-based tracking system. He mentions that those fancy demonstrations you’ve seen with drones flying in formation and working collaboratively towards a task will almost certainly have an array of motion capture cameras tucked off to the side. This makes for an impressive show, but greatly limits the practical application of these drone swarms.
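To get a feel for the orientation half of the state estimation problem, here’s a minimal single-axis complementary filter of the kind hobbyist flight controllers commonly build on; it’s an illustrative sketch, not code from dRehmFlight or any other specific project.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One update of a single-axis complementary filter (illustrative only).

    The gyro integrates smoothly over short intervals but drifts over time;
    the accelerometer gives a drift-free but noisy gravity reference.
    Blending the two yields a usable pitch estimate from a cheap IMU.
    """
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)   # angle of gravity vector
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# Example: start level, gyro reports 0.1 rad/s of pitch-up for one 10 ms tick
pitch = complementary_filter(0.0, 0.1, 0.0, 1.0, 0.01)
```

Orientation is the comparatively easy part, though; position is what really demands the motion capture rigs, since integrating accelerometer data drifts away within seconds.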
So what does the future of aerial robotics look like? Nick says open source projects like ArduPilot and PX4 are still great choices for hobbyists, but sees promise in newer platforms which pair the traditional autopilot with more onboard computing power, such as Auterion’s Skynode. More powerful flight controllers can enable techniques such as simultaneous localization and mapping (SLAM), which uses 3D scans of the environment to help the robot orient itself. He’s also very interested in technologies that enable autonomous flight in GPS-denied environments, which is critical for robotic craft that need to operate indoors or in situations where satellite navigation is unavailable or unreliable. In light of the incredible success of NASA’s Ingenuity helicopter, we imagine these techniques will also play an invaluable role in the future airborne exploration of Mars.
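SLAM itself is far too big to show here, but the mapping half can be sketched as an occupancy grid that accumulates range readings. The grid size and sensor model below are invented for illustration, and the genuinely hard part Nick alludes to, estimating the pose each ray is drawn from, is omitted entirely.

```python
import math

GRID = 100               # cells per side (assumed)
CELL = 0.05              # meters per cell (assumed)
HIT, MISS = 0.85, -0.4   # log-odds updates, an assumed sensor model
log_odds = [[0.0] * GRID for _ in range(GRID)]

def mark_ray(x, y, heading, dist):
    """Trace one range reading: cells along the ray become more likely
    free, and the cell at the far end becomes more likely occupied."""
    steps = int(dist / CELL)
    for i in range(steps + 1):
        cx = int((x + i * CELL * math.cos(heading)) / CELL)
        cy = int((y + i * CELL * math.sin(heading)) / CELL)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            log_odds[cy][cx] += HIT if i == steps else MISS

# One reading taken from the grid center: a wall detected 1.2 m dead ahead
mark_ray(GRID * CELL / 2, GRID * CELL / 2, 0.0, 1.2)
```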
We want to thank Nick for hosting this week’s Aerial Robotics Hack Chat, which turned out to be one of the fastest hours in recent memory. His experience as both an avid hobbyist and a professional in the field provided exactly the sort of insight the Hackaday community looks for, and his gracious offer to keep in touch with several of those who attended the Chat to further discuss their projects speaks to how passionate he is about this topic. We expect to see great things from Nick going forward, and would love to have him join us again in the future to see what he’s been up to.
The Hack Chat is a weekly online chat session hosted by leading experts from all corners of the hardware hacking universe. It’s a great way for hackers to connect in a fun and informal way, but if you can’t make it live, these overview posts as well as the transcripts posted to Hackaday.io make sure you don’t miss out.
The world of automated farming may be an unglamorous one to those not invested in its attractions, but like the robots themselves, which quietly get on with tending crops in the background, those who follow that path spend many seasons refining their designs. The Acorn is a newly open-sourced robot from Twisted Fields, a Californian research farm, and it provides a fascinating look at the progress of a farming robot design from germination onwards.
The Acorn is not a CNC gantry for small intensive gardens in the manner of designs such as the Farmbot; instead, it’s an autonomous solar-powered rover intended for larger farms, which will cruise the fields continuously tending to the plants in its patch. It’s a work in progress, so what we see is the completed rover, with the tools and machine vision to follow. It takes the form of a low-cost, lightweight platform: an aluminium chassis surmounted by the solar panel, with wheel units derived from mountain bike front forks at each corner. It has four-wheel drive and four-wheel steering, meaning that it can traverse the roughest of farmland. We can see its progress since a 2019 prototype, and while it seems as slow as the seasons themselves to mature, the final version could be a significantly useful machine on a small farm.
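As a rough illustration of what four-wheel steering buys, the sketch below computes per-wheel angles for a symmetric counter-steering layout, which puts the turn center on the chassis midline and roughly halves the turning radius for a given wheel angle. The dimensions are invented, and the Acorn’s actual steering kinematics may well differ.

```python
import math

WHEELBASE = 1.4   # m, front-to-rear wheel spacing (assumed)
TRACK = 1.0       # m, left-to-right wheel spacing (assumed)

def wheel_angles(turn_radius):
    """Steering angle in degrees for each wheel of a symmetric 4WS rover.

    With the rear wheels mirroring the front, the turn center lies on the
    chassis midline, and each wheel aims tangent to its own circle.
    """
    half = WHEELBASE / 2
    inner = math.degrees(math.atan2(half, turn_radius - TRACK / 2))
    outer = math.degrees(math.atan2(half, turn_radius + TRACK / 2))
    return {"front_inner": inner, "front_outer": outer,
            "rear_inner": -inner, "rear_outer": -outer}

print(wheel_angles(3.0))   # a gentle 3 m radius turn
```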
We don’t know how much time passed between the invention of the wheel and someone putting wheels on their feet, but we expect that was a great moment of discovery: combining the ability to roll off at speed with our legs’ ability to quickly adapt to changing terrain. Now that we have a wide assortment of recreational wheeled footwear, what’s next? How about teaching robots to skate, too? An IEEE Spectrum interview with [Marko Bjelonic] of ETH Zürich describes progress by one of many research teams working on the problem.
For many of us, the first robot we saw rolling on powered wheels at the end of actively articulated legs was when footage of the Boston Dynamics ‘Handle’ project surfaced a few years ago. Rolling up and down a wide variety of terrain and performing an occasional jump, its athleticism caused quite a stir in robotics circles. But when Handle was introduced as a commercial product, its job was… stacking boxes in a warehouse? That was disappointing. Warehouse floors are quite flat, leaving Handle’s agility under-utilized.
Boston Dynamics has typically been pretty tight-lipped about the details of their robotics development, so we may never know the full story behind Handle. But what they have definitely accomplished is getting a lot more people thinking about the control problems involved. Even humans face a nontrivial learning curve paved with bruised and occasionally broken body parts, and that’s before we start applying power to the wheels. So there are plenty of problems to solve, generating a steady stream of research papers describing how robots might master this mode of locomotion.
Adding to the excitement is the fact that this is becoming an area where reality is catching up to fiction, as wheeled-legged robots have long been imagined in forms like the Tachikoma of Ghost in the Shell. While those fictional robots have inspired projects ranging from LEGO creations to 28-servo beasts, their wheel and leg motions have not been autonomously coordinated as they are in this generation of research robots.
As control algorithms mature in robot research labs around the world, we’re confident we’ll see wheeled-legged robots finding applications in other fields. This concept is far too cool to be left stacking boxes in a warehouse.
It’s usually the simple ideas that sprout bigger ones, and this was the case when we saw [gzumwalt]’s single-motor walking robot crawling up a fridge door with magnets on its feet. (Video, embedded below.)
The walking mechanism consists of an inner foot and two outer feet, connected by three sets of rotating linkages, driven by a single geared motor. The feet move in a leapfrog motion, in small enough steps that the center of mass always stays inside the foot area, which keeps it from tipping over. Besides the previously mentioned ability to crawl around on a vertical magnetic surface, it’s also able to crawl over almost any obstacle shorter than its step length. A larger version should also be able to climb stairs.
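A toy model makes the stability argument concrete: as long as each step is shorter than half a foot’s length, the center of mass never leaves the planted foot. All dimensions below are invented for illustration, not taken from [gzumwalt]’s design.

```python
FOOT_LEN = 60   # mm, length of each foot (assumed)
STEP = 25       # mm, advance per half-cycle (assumed, under FOOT_LEN / 2)

inner, outer = 0, 0   # foot-center positions along the line of travel
for half_cycle in range(4):
    if half_cycle % 2 == 0:
        inner += STEP   # outer feet planted, inner foot swings forward
        planted = outer
    else:
        outer += STEP   # inner foot planted, outer feet swing forward
        planted = inner
    com = (inner + outer) / 2   # mass sits roughly midway between the feet
    assert abs(com - planted) <= FOOT_LEN / 2, "would tip over"
    print(f"half-cycle {half_cycle}: inner={inner} outer={outer} com={com:.1f}")
```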
As shown, this robot can only travel in a straight line, but this could be solved by adding a disc on the bottom of the inner foot to turn the robot when the outer feet are off the surface. Add some microswitch feelers and an Arduino, and it can autonomously explore your fridge without falling off. Maybe we’ll get around to building it ourselves, but be sure to drop us a tip if you beat us to it!
It’s amazing how many things have managed to move online in recent weeks, many with the beneficial side effect of eliminating travel, making them more accessible to everyone around the world. Some events, though, had a virtual track before it was cool, among them the DARPA Subterranean Challenge (SubT) robotics competition. Recent additions to their “Hello World” tutorials (with the promise of more to come) have continued to lower the barrier of entry for aspiring roboticists.
We all love watching physical robots explore the real world, which is why SubT’s “Systems Track” gets most of the attention. But such participation is necessarily restricted to people who have the resources to build and transport bulky hardware to the competition site, just a tiny subset of all the brilliant minds who could contribute. Hence the “Virtual Track”, which is accessible to anyone with a computer that meets the requirements (64-bit Ubuntu 18 with an NVIDIA GPU). The tutorials help get us up and running on SubT’s virtual testbed, which continues to evolve. With every round, the organizers work to bring the virtual and physical worlds closer together. During the recent Urban Circuit, they made high-resolution scans of both the competition course and the participating robots.
There’s a lot of other traffic on the various SubT code repositories. Motivated by Bitbucket sunsetting its Mercurial support, SubT is moving from Bitbucket to GitHub and doing some housecleaning along the way. Together with the newly added tutorials, this makes it a great time to dive in and see if you want to assemble a team (both of human collaborators and virtual robots) to join the next round of virtual SubT. But if you prefer to stay an observer of the physical world, enjoy this writeup with many fun details on Systems Track robots.
Robots might be finding their footing above ground, but today’s autonomous robots have a difficult time operating underground. DARPA wanted to give the state of the art a push forward, so they are running a Subterranean (SubT) Challenge which just wrapped up its latest round. A great review of this Urban Circuit competition (and some of the teams participating in it) has been published by IEEE Spectrum. This is the second of three underground problem subdomains presented to the participants, six months apart, preparing them for the final event which will combine all three types.
If you missed the livestream or prefer edited highlight videos, they’re all part of DARPAtv’s Subterranean Challenge playlist. It currently starts with a compilation of Urban Circuit highlights and continues through team profiles, video walkthroughs of the competition courses, actual competition footage, edited recap videos, and the awards ceremony. Half of the playlist is video from the Tunnels Circuit six months ago, so we can compare to see how teams performed and what they’ve learned along the way. Many more lessons were learned in the just-completed Urban Circuit, and teams will spend the next six months improving their robots. By then we’ll have the Caves Circuit competition, with teams ready to learn new lessons about operating robots underground.