Sending teams of tiny drones to explore areas and structures is a staple in sci-fi and research, but the weight and size of sensors and the required processing power have long been a limiting factor. In the video below, a research team from [ETH Zurich] breaks through these limits, demonstrating indoor mapping with a swarm of tiny drones without dependence on any external systems.
The drones are built on the modular Crazyflie platform, which uses stackable PCBs (decks) to expand its capabilities. The team added a Flow deck for altitude control and motion tracking, and a Loco positioning deck with a UWB module to determine the relative distances between drones. On top of these, the team added two custom decks. The first mounts four VL53L5CX 8×8 pixel time-of-flight (ToF) sensors for omnidirectional LIDAR scanning. The second handles all the required processing with a GAP9 System-on-Chip, which packs ten RISC-V cores running on just 200 mW of power.
Of course, the special sauce of this project lies in the software. The team developed a lightweight collaborative Simultaneous Localization And Mapping (SLAM) algorithm that can be distributed across all the drones in the swarm. It combines each LIDAR scan with the drone's estimated position at the time of the scan, then overlays scans of the same location taken by different drones, compensating for errors in the odometry data. The team also implemented inter-drone collision avoidance, packet collision avoidance, and path optimization. The code is supposed to be available on GitHub, but the link was broken at the time of writing.
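For a sense of what that fusion step involves, here is a minimal, purely illustrative sketch (not the team's actual code; the grid parameters and function names are invented) of folding a single range scan into a shared occupancy grid using the drone's estimated pose:

```python
# Illustrative sketch only: fuse one ToF/LIDAR scan into a shared occupancy
# grid given the drone's estimated pose. The real firmware runs a far more
# sophisticated distributed SLAM pipeline on the GAP9; names here are made up.
import numpy as np

GRID_SIZE = 200          # 200 x 200 cells
CELL_M = 0.05            # 5 cm per cell
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int8)  # 0 unknown, 1 occupied

def insert_scan(grid, pose, ranges, angles, max_range=4.0):
    """Mark cells hit by a scan taken from the drone's estimated pose.

    pose   -- (x, y, yaw) estimate from onboard odometry, in metres/radians
    ranges -- distances reported by the ToF sensors, in metres
    angles -- beam angles relative to the drone body, in radians
    """
    x, y, yaw = pose
    for r, a in zip(ranges, angles):
        if r <= 0 or r >= max_range:
            continue                      # ignore out-of-range returns
        # Project the hit point into the world frame
        wx = x + r * np.cos(yaw + a)
        wy = y + r * np.sin(yaw + a)
        i = int(wx / CELL_M) + GRID_SIZE // 2
        j = int(wy / CELL_M) + GRID_SIZE // 2
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            grid[i, j] = 1                # mark cell as occupied
    return grid
```

A real collaborative SLAM pipeline additionally matches overlapping scans from different drones and corrects the accumulated odometry drift before committing cells to the map, which is where most of the cleverness lives.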
When we think of robotics, the first thing that usually comes to mind for many of us is some sort of industrial arm that’s bolted to the floor, or perhaps a semi-autonomous rover trudging its way across the dusty Martian landscape. While these two environments are about as different as can be, the basic “rules” are pretty much the same. Being on firm ground gives the robot a clear understanding of its position and orientation, which greatly simplifies tasks such as avoiding collisions or interacting with nearby objects.
But what happens when that reference point goes away? How does a robot navigate when it’s flying through open space or hovering in mid-air? That’s just one of the problems that fascinates Nick Rehm, who stopped by to host this week’s Aerial Robotics Hack Chat to talk about his passion for flying robots. He’s currently an aerospace engineer at Johns Hopkins Applied Physics Laboratory, where he works on the unique challenges faced by autonomous flying vehicles such as the detection and avoidance of mid-air collisions, as well as the development of vertical take-off and landing (VTOL) systems. But before he had his Master’s in Aerospace Engineering and Rotorcraft, he got started the same way many of us did, by playing around with DIY projects.
In fact, regular Hackaday readers will likely recall seeing some of his impressive builds. His autonomous ekranoplan designed to follow a target using computer vision graced the front page in April. Back in 2020, we took a look at his recreation of SpaceX’s Starship prototype, which used a realistic arrangement of control surfaces and vectored thrust to perform the spacecraft’s signature “Belly Flop” maneuver — albeit with RC motors and propellers instead of rocket engines. But even before that, Nick recalls asking his mother for permission to pull apart a Wii controller so he could use its inertial measurement unit (IMU) in a wooden-framed tricopter he was working on.
Discussing some of these hobby builds leads the Chat towards Nick’s dRehmFlight project, a GPLv3 licensed flight control package that can run on relatively low-cost hardware, namely a Teensy 4.0 microcontroller paired with the GY-521 MPU6050 IMU. The project is designed to let hobbyists easily experiment with VTOL craft, specifically those that transition between vertical and horizontal flight profiles, and has powered the bulk of Nick’s own flying craft.
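dRehmFlight itself is Arduino C++ running on the Teensy, but the core job of such a flight controller is easy to sketch. The following is a simplified, hedged Python model of the inner stabilization loop; the gains, function names, and quad-X mixer signs are all illustrative assumptions, not dRehmFlight's actual implementation. The idea is simply to compare the IMU's attitude estimate against the pilot's setpoint, run PID controllers on the error, and mix the corrections into motor commands.

```python
# Not dRehmFlight itself (that's Arduino C++ on a Teensy); just a hedged
# sketch of the inner loop a hobby flight controller runs: read the IMU,
# run a PID on the attitude error, and mix the result to the motors.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def control_step(imu_roll, imu_pitch, setpoint_roll, setpoint_pitch,
                 roll_pid, pitch_pid, throttle, dt):
    """One pass of the stabilization loop, returning four motor commands."""
    roll_out = roll_pid.update(setpoint_roll - imu_roll, dt)
    pitch_out = pitch_pid.update(setpoint_pitch - imu_pitch, dt)
    # Simple quad-X mixer: each motor gets throttle plus/minus the corrections
    return [
        throttle + roll_out + pitch_out,   # front-left
        throttle - roll_out + pitch_out,   # front-right
        throttle + roll_out - pitch_out,   # rear-left
        throttle - roll_out - pitch_out,   # rear-right
    ]
```

The appeal of a project like dRehmFlight is that this loop is short enough for a hobbyist to read, tweak, and re-flash in an afternoon, which is exactly what experimenting with odd VTOL airframes demands.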
Moving on to more technical questions, Nick says one of the most difficult aspects when designing an autonomous flying vehicle is getting your constraints nailed down. What he means by that is having a clear goal of what the craft needs to do, and critically, how long it needs to do it. How far does the craft need to be able to fly? How fast? Does it need to loiter at the target location, and if so, for how long? The answers to these questions will largely dictate the form of the final vehicle, and are key to determining if it’s worth implementing the complexity of transitioning from VTOL to fixed-wing horizontal flight.
But according to Nick, the biggest challenge in aerial robotics is onboard state estimation. That is, the ability for the craft to know its position and orientation relative to the ground. While high-performance computers have gotten lighter and sensors have improved, he says there’s still no substitute for having a ground-based tracking system. He mentions that those fancy demonstrations you’ve seen with drones flying in formation and working collaboratively towards a task will almost certainly have an array of motion capture cameras tucked off to the side. This makes for an impressive show, but greatly limits the practical application of these drone swarms.
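At the hobby end of the spectrum, the simplest widely used answer to onboard state estimation is a complementary filter, which blends fast-but-drifting gyro integration with noisy-but-drift-free accelerometer angles. The sketch below is only a rough illustration of that idea (the blend factor and names are assumptions), a long way short of the Kalman-filter-based estimators serious aircraft rely on:

```python
# Hedged illustration of the simplest form of onboard attitude estimation:
# a complementary filter blending gyro integration with the accelerometer's
# gravity reference. Real aircraft use EKFs or better.
import math

ALPHA = 0.98  # trust the gyro for short-term changes, the accel long-term

def complementary_filter(angle, gyro_rate, accel_y, accel_z, dt):
    """Update a roll (or pitch) angle estimate from one IMU sample."""
    gyro_angle = angle + gyro_rate * dt            # integrate the gyro rate
    accel_angle = math.atan2(accel_y, accel_z)     # angle implied by gravity
    return ALPHA * gyro_angle + (1.0 - ALPHA) * accel_angle
```

Estimating attitude this way is the easy part; estimating absolute position without GPS or external cameras is the hard problem Nick is pointing at.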
So what does the future of aerial robotics look like? Nick says open source projects like ArduPilot and PX4 are still great choices for hobbyists, but sees promise in newer platforms which pair the traditional autopilot with more onboard computing power, such as Auterion’s Skynode. More powerful flight controllers can enable techniques such as simultaneous localization and mapping (SLAM), which uses 3D scans of the environment to help the robot orient itself. He’s also very interested in technologies that enable autonomous flight in GPS-denied environments, which is critical for robotic craft that need to operate indoors or in situations where satellite navigation is unavailable or unreliable. In light of the incredible success of NASA’s Ingenuity helicopter, we imagine these techniques will also play an invaluable role in the future airborne exploration of Mars.
We want to thank Nick for hosting this week’s Aerial Robotics Hack Chat, which turned out to be one of the fastest hours in recent memory. His experience as both an avid hobbyist and a professional in the field provided exactly the sort of insight the Hackaday community looks for, and his gracious offer to keep in touch with several of those who attended the Chat to further discuss their projects speaks to how passionate he is about this topic. We expect to see great things from Nick going forward, and would love to have him join us again in the future to see what he’s been up to.
The Hack Chat is a weekly online chat session hosted by leading experts from all corners of the hardware hacking universe. It’s a great way for hackers to connect in a fun and informal way, but if you can’t make it live, these overview posts as well as the transcripts posted to Hackaday.io make sure you don’t miss out.
When the Raspberry Pi 4 came out, [Frank Zhao] saw the potential to make a realtime 3D scanner that was completely handheld and self-contained. The main sensor is an Intel RealSense D415 depth-sensing camera, which combines two IR cameras and an RGB camera, connected to the Raspberry Pi 4. The Pi runs a piece of software called RTAB-Map — intended for robotic applications — which uses the camera data to map the environment in 3D and localize itself within that 3D space. Everything gets recorded in realtime.
This handheld device can act as a 3D scanner because the data gathered by RTAB-Map consists of a point cloud of an area as well as depth information. When combined with the origin of the sensing unit (i.e. the location of the camera within that area), it can export the point cloud as a mesh and even apply a texture derived from the camera footage. An example is shown below the break.
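While [Frank Zhao]'s scanner drives the D415 through RTAB-Map, the raw RGB-D data it consumes is easy to get at directly. Here's a short, hedged sketch using Intel's pyrealsense2 Python bindings to pull synchronized depth and color frames; the resolutions and frame count are arbitrary example values:

```python
# Minimal sketch (not [Frank Zhao]'s code): grab synchronized depth and color
# frames from a RealSense D415 with Intel's pyrealsense2 bindings. RTAB-Map
# consumes exactly this kind of RGB-D data to build and localize in its map.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    for _ in range(100):                       # capture a short burst
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        if not depth or not color:
            continue
        # Distance (in metres) to whatever sits at the center of the image
        print("center depth: %.3f m" % depth.get_distance(320, 240))
finally:
    pipeline.stop()
```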
Robot cars, DIY or otherwise, are hot right now. To do this right, you’re going to need cameras, LIDAR, or some other way of sensing the world. Intel is again getting into the fray with a RealSense tracking camera for simultaneous localization and mapping for robotics, drone, and augmented reality needs.
The tech specs for the Intel RealSense T265 are impressive for small robotics uses. It provides 6DoF tracking gathered by two cameras, each with a 170° FoV, and connects to a computer over USB 2.0 or 3.0. If you want to get an idea of how seriously Intel is taking the ‘robotics, and other power- and weight-limited platforms’ market, here’s a sample of what is on the one-page spec sheet: the T265 only uses 1.5 Watts, weighs 55 grams, and measures 108 x 25 x 13 mm. There are also two M3 taps spaced 50 mm apart on the back, which is an astonishing spec to publish on the product landing page. Simply the fact that the location and dimensions of the mounting holes are listed so prominently gives you an idea of how much Intel has robotics and prototyping applications in mind.
This new SLAM camera complements Intel’s other tracking camera offerings, including those we’ve seen at Maker Faires past. It’s a competitor to the new crop of solid-state LIDAR modules we’ve seen pop up recently. It’s not a Kinect, but we’re years past using a first-gen Kinect for robotics applications. Now, everything is custom chips and onboard SLAM processing, and the RealSense T265 is the smallest platform yet to do it.
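Because the T265 does its SLAM on the built-in VPU, the host just reads out poses. A minimal sketch of doing so with Intel's pyrealsense2 Python bindings might look like the following; the print format is our own, while the pose stream and field names come from the librealsense API:

```python
# Hedged sketch: pull 6DoF pose data from a T265 over USB using Intel's
# pyrealsense2 bindings. The camera's VPU does all the tracking; the host
# simply reads out position, orientation, and a confidence value.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        pose_frame = frames.get_pose_frame()
        if not pose_frame:
            continue
        data = pose_frame.get_pose_data()
        print("xyz = (%.2f, %.2f, %.2f) m, confidence %d" %
              (data.translation.x, data.translation.y,
               data.translation.z, data.tracker_confidence))
finally:
    pipeline.stop()
```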
[Jack Qiao] wanted an autonomous robot that could be handy around an ever-changing shop. He didn’t want a robot he’d have to babysit; if he said, ‘bring me the 100 ohm resistors’, it should go find them and bring them to him.
He iterated a bit, and ended up building quite a nice robot platform for under a thousand dollars. It’s got a RealSense camera and a rangefinder from a Neato robotic vacuum. In addition to a microphone, it has a whole suite of sensors in its base, which is a stripped-down robotic vacuum from a Korean manufacturer. A few more components come together to give it an arm and a gripper.
The thinking is done on an Nvidia Jetson TK1 board, with the GPU cores used to accelerate the computer vision calculations. The software is all ROS based.
As can be seen in the video after the break, the robot uses SLAM techniques to navigate and complete tasks such as fetching resistors, getting water, and more. [Jack Qiao] is happy with his robot, and we would be too.
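The post doesn't include [Jack Qiao]'s source, so purely to give a flavor of what a ROS stack like this looks like, here's a hedged, minimal rospy node that listens to the Neato LIDAR's scan topic and only creeps forward when nothing is within half a metre. The topic names and thresholds are assumptions, not taken from his robot:

```python
# Not [Jack Qiao]'s code -- just a minimal ROS (rospy) node illustrating the
# pattern: subscribe to the LIDAR's /scan topic and publish velocity commands
# on /cmd_vel, stopping whenever an obstacle gets too close.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

def scan_callback(scan):
    cmd = Twist()
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid and min(valid) > 0.5:
        cmd.linear.x = 0.2          # creep forward when the way is clear
    # otherwise the all-zero Twist is published, i.e. stop
    cmd_pub.publish(cmd)

rospy.init_node('simple_avoider')
cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/scan', LaserScan, scan_callback)
rospy.spin()
```

In the real robot, ROS navigation and SLAM packages sit between the scan data and the motor commands; this node only shows where such a stack plugs in.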
I start each day checking out the new and updated projects over on Hackaday.io. Each day one can find all manner of projects – from satellites to machine vision to rockets. One type of project which is always present is robots: robot arms, educational ‘bots, autonomous robots, and mobile robots. This week a few great robot projects showed up on Hackaday.io’s “new and updated” page, so I’m using the Hacklet to take a closer look.
We start with [Jack Qiao] and Autonomous home robot that does things. [Jack] is building a robot that can navigate his home. He’s learned that just creating a robot that can get itself from point A to point B in the average home is a daunting task. To make this happen, he’s using a Simultaneous Localization and Mapping (SLAM) algorithm, implemented with the help of the Robot Operating System (ROS). The robot started out as a test mule tethered to a laptop. It’s evolved to a wooden base with a mini ITX motherboard. Mapping data comes in through a Kinect V2, which will soon be upgraded to a Neato XV-11 LIDAR system.
Next up is [Tyler Spadgenske] with TyroBot. TyroBot is a walking robot with some lofty goals, including walking a mile in a straight line without falling down. [Tyler’s] inspiration comes from robots such as Bob the Biped and Zowi. So far, TyroBot consists of legs and feet printed in PLA. [Tyler] is going to use a 32 bit processor for [TyroBot’s] brain, and wants to avoid the Arduino IDE at any cost (including writing his own IDE from scratch). This project is just getting started, so head on over to the project page and watch TyroBot’s progress!
Next is [Mike Rigsby] with Little Friend. Little Friend is a companion robot. [Mike] found that robots spend more time charging batteries than interacting. This wouldn’t do for a companion robot. His solution was to do away with batteries altogether. Little Friend is powered by supercapacitors, and an 8 minute charge will keep this little bot going for 75 minutes. An Arduino with a motor shield controls Little Friend’s DC drive motors, as well as two animated eyes. If you can’t tell, [Mike] used a tomato as his inspiration. This keeps Little Friend in the cute zone, far away from the uncanny valley.
Finally we have the walking robot king, [Radomir Dopieralski], with Logicoma-kun. For the uninitiated, a Logicoma is a robot tank (or “logistics robot”) from the Ghost in the Shell series. [Radomir] decided to bring these cartoon tanks to life – at least in miniature. The bulk of Logicoma-kun is built from carefully cut and sculpted acrylic sheet. Movement is via the popular 9 gram servos found all over the internet. [Radomir] recently wrote an update outlining his new brain for Logicoma-kun: an Arduino Pro Mini will handle servo control, while the main computer will be an ESP8266 running MicroPython. I can’t wait to see this little ‘bot take its first steps.
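[Radomir] hasn't detailed the link between the two boards here, so the MicroPython snippet below is only a guess at how the ESP8266 "brain" might hand servo angles to the Pro Mini over a UART; the message format, channel numbering, and wiring are all invented for illustration:

```python
# Only a guess at the ESP8266 side: the real protocol between the MicroPython
# brain and the Arduino Pro Mini servo driver isn't published, so this text
# command format is made up for the sake of example.
from machine import UART
import time

uart = UART(0, 115200)            # hardware UART wired to the Pro Mini

def set_servo(channel, angle):
    """Send one newline-terminated 'set servo' command to the Pro Mini."""
    angle = max(0, min(180, int(angle)))
    uart.write("S{}:{}\n".format(channel, angle))

# Sweep one leg servo back and forth as a smoke test
while True:
    for a in (60, 90, 120, 90):
        set_servo(0, a)
        time.sleep(0.3)
```

Splitting the work this way keeps the timing-sensitive servo pulses on the Arduino while the ESP8266 stays free for higher-level gait logic and Wi-Fi.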
The Robot Operating System (ROS) is typically associated with big robots, but [Grassjelly] decided to prove otherwise by creating Linorobot. This small, differential-drive robot is similar in appearance to the many small Arduino-based robots often used for line following, but it packs a lot more computing power, with a Teensy 3.1 connected to a Radxa Rock Pro. The Teensy drives the motors, reads their encoders, and acquires the IMU data.
The Radxa, new to us here at Hackaday, is a single-board computer based on a quad-core 1.6 GHz ARM Cortex-A9 CPU. It may not have been seen on our pages before, but if you’re at Hackaday Belgrade you can attend a session on building a cluster with it. The ability to run Linux is key to using ROS, an open source framework for controlling robots. With the Radxa running ROS, it interfaces directly to the Neato XV-11 LIDAR’s dedicated controller board.
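To give a flavor of the low-level work the Teensy side handles before ROS ever sees a pose, here's an illustrative differential-drive odometry update from wheel encoder ticks. It isn't Linorobot's actual code, and the wheel geometry and encoder resolution are made-up example values:

```python
# Illustrative only: the sort of differential-drive odometry a robot like
# Linorobot computes from wheel encoder ticks before handing poses to
# ROS and SLAM. All constants below are assumed example values.
import math

TICKS_PER_REV = 360        # encoder counts per wheel revolution (assumed)
WHEEL_RADIUS = 0.03        # wheel radius in metres (assumed)
WHEEL_BASE = 0.20          # distance between the wheels in metres (assumed)

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Advance the robot pose using the encoder ticks seen since last update."""
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0            # distance travelled
    d_theta = (d_right - d_left) / WHEEL_BASE      # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Odometry like this drifts over time, which is exactly why the LIDAR and SLAM are there to correct it.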
[Demo clips from the original post: avoiding a hand, and mapping with the LIDAR.]
The Linorobot packs into a small package the capabilities usually seen in much larger and more expensive robots, such as the Turtlebot 2. With this diminutive robot, hackers can learn about SLAM (Simultaneous Localization and Mapping) and autonomous navigation, plus the other capabilities of ROS.
[Grassjelly] has a tutorial on building the robot which is also a good introduction to ROS. He provides the software as open source. It’s an impressive project that delivers a small, comparatively affordable robot for learning and working with ROS. A video of Linorobot SLAMing and navigating [Grassjelly’s] lab is after the break.