Ground Effect Drone Flies Autonomously

There are a number of famous (yet fictional) sea monsters in the lakes and oceans around the world, but in the Caspian Sea one turned out to be real. This is where the Soviet Union built the first vehicles specifically designed to take advantage of the ground effect, and one of the first was known as the Caspian Sea Monster due to the mystery surrounding its discovery. While these unique airplane/boat hybrids were eventually abandoned after several were built for military use, the style of aircraft still has some niche uses and can even serve as a platform for autonomous drones.

This build from [Think Flight] started off as a simple foam model of just such a ground effect vehicle (or “ekranoplan”) in his driveway. With a few test flights the model was refined enough to attach a small propeller and battery. The location of the propeller changed from rear-mounted to front-mounted and then back to rear-mounted for the final version, with each configuration having different advantages and disadvantages. The final model includes an Arduino running the ArduPilot autopilot software, and with an airspeed sensor installed the drone is able to maintain flight in the ground effect and autonomously navigate pre-programmed waypoints around a lake at high speed.
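ArduPilot speaks MAVLink, so a companion script can hand the autopilot a set of waypoints and let it do the flying. As a rough illustration of what a pre-programmed lake run looks like from the software side, here is a minimal sketch using the DroneKit library; the connection string, coordinates, and speeds are invented for the example and are not from [Think Flight]’s actual build.

```python
# Hypothetical sketch: guide an ArduPilot vehicle through a few lake waypoints.
# Connection string and coordinates are placeholders, not from the original build.
import time
from dronekit import connect, VehicleMode, LocationGlobalRelative

vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)

# Lake waypoints (lat, lon, altitude in metres) -- invented for illustration.
waypoints = [
    (44.5001, -73.2001, 3),
    (44.5010, -73.1985, 3),
    (44.5020, -73.2015, 3),
]

vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True
while not vehicle.armed:              # wait for the autopilot to accept arming
    time.sleep(0.5)

for lat, lon, alt in waypoints:
    target = LocationGlobalRelative(lat, lon, alt)
    vehicle.simple_goto(target, groundspeed=15)  # head for the next waypoint
    time.sleep(20)                    # crude pause; a real mission would check distance

vehicle.mode = VehicleMode("RTL")     # return to launch when the loop is done
vehicle.close()
```

A real ekranoplan mission would lean on the airspeed sensor and altitude hold to stay pinned in the ground effect rather than flying fixed GPS altitudes, but the waypoint plumbing looks much the same.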

For a Cold War technology that militaries have largely abandoned in favor of other modes of transportation, thanks to its limited use case and extremely narrow flight tolerances, the ground effect vehicle remains relatively popular as a remote-controlled craft. This RC ekranoplan used the same ArduPilot software, but paired it with a LIDAR system instead of GPS to navigate its way around its environment.

Thanks to [TTN] for the tip!

Continue reading “Ground Effect Drone Flies Autonomously”

Getting Ready For Mars: The Seven Minutes Of Terror

For the past seven months, NASA’s newest Mars rover has been closing in on its final destination. As Perseverance eats up the distance and heads for the point in space that Mars will occupy on February 18, 2021, the rover has been more or less idle. Tucked safely into its aeroshell, the lonely space traveler has had little to say lately, except for a single audio clip of the whirring of its cooling pumps.

Its placid journey across interplanetary space stands in marked contrast to what lies just ahead of it. Like its cousin and predecessor Curiosity, Perseverance has to successfully negotiate a gauntlet of orbital and aerodynamic challenges, and do so without any human intervention. NASA mission planners call it the Seven Minutes of Terror, since the whole process will take just over 400 seconds from the time it encounters the first wisps of the Martian atmosphere to when the rover is safely on the ground within Jezero Crater.

For that to happen, and for the two-billion-dollar mission to even have a chance at fulfilling its primary objective of searching for signs of ancient Martian life, every system on the spacecraft has to operate perfectly. It’s a complicated, high-energy ballet with high stakes, so it’s worth taking a look at the Seven Minutes of Terror, and what exactly will be happening, in detail.

Continue reading “Getting Ready For Mars: The Seven Minutes Of Terror”

Legged Robots Put On Wheels And Skate Away

We don’t know how much time passed between the invention of the wheel and someone putting wheels on their feet, but we expect that was a great moment of discovery: combining the ability to roll off at speed with our legs’ ability to quickly adapt to changing terrain. Now that we have a wide assortment of recreational wheeled footwear, what’s next? How about teaching robots to skate, too? An IEEE Spectrum interview with [Marko Bjelonic] of ETH Zürich describes progress by one of many research teams working on the problem.

For many of us, the first robot we saw rolling on powered wheels at the end of actively articulated legs was Boston Dynamics’ ‘Handle’, when footage of the project surfaced a few years ago. Rolling up and down a wide variety of terrain and performing the occasional jump, the robot’s athleticism caused quite a stir in robotics circles. But when Handle was introduced as a commercial product, its job was… stacking boxes in a warehouse? That was disappointing. Warehouse floors are quite flat, leaving Handle’s agility under-utilized.

Boston Dynamics has typically been pretty tight-lipped about the details of their robotics development, so we may never know the full story behind Handle. But what they have definitely accomplished is getting a lot more people thinking about the control problems involved. Even we humans face a nontrivial learning curve paved with bruised and occasionally broken body parts, and that’s even before we start applying power to the wheels. So there are plenty of problems to solve, generating a steady stream of research papers describing how robots might master this mode of locomotion.

Adding to the excitement is the fact that this is becoming an area where reality is catching up to fiction, as wheeled-legged robots have long been imagined in forms like the Tachikoma of Ghost in the Shell. While those fictional robots have inspired projects ranging from LEGO creations to 28-servo beasts, their wheel and leg motions have not been autonomously coordinated as they are in this generation of research robots.

As control algorithms mature in robot research labs around the world, we’re confident we’ll see wheeled-legged robots finding applications in other fields. This concept is far too cool to be left stacking boxes in a warehouse.

Continue reading “Legged Robots Put On Wheels And Skate Away”

Robots Learning To Understand Their Surroundings

Today it is pretty easy to build a robot with an onboard camera and have fun manually driving it through that first-person view. But builders with dreams of autonomy quickly learn there is a lot of work between camera installation and autonomously executing a “go to chair” command. Fortunately we can draw upon work such as the View Parsing Network by [Bowen Pan, Jiankai Sun, et al].

When a camera image comes into a computer, it is merely a large array of numbers representing red, green, and blue color values, and our robot has no idea what that image represents. Over the past few years, computer vision researchers have found pretty good solutions to the problems of image classification (“is there a chair?”) and segmentation (“which pixels correspond to the chair?”). While useful for building an online image search engine, this is not quite enough for robot navigation.
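To make the “array of numbers” point concrete, here is a small sketch that takes raw RGB values and produces a per-pixel “chair” mask. It uses a stock torchvision segmentation model purely for illustration; it is not the network used in the paper, and the filename and class index are just the PASCAL VOC convention.

```python
# Sketch: from raw RGB numbers to a per-pixel "which pixels are the chair?" mask.
# Uses an off-the-shelf torchvision model for illustration; the paper uses its own networks.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

image = Image.open("room.jpg").convert("RGB")   # just H x W x 3 numbers to the robot

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

with torch.no_grad():
    output = model(preprocess(image).unsqueeze(0))["out"]   # 1 x classes x H x W

labels = output.argmax(dim=1).squeeze(0)    # per-pixel class index
chair_class = 9                             # "chair" in the PASCAL VOC label set
chair_mask = (labels == chair_class)        # True where the model thinks the chair is
print("chair pixels:", int(chair_mask.sum()))
```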

A robot needs to translate those pixel coordinates into a real-world layout, and this is the problem the View Parsing Network offers to solve. Detailed in Cross-view Semantic Segmentation for Sensing Surroundings (DOI 10.1109/LRA.2020.3004325), the system takes in multiple camera views looking all around the robot. Results of image segmentation are then synthesized into a 2D top-down segmented map of the robot’s surroundings. (“Where is the chair located?”)
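The View Parsing Network learns that view transformation end-to-end, so the following is not the paper’s method; it is only a classical geometric sketch of the underlying goal, assuming known camera heights and yaw angles, in which ground-plane pixels from several segmented views are projected into one shared top-down grid.

```python
# Geometric sketch (NOT the learned VPN approach): project ground-plane pixels from
# several segmented camera views into a single top-down semantic grid.
# Camera parameters and fake segmentation maps below are invented for illustration.
import numpy as np

GRID = 100                  # 100 x 100 cells
CELL = 0.05                 # 5 cm per cell -> a 5 m x 5 m map around the robot
top_down = np.zeros((GRID, GRID), dtype=np.uint8)

def project_view(seg, fx, fy, cx, cy, cam_height, yaw):
    """Drop each labelled pixel onto the ground plane and into the grid."""
    h, w = seg.shape
    for v in range(h):
        for u in range(w):
            label = seg[v, u]
            if label == 0:                  # skip background
                continue
            # Ray direction in the camera frame (x right, y down, z forward).
            ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            if ray[1] <= 0:                 # ray never reaches the floor
                continue
            t = cam_height / ray[1]         # scale so the ray descends to the floor
            forward, right = t * ray[2], t * ray[0]
            # Rotate into the robot frame using this camera's yaw.
            x = forward * np.cos(yaw) - right * np.sin(yaw)
            y = forward * np.sin(yaw) + right * np.cos(yaw)
            gx, gy = int(GRID / 2 + x / CELL), int(GRID / 2 + y / CELL)
            if 0 <= gx < GRID and 0 <= gy < GRID:
                top_down[gx, gy] = label    # e.g. label 9 = "chair"

# Fake segmentation outputs from four cameras facing N/E/S/W.
for yaw in [0, np.pi / 2, np.pi, 3 * np.pi / 2]:
    fake_seg = np.zeros((60, 80), dtype=np.uint8)
    fake_seg[40:, 30:50] = 9                # pretend a chair sits ahead of this camera
    project_view(fake_seg, fx=80, fy=80, cx=40, cy=30, cam_height=0.3, yaw=yaw)

print("occupied cells:", int((top_down > 0).sum()))
```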

The authors documented how to train a view parsing network in a virtual environment, and described the procedure to transfer a trained network to run on a physical robot. Today this process demands a significantly higher skill level than “download Arduino sketch” but we hope such modules will become more plug-and-play in the future for better and smarter robots.

[IROS 2020 Presentation video (duration 10:51) requires free registration, available until at least Nov. 25th 2020. One-minute summary embedded below.]

Continue reading “Robots Learning To Understand Their Surroundings”

Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips

What has dual compressed-air cannons, 500 roll-on deodorant balls, and a machine-learning brain with a bad attitude? We didn’t know either, until [Leo Fernekes] dropped this video on his autonomous robot sentry gun and saw it in action for ourselves.

Now, we’ve seen tons of sentry guns on these pages before, shooting everything from water to various forms of Nerf. And plenty of those builds have used some form of machine vision to aim the gun onto the target. So while it might appear that [Leo]’s plowing old ground here, this build is chock full of interesting tips and tricks.

It started when [Leo] saw a video on TensorFlow basics from our friend [Edje Electronics], which gave him the boost needed to jump into an AI project. The controller he ended up with looks for humans in the scene and slews the turret onto target, where the air cannons can do their thing. The hefty ammo is propelled by compressed air, which is dumped into the chamber using a solenoid valve with an interesting driver that maximizes the speed at which it opens. Style points go to the bacteriophage T4-inspired design, and to the sequence starting at 1:34 which reminded us of the factory scene from RoboCop.
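The core loop is conceptually simple: find a person in the frame, convert the bounding box center into pan/tilt commands, and fire once on target. Here is a minimal sketch of that loop, using OpenCV’s stock HOG pedestrian detector as a stand-in for [Leo]’s TensorFlow model; the field-of-view numbers and the servo/valve functions are placeholders, not his actual code.

```python
# Sketch of a detect-and-aim loop. OpenCV's HOG pedestrian detector stands in for
# the TensorFlow model in the original build; servo/valve calls are hypothetical.
import cv2

FOV_PAN_DEG, FOV_TILT_DEG = 60.0, 40.0      # assumed camera field of view

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def set_turret(pan_deg, tilt_deg):
    """Placeholder: send angles to the pan/tilt servos."""
    print(f"aim pan={pan_deg:+.1f} tilt={tilt_deg:+.1f}")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes):
        # Aim at the most confident detection.
        x, y, w, h = boxes[int(weights.argmax())]
        frame_h, frame_w = frame.shape[:2]
        # Offset of the target center from the image center, roughly [-0.5, 0.5].
        dx = (x + w / 2) / frame_w - 0.5
        dy = (y + h / 2) / frame_h - 0.5
        set_turret(dx * FOV_PAN_DEG, dy * FOV_TILT_DEG)
        # fire_valve()  # open the solenoid once the turret has settled on target
    if cv2.waitKey(1) == 27:                # Esc to quit
        break
cap.release()
```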

[Leo] really put a ton of work into this project, and the results show. He is hoping to get an art gallery or museum to show it as an interactive piece to comment on one possible robot-human future, presumably after getting guests to sign a release. Whatever happens to it, the robot looks great and [Leo] learned a lot from it, as did we.

Continue reading “Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips”

Dropping A Glider From 18,000 Feet

[Tarik and Kemal] have an objective in mind: to drop a home-made autonomous glider from a high-altitude balloon and safely return it to home. To motivate them, [Tarik] has decided not to cut his hair until they reach 18,000 feet. Given the ambition of their project, it isn’t surprising that his hair is getting rather long now.

Continue reading “Dropping A Glider From 18,000 Feet”

Drones Can Undertake Excavations Without Human Intervention

Researchers from Denmark’s Aarhus University have developed a method for autonomous drone scanning and measurement of terrains, allowing drones to independently navigate themselves over excavation grounds. The only human input is a starting location and the desired cliff face for scanning.

For researchers studying quarries, capturing data about gravel, walls, and other natural and man-made formations is important for understanding the properties of the terrain. Controlling the drones can be expensive, though, since there’s considerable skill involved in manually flying one while keeping its camera steady and perpendicular to the wall being captured.

The process they designed uses a Gaussian model to predict the wind the drone will encounter near the wall, estimating its strength from the inputs it receives as the drone moves. The feedback control system combines nonlinear model predictive control (NMPC) with a PID controller to calculate the values sent to the drone’s motor controller, and a long short-term memory (LSTM) network handles the predictions. It’s been successfully tested in a chalk quarry in Denmark and will continue to be tested as its algorithms are improved.
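The NMPC and LSTM layers are the researchers’ own, but the PID portion of the loop is textbook material. Here is a minimal sketch, with made-up gains and a toy simulation, of a PID controller holding the drone’s distance to the wall; it illustrates the generic building block, not the Aarhus team’s actual pipeline.

```python
# Minimal PID sketch (gains and toy dynamics invented for illustration); this is the
# generic control building block, not the Aarhus team's actual NMPC/LSTM pipeline.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy simulation: hold the drone 5 m from the quarry wall through a wind gust.
pid = PID(kp=1.2, ki=0.1, kd=0.4, setpoint=5.0)
distance, velocity, dt = 7.0, 0.0, 0.05
for step in range(200):
    wind = 0.5 if 50 < step < 100 else 0.0   # pretend gust pushing toward the wall
    thrust = pid.update(distance, dt)        # positive thrust moves away from the wall
    velocity += (thrust - wind) * dt
    distance += velocity * dt
print(f"final distance to wall: {distance:.2f} m")
```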

Getting a drone to hover and move between GPS waypoints is easy enough, but once it needs to maneuver around obstacles things start getting tricky. Research like this will be invaluable for developing systems that help drones navigate in areas their human operators can’t reach.

[Thanks to Qes for the tip!]