RC car without a top, showing electronics inside.

Fast Indoor Robot Watches Ceiling Lights, Instead Of The Road

[Andy]’s robot is an autonomous RC car, and he shares the localization algorithm he developed to help the car keep track of itself while it zips crazily around an indoor racetrack. Since a robot like this is perfectly capable of driving faster than it can sense, his localization method is the secret to pouring on additional speed without worrying about the car losing itself.

The regular pattern of ceiling lights makes a good foundation for the system to localize itself.

To pull this off, [Andy] uses a camera with a fisheye lens aimed up towards the ceiling, and the video is processed on a Raspberry Pi 3. His implementation is slick enough that it only takes about 1 millisecond to do a localization update, netting a precision on the order of a few centimeters. It’s sort of like a fast indoor GPS, using math to infer position based on the movement of ceiling lights.
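For flavor, here is a minimal sketch of the blob-tracking idea (not [Andy]'s actual implementation; the threshold value and the pixels_per_meter calibration constant are stand-ins): threshold the upward-facing camera frame to isolate the bright lights, then treat the average shift of their centroids between frames as odometry.

```python
import cv2
import numpy as np

def light_centroids(gray):
    """Return an (N, 2) array of centroids of bright blobs in a grayscale frame."""
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(pts)

def update_position(prev_pts, cur_pts, pos, pixels_per_meter):
    """Nudge the (x, y) position estimate by the mean centroid shift between frames."""
    if len(prev_pts) == 0 or len(cur_pts) == 0:
        return pos
    # Naive nearest-neighbor matching; workable when frame-to-frame motion is small.
    dists = np.linalg.norm(prev_pts[:, None] - cur_pts[None, :], axis=2)
    matched = cur_pts[dists.argmin(axis=1)]
    shift = (matched - prev_pts).mean(axis=0)
    # The ceiling's image shifts opposite to the car's own motion.
    return pos - shift / pixels_per_meter
```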

To be useful for racing, this localization method needs to be combined with a map of the racetrack itself, which [Andy] cleverly creates by manually driving the car around the track while recording localization data. Once that is in place, the car has all it needs to autonomously zip around.
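A hypothetical sketch of that map-building step (the 0.25 m spacing is our arbitrary choice, not a value from the project): log the localizer's (x, y) estimates during a manual lap, then resample the path into evenly spaced waypoints for the controller to chase.

```python
import numpy as np

def resample_track(poses, spacing=0.25):
    """poses: (N, 2) array of logged (x, y) positions from one manual lap.
    Returns waypoints spaced `spacing` meters apart along the driven path."""
    steps = np.linalg.norm(np.diff(poses, axis=0), axis=1)
    d = np.concatenate([[0.0], np.cumsum(steps)])  # cumulative distance along the lap
    s = np.arange(0.0, d[-1], spacing)             # evenly spaced stations
    return np.column_stack([np.interp(s, d, poses[:, 0]),
                            np.interp(s, d, poses[:, 1])])
```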

Interested in the nitty-gritty details? You’re in luck, because all of the math behind [Andy]’s algorithm is explained on the project page linked above, and the GitHub repository for [Andy]’s autonomous car has all the implementation details.

The system is location-dependent, but it works so well that [Andy] considers track localization a solved problem. Watch the system in action in the two videos embedded below.

Continue reading “Fast Indoor Robot Watches Ceiling Lights, Instead Of The Road”

Automate The Freight: Autonomous Ships Look For Their Niche

It is by no means an overstatement to say that life as we know it would grind to a halt without cargo ships. If any doubt remained about that fact, the last year and a half of supply chain woes put that to bed; we all now know just how much of the stuff we need — and sadly, a lot of the stuff we don’t need but still think we do — comes to us by way of one or more ocean crossings, on vessels specialized to carry everything from shipping containers to bulk liquid and solid cargo.

While the large and complex vessels that form the backbone of these globe-spanning supply chains are marvelous engineering achievements, they’re still utterly dependent on their crews to make them run efficiently. So it’s not at all surprising to learn that some shipping lines are working on ways to completely automate their cargo ships and reduce their dependence on human labor. On paper, it seems like a great idea — unless you’re a seafarer, of course. But is it a realistic scenario? Will shipping companies realize the savings they apparently hope for by having fleets of unmanned cargo vessels plying the world’s oceans? Is this the right way to automate the freight?

Continue reading “Automate The Freight: Autonomous Ships Look For Their Niche”

Buoyant Aero MK4 keeps station in a tailwind

Aerodynamic Buoyant Blimp Budges Into Low Cost Cargo Commerce

Before the Wright Brothers powered their way across the sands of Kitty Hawk or Otto Lilienthal soared from the hills of Germany, enveloping hot air in a balloon was the only way to fly. Concepts were refined as time went by, and culminated in the grand Zeppelins of the 1930s. However, since the tragic end of the Zeppelin era, lighter-than-air aircraft have often been viewed as a novelty in the aviation world.

Several companies have come forward in the last decade, pitching enormous lighter-than-air machines for hauling large amounts of cargo at reduced cost. These behemoths rely on a mixture of natural buoyancy and lifting-body designs, and are intended to augment ferries and short-haul commercial aviation routes.

It was in this landscape that Buoyant Aero founders [Ben] and [Joe] saw an underserved niche they believe they can thrive in: transporting 300-600 lbs between warehouses or airports. They aim to increase the safety, cargo capacity, and range of traditional quadcopter concepts, and to halve the operating costs of a typical Cessna 182. They hope to help people such as those in rural areas of Alaska, where high transportation costs double the grocery bill.

Like larger designs, Buoyant Aero’s hybrid airship relies on aerodynamics to supply one third of the lift it needs. Such an arrangement eliminates the need for ballast when empty while retaining the handling and navigation characteristics needed for autonomous flight. The smaller scale prototype’s outstanding ability to maneuver sharply and hold station in a tailwind is displayed in the video below the break. You can also learn more about the project in their Hacker News launch post. We look forward to seeing the larger prototypes as they are released!
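As a back-of-the-envelope check on that split (every number here is an illustrative guess, not a Buoyant Aero specification), sizing the envelope for the empty weight means the ship floats neutral with no cargo aboard, which is exactly why no ballast is needed:

```python
# Illustrative numbers only, not Buoyant Aero's specifications.
HELIUM_NET_LIFT = 1.0      # kg of lift per cubic meter, approximate sea-level value

payload_kg = 600 * 0.4536  # top of the 300-600 lb range from the article
empty_kg = 2 * payload_kg  # assumed structure and battery mass

# Size the envelope to float the empty weight: neutral buoyancy with no cargo,
# so no ballast is required, while aerodynamic lift from the hull carries the payload.
envelope_m3 = empty_kg / HELIUM_NET_LIFT
aero_share = payload_kg / (payload_kg + empty_kg)

print(f"envelope volume ~ {envelope_m3:.0f} m^3, aerodynamic share ~ {aero_share:.0%}")
```

With those guesses, the aerodynamic share works out to one third, matching the design target.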

Perhaps this project will inspire your own miniature airship, in which case you may want to check out the Blimpduino for some low buck ideas. We recently covered some other Hybrid Airships that are trying to scale things even further. And if you have your own blimpy ideas you’d like to pass along, please let us know via the Tip Line!

Continue reading “Aerodynamic Buoyant Blimp Budges Into Low Cost Cargo Commerce”

Ground Effect Drone Flies Autonomously

There are a number of famous (yet fictional) sea monsters in the lakes and oceans around the world, but in the Caspian Sea one turned out to be real. This is where the first vehicles specifically built to take advantage of the ground effect were built by the Soviet Union, and one of the first was known as the Caspian Sea Monster due to the mystery surrounding its discovery. While these unique airplane/boat hybrids were eventually abandoned after several were built for military use, the style of aircraft still has some niche uses and can even be used as a platform for autonomous drones.

This build from [Think Flight] started off as a simple foam model of just such a ground effect vehicle (or “ekranoplan”) in his driveway. With a few test flights the model was refined enough to attach a small propeller and battery. The location of the propeller changed from rear-mounted to front-mounted and then back to rear-mounted for the final version, with each configuration having different advantages and disadvantages. The final model includes an Arduino running the ArduPilot autopilot software, and with an airspeed sensor installed the drone is able to maintain flight in the ground effect and autonomously navigate pre-programmed waypoints around a lake at high speed.
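ArduPilot handles the control loops on the real build, but a toy sketch shows the core idea: hold a small fixed height over the water with a downward-facing rangefinder so the craft stays inside the ground effect, while a second loop holds airspeed. The gains and setpoints below are made up, not tuned values from this project.

```python
class PID:
    """Textbook PID controller; nothing ground-effect-specific here."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

height_pid = PID(kp=0.8, ki=0.1, kd=0.3)   # meters of height error -> elevator
speed_pid = PID(kp=0.05, ki=0.01, kd=0.0)  # m/s of airspeed error -> throttle

def control_step(range_m, airspeed_ms, dt, target_height=0.3, target_speed=12.0):
    """One loop iteration: keep the craft low (in ground effect) and fast."""
    elevator = height_pid.update(target_height - range_m, dt)
    throttle = speed_pid.update(target_speed - airspeed_ms, dt)
    return elevator, throttle
```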

For a Cold War technology that’s been largely abandoned by militaries in favor of other modes of transportation, due to its limited use case and extremely narrow flight tolerances, the ground effect vehicle remains relatively popular as a remote-controlled craft. This RC ekranoplan used the same ArduPilot software, but paired it with a LIDAR system instead of GPS to navigate its way around its environment.

Thanks to [TTN] for the tip!

Continue reading “Ground Effect Drone Flies Autonomously”

Getting Ready For Mars: The Seven Minutes Of Terror

For the past seven months, NASA’s newest Mars rover has been closing in on its final destination. As Perseverance eats up the distance and heads for the point in space that Mars will occupy on February 18, 2021, the rover has been more or less idle. Tucked safely into its aeroshell, we’ve heard little from the lonely space traveler lately, except for a single audio clip of the whirring of its cooling pumps.

Its placid journey across interplanetary space stands in marked contrast to what lies just ahead of it. Like its cousin and predecessor Curiosity, Perseverance has to successfully negotiate a gauntlet of orbital and aerodynamic challenges, and do so without any human intervention. NASA mission planners call it the Seven Minutes of Terror, since the whole process will take just over 400 seconds from the time it encounters the first wisps of the Martian atmosphere to when the rover is safely on the ground within Jezero Crater.

For that to happen, and for the two-billion-dollar mission to even have a chance at fulfilling its primary objective of searching for signs of ancient Martian life, every system on the spacecraft has to operate perfectly. It’s a complicated, high-energy ballet with high stakes, so it’s worth taking a look at the Seven Minutes of Terror, and what exactly will be happening, in detail.

Continue reading “Getting Ready For Mars: The Seven Minutes Of Terror”

Legged Robots Put On Wheels And Skate Away

We don’t know how much time passed between the invention of the wheel and someone putting wheels on their feet, but we expect it was a great moment of discovery: combining the ability to roll off at speed with our legs’ ability to quickly adapt to changing terrain. Now that we have a wide assortment of recreational wheeled footwear, what’s next? How about teaching robots to skate, too? An IEEE Spectrum interview with [Marko Bjelonic] of ETH Zürich describes progress by one of many research teams working on the problem.

For many of us, the first robot we saw rolling on powered wheels at the end of actively articulated legs was when footage of the Boston Dynamics ‘Handle’ project surfaced a few years ago. Rolling up and down a wide variety of terrain and performing an occasional jump, its athleticism caused quite a stir in robotics circles. But when Handle was introduced as a commercial product, its job was… stacking boxes in a warehouse? That was disappointing. Warehouse floors are quite flat, leaving Handle’s agility under-utilized.

Boston Dynamics has typically been pretty tight-lipped on details of their robotics development, so we may never know the full story behind Handle. But what they have definitely accomplished is getting a lot more people thinking about the control problems involved. Even humans face a nontrivial learning curve paved with bruised and occasionally broken body parts, and that’s even before we start applying power to the wheels. So there are plenty of problems to solve, generating a steady stream of research papers describing how robots might master this mode of locomotion.

Adding to the excitement is the fact that this is becoming an area where reality is catching up to fiction, as wheeled-legged robots have been imagined in forms like the Tachikoma of Ghost in the Shell. While those fictional robots have inspired projects ranging from LEGO creations to 28-servo beasts, their wheel and leg motions have not been autonomously coordinated as they are in this generation of research robots.

As control algorithms mature in robot research labs around the world, we’re confident we’ll see wheeled-legged robots finding applications in other fields. This concept is far too cool to be left stacking boxes in a warehouse.

Continue reading “Legged Robots Put On Wheels And Skate Away”

Robots Learning To Understand Their Surroundings

Today it is pretty easy to build a robot with an onboard camera and have fun manually driving it through that first-person view. But builders with dreams of autonomy quickly learn there is a lot of work between camera installation and autonomously executing a “go to chair” command. Fortunately, we can draw upon work such as the View Parsing Network by [Bowen Pan, Jiankai Sun, et al].

When a camera image comes into a computer, it is merely a large array of numbers representing red, green, and blue color values, and our robot has no idea what that image represents. Over the past few years, computer vision researchers have found pretty good solutions for the problems of image classification (“is there a chair?”) and segmentation (“which pixels correspond to the chair?”). While useful for building an online image search engine, this is not quite enough for robot navigation.
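To see how accessible that first half has become (this is a generic torchvision example, not the model from the paper), a pretrained DeepLabV3 network will happily label every pixel with one of the 21 Pascal VOC classes, “chair” among them:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained semantic segmentation: per-pixel class logits over 21 VOC classes.
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
prep = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = prep(Image.open("frame.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(img)["out"]     # shape (1, 21, H, W)
classes = logits.argmax(dim=1)     # per-pixel class index
print("chair pixels:", (classes == 9).sum().item())  # 9 = "chair" in Pascal VOC
```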

A robot needs to translate those pixel coordinates into a real-world layout, and this is the problem the View Parsing Network offers to solve. Detailed in Cross-view Semantic Segmentation for Sensing Surroundings (DOI 10.1109/LRA.2020.3004325), the system takes in multiple camera views looking all around the robot. Results of image segmentation are then synthesized into a 2D top-down segmented map of the robot’s surroundings. (“Where is the chair located?”)
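The heart of the trick is a learned transform from each camera’s image plane to top-down coordinates. Here is a minimal sketch of that cross-view idea (layer sizes are illustrative, and the paper learns a separate transform per view where this sketch shares one): flatten each view’s feature map, let an MLP re-arrange it into top-view cells, and average the per-view results.

```python
import torch
import torch.nn as nn

class ViewTransform(nn.Module):
    """Map one camera view's feature map into top-down coordinates."""
    def __init__(self, h=32, w=32):
        super().__init__()
        self.h, self.w = h, w
        self.mlp = nn.Sequential(
            nn.Linear(h * w, h * w), nn.ReLU(),
            nn.Linear(h * w, h * w),
        )

    def forward(self, feat):           # feat: (B, C, H, W) for one view
        b, c, h, w = feat.shape
        flat = feat.flatten(2)         # (B, C, H*W)
        top = self.mlp(flat)           # learned re-arrangement of spatial cells
        return top.view(b, c, self.h, self.w)

vt = ViewTransform()
views = [torch.randn(1, 64, 32, 32) for _ in range(4)]      # four cameras
top_down = torch.stack([vt(v) for v in views]).mean(dim=0)  # fused (1, 64, 32, 32)
```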

The authors documented how to train a view parsing network in a virtual environment, and described the procedure to transfer a trained network to run on a physical robot. Today this process demands a significantly higher skill level than “download Arduino sketch” but we hope such modules will become more plug-and-play in the future for better and smarter robots.

[IROS 2020 Presentation video (duration 10:51) requires free registration, available until at least Nov. 25th 2020. One-minute summary embedded below.]

Continue reading “Robots Learning To Understand Their Surroundings”