Making Autonomous Racing Drones Lean And Mean

Recently the MAVLab (Micro Air Vehicle Laboratory) at the Technical University of Delft in the Netherlands proudly announced an autonomous drone weighing a mere 72 grams. The best part? It’s designed to take part in drone races. This means that using a single camera and onboard processing, this little drone, just 10 centimeters in diameter, has to navigate the course while avoiding obstacles.

To achieve this goal, they took an Eachine Trashcan drone, replaced its camera with the open source JeVois smart machine vision camera, and swapped the autopilot for the open source Paparazzi UAV software. Naturally, scaling a racing drone down to this size comes at an obvious cost: with lower-quality sensors, a lower-quality camera, and far less processing power than its bigger brothers, it has to rely heavily on algorithms that compensate for drift and other glitches while racing.

Currently the drone is mainly being tested at a four-gate race track at TU Delft’s Cyberzoo, where it can fly multiple laps at a leisurely two meters per second, using its gate-detecting algorithms to zip from gate to gate. By using machine vision to do the gate detection, the drone can deal with gates being displaced from their position indicated on the course map.
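The article doesn’t detail MAVLab’s detector, but as a rough idea of what single-camera gate detection can look like, here’s a minimal color-threshold sketch in Python with OpenCV. It assumes brightly colored (here, orange) gate markers and simply reports how far the largest gate-like blob sits from the image center; the HSV thresholds and blob-size cutoff are illustrative guesses, not values from the actual system.

```python
import cv2

def find_gate_offset(frame_bgr):
    """Return the horizontal offset of a gate-like blob from image center,
    normalized to [-1, 1], or None if nothing plausible is found.

    Toy color-threshold detector, not MAVLab's algorithm: the HSV range
    assumes orange gate markers and would need tuning for real gates.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))   # orange-ish pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    gate = max(contours, key=cv2.contourArea)   # assume the biggest blob is the nearest gate
    if cv2.contourArea(gate) < 200:             # ignore small noise specks
        return None
    x, _, w, _ = cv2.boundingRect(gate)
    cx = x + w / 2.0
    half_width = frame_bgr.shape[1] / 2.0
    return (cx - half_width) / half_width       # negative means the gate is to the left
```

The normalized offset could then feed a simple proportional yaw command; the real drone also has to estimate its distance to each gate and cope with gates that have been moved.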

While competitive with other, much larger autonomous racing drones, the system is still far removed from the performance of human-controlled racing drones. To close this gap, MAVLab’s [Christophe De Wagter] mentions that they’re looking at improving the algorithms, making them better at predictive control and state estimation, as well as at the machine vision side of things. Ideally these little drones should end up far more nimble and quick than they are today.

See a video of the drone in action after the link.

Continue reading “Making Autonomous Racing Drones Lean And Mean”

DARPA Goes Underground For Next Challenge

We all love reading about creative problem-solving work done by competitors in past DARPA robotic challenges. Some of us even have ambitions to join the fray and compete first-hand instead of just reading about the contests after the fact. If this describes you, step on up to the DARPA Subterranean Challenge.

Following up on past challenges to build autonomous vehicles and humanoid robots, DARPA now wants to focus collective brainpower on solving problems encountered by robots working underground. There will be two competition tracks: the Systems Track is what we’ve come to expect, where teams build both the hardware and software of robots tackling the competition course. But there will also be a Virtual Track, opening up the challenge to those without the resources to build big, expensive physical robots. Competitors on the Virtual Track will run their competition course in the Gazebo robot simulation environment. This is similar to the NASA Space Robotics Challenge, where algorithms competed to run a virtual robot through tasks in a simulated Mars base. The virtual environment makes the competition accessible to people without machine shops or big budgets; the winner of the NASA SRC was, in fact, a one-person team.

Back on the topic of the upcoming DARPA challenge: each track will involve three sub-domains. Each of these has civilian applications in exploration, infrastructure maintenance, and disaster relief, as well as the obvious military applications.

  • Man-made tunnel systems
  • Urban underground
  • Natural cave networks

There will be a preliminary circuit competition for each, spaced roughly six months apart, to help teams get warmed up one environment at a time. But for the final event in the fall of 2021, the challenge course will integrate all three types.

More details will be released on Competitor’s Day, taking place September 27th, 2018. Registration for the event just opened on August 15th. Best of luck to all the teams! And just like we did for past challenges, we will excitedly follow progress. (And have a good-natured laugh at fails.)

DroNet: learning to fly by driving

Delivery Drones Can Learn From Driving And Cycling

Drones are increasingly being used for urban surveillance, delivery, and inspection of architectural structures. Doing this autonomously often involves “map-localize-plan” techniques: first the drone’s location is determined on a map using GPS, and then control commands are produced based on that position.

A neural network that predicts steering and collisions can complement the map-localize-plan techniques. However, the neural network needs to be trained using video taken from actual flying drones, and generating that training video would involve many hours of flying drones at street level, putting vehicles and pedestrians at risk. To train their DroNet, researchers from the University of Zurich and the Universidad Politecnica de Madrid came up with safer sources for that video: footage recorded from cars and bicycles.

DroNet

For the drone steering predictions, they used over 70,000 images and corresponding steering angles from the publicly available car driving data of Udacity’s Open Source Self-Driving project. For the collision predictions, they mounted a GoPro camera to the handlebars of a bicycle and rode around a city. Video recording began when the bicycle was distant from an object and stopped when it was very close to the object. In total, they collected 32,000 images.
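The article doesn’t spell out how those bicycle clips become training labels, but one plausible scheme, given that each recording starts far from an obstacle and ends right in front of it, is to label the opening frames as “no collision” and the closing frames as “collision”. A hedged sketch in Python (the fractions and file layout are made up for illustration):

```python
from pathlib import Path

def label_collision_clip(frame_paths, safe_fraction=0.6, danger_fraction=0.2):
    """Assign binary collision labels to one clip's ordered frames.

    Each clip starts far from an obstacle and ends very close to it, so the
    opening portion is labeled 0 (safe) and the closing portion 1 (imminent
    collision); the ambiguous middle is dropped. Fractions are illustrative.
    """
    n = len(frame_paths)
    labeled = []
    for i, path in enumerate(frame_paths):
        if i < safe_fraction * n:
            labeled.append((path, 0))               # still far from the obstacle
        elif i >= (1.0 - danger_fraction) * n:
            labeled.append((path, 1))               # about to hit it
    return labeled

# Hypothetical layout: one directory of timestamp-named JPEGs per bicycle clip.
clip_frames = sorted(Path("bike_clip_001").glob("*.jpg"))
dataset = label_collision_clip(clip_frames)
```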

To use the trained network, images from the drone’s forward-facing camera were fed into the network, and the outputs were a steering angle and a probability of collision, which was turned into a forward velocity. The drone remained at a constant height above ground, though it worked well anywhere from 1.5 meters to 5 meters up. It successfully navigated road lanes and avoided moving pedestrians and bicycles. Intersections did confuse it though, likely due to the open spaces messing with the collision predictions. But we think that shouldn’t be a problem when paired with map-localize-plan techniques, since a direction through the intersection would then be chosen for it using the location on the map.
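As a rough sketch of how those two outputs might become flight commands, the forward speed can be scaled down as the predicted collision probability rises, with both outputs low-pass filtered so a single noisy frame doesn’t jerk the drone around. The gains and maximum speed below are illustrative, not the values used in DroNet:

```python
def update_commands(steer_pred, p_collision, prev_v, prev_yaw,
                    v_max=1.0, alpha=0.7, beta=0.5):
    """Turn one network prediction into smoothed velocity and yaw commands.

    Forward speed shrinks as the predicted collision probability rises, and
    both outputs are low-pass filtered so the drone doesn't overreact to a
    single noisy frame. Gains and v_max are illustrative, not DroNet's.
    """
    v_target = (1.0 - p_collision) * v_max             # slow down near obstacles
    v = (1.0 - alpha) * prev_v + alpha * v_target      # smoothed forward speed (m/s)
    yaw = (1.0 - beta) * prev_yaw + beta * steer_pred  # smoothed steering command
    return v, yaw

# Example: a high collision probability drags the commanded speed down.
v, yaw = update_commands(steer_pred=0.1, p_collision=0.9, prev_v=0.8, prev_yaw=0.0)
```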

As you can see in the video below, it not only does a decent job of flying down lanes but it also flies well in a parking garage and a hallway, even though it wasn’t trained for either of these.

Continue reading “Delivery Drones Can Learn From Driving And Cycling”

Taking First Place At IMAV 2016 Drone Competition

The IMAV (International Micro Air Vehicle) conference and competition is a yearly flying robotics competition hosted by a different university every year. AKAMAV – a university student group at TU Braunschweig in Germany – has written up a fascinating and detailed account of what it was like to compete (and take first place) in 2016’s eleven-mission event hosted by the Beijing Institute of Technology.

AKAMAV’s debrief of IMAV 2016 is well-written and insightful. It covers not only the five outdoor and six indoor missions, but also details what it was like to prepare for and compete in such an intensive event. In their words, “If you share even a remote interest in flying robots and don’t mind the occasional spectacular crash, this place was Disney Land on steroids.”

Continue reading “Taking First Place At IMAV 2016 Drone Competition”

Meet Blue Jay, The Flying Drone Pet Butler

Twenty students at the Eindhoven University of Technology (TU/e) in the Netherlands share one vision of the future: the fully domesticated drone pet – a flying friend that helps you whenever you need it and, in general, is very, very cute. Their drone “Blue Jay” is packed with sensors, has a strong claw for grabbing and carrying cargo, navigates autonomously indoors, and interacts with humans at eye level.

Continue reading “Meet Blue Jay, The Flying Drone Pet Butler”

Flying High With Zynq

[Aerotenna] recently announced the first successful flight of an unmanned air vehicle (UAV) powered by a Xilinx Zynq processor running ArduPilot. The Zynq combines a dual-core ARM processor with an on-chip FPGA that can offload work from the CPU or provide custom I/O devices. They plan to release their code to their OcPoC (Octagonal Pilot on a Chip) project, an open source initiative that partners with Dronecode, an open source UAV platform.

Continue reading “Flying High With Zynq”


Project Sea Rendering Autonomously Renders Sea Bottoms

[Geir] has created a pretty neat device; it’s actually his second version of an autonomous boat that maps the depths of lakes and ponds. He calls it the Sea Rendering. The project is pretty serious, as the hull was specially made from fiberglass. Propulsion comes from a simple DC motor, and the rudder is driven by an RC servo. A light and a flag adorn the top deck, making the small craft visible to other, larger boats that may be passing by. Seven batteries supply all of the power.

Sea Rendering

The craft’s course is pre-programmed in Mission Planner, and ArduPilot running on an Arduino steers it to the defined waypoints. An onboard GPS module determines the position of the boat while a transducer measures the depth of the water. Both position and depth values are then saved to an SD card. Those values can later be imported into software called Dr Depth, which generates a topographic map of the water-covered floor.
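To give a feel for what happens to that SD card log afterwards, here’s a minimal sketch of averaging the logged position and depth samples into a coarse grid, which is the first step toward the kind of depth map Dr Depth draws. The CSV column names and cell size are assumptions, not the boat’s actual log format:

```python
import csv
import numpy as np

def grid_depths(log_path, cell_m=5.0):
    """Average logged (lat, lon, depth) samples into a coarse depth grid.

    Assumes a CSV with 'lat', 'lon', and 'depth' columns as a stand-in for
    whatever the boat actually writes to its SD card. Positions are turned
    into local meters with a flat-earth approximation, fine for a small lake.
    """
    lats, lons, depths = [], [], []
    with open(log_path) as f:
        for row in csv.DictReader(f):
            lats.append(float(row["lat"]))
            lons.append(float(row["lon"]))
            depths.append(float(row["depth"]))
    lats, lons, depths = map(np.asarray, (lats, lons, depths))

    # Flat-earth conversion to meters, relative to the first GPS fix.
    y = (lats - lats[0]) * 111_320.0
    x = (lons - lons[0]) * 111_320.0 * np.cos(np.radians(lats[0]))

    # Bin each sample into a cell and average the depths per cell.
    ix = ((x - x.min()) / cell_m).astype(int)
    iy = ((y - y.min()) / cell_m).astype(int)
    total = np.zeros((iy.max() + 1, ix.max() + 1))
    count = np.zeros_like(total)
    np.add.at(total, (iy, ix), depths)
    np.add.at(count, (iy, ix), 1)
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)
```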

[Geir] has sent this bad boy out on an 18 km journey passing through 337 waypoints. That’s pretty impressive! He estimates a run time of 24 hours at a top speed of 3 km/h, meaning it could potentially travel 72 km on a single charge while taking 700 depth measurements during the voyage.

Continue reading “Project Sea Rendering Autonomously Renders Sea Bottoms”