Drones are increasingly being used for urban surveillance, delivery, and inspection of architectural structures. Doing this autonomously often involves “map-localize-plan” techniques: first the drone’s location is determined on a map using GPS, and then control commands are produced based on that location.
A neural network that handles steering and collision prediction can complement those map-localize-plan techniques. The catch is that such a network needs to be trained on video from actual flying drones, and gathering that footage means many hours of flying drones at street level, putting vehicles and pedestrians at risk. To train their DroNet, researchers from the University of Zurich and the Universidad Politecnica de Madrid came up with safer sources for that video: footage recorded from cars and bicycles.
For the steering predictions, they used over 70,000 images and corresponding steering angles from the publicly available car driving data from Udacity’s Open Source Self-Driving project. For the collision predictions, they mounted a GoPro camera to the handlebars of a bicycle and rode around a city. Video recording began when the bicycle was distant from an object and stopped when it was very close to the object. In total, they collected 32,000 images.
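The write-up doesn’t spell out exactly how those bicycle frames were labeled, but a plausible scheme (our assumption, not the researchers’ published method) is to mark the final stretch of each approach as “collision” and everything before it as safe:

```python
# Hypothetical labeling sketch; the researchers' exact scheme isn't given here.
# Assume each recording runs from far away (start) to very close (end), and
# mark the final fraction of frames as "imminent collision".

def label_frames(frames, collision_fraction=0.2):
    """Return (frame, label) pairs: 1 = imminent collision, 0 = safe."""
    cutoff = int(len(frames) * (1 - collision_fraction))
    return [(f, 0 if i < cutoff else 1) for i, f in enumerate(frames)]

# Example: a 10-frame approach; the last 2 frames get label 1.
print(label_frames(list(range(10))))
```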
To use the trained network, images from the drone’s forward-facing camera were fed in, and the outputs were a steering angle and a probability of collision, the latter of which was turned into a velocity. The drone remained at a constant height above ground, though it worked well anywhere from 1.5 meters to 5 meters up. It successfully navigated road lanes and avoided moving pedestrians and bicycles. Intersections did confuse it, though, likely due to the open spaces throwing off the collision predictions. But we think that shouldn’t be a problem when paired with map-localize-plan techniques, since a direction through the intersection would be chosen for it using its location on the map.
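A minimal sketch of how those two outputs might become flight commands, following the general idea of slowing down as the collision probability rises (the speed limit, smoothing factor, and names below are our assumptions for illustration, not the paper’s exact controller):

```python
# Sketch: turn network outputs into smoothed flight commands.
# V_MAX, ALPHA, and the low-pass filtering are assumptions for illustration;
# the core idea is simply to fly slower as collision probability rises.

V_MAX = 1.0   # maximum forward speed, m/s (assumed)
ALPHA = 0.7   # smoothing factor for the low-pass filter (assumed)

def update_commands(steer_pred, p_collision, prev_velocity, prev_steer):
    """Map (steering angle, collision probability) to smoothed commands."""
    velocity = (1 - ALPHA) * prev_velocity + ALPHA * (1 - p_collision) * V_MAX
    steer = (1 - ALPHA) * prev_steer + ALPHA * steer_pred
    return velocity, steer

# Example: the network sees a likely obstacle ahead, so the drone slows down.
v, s = update_commands(steer_pred=0.1, p_collision=0.8,
                       prev_velocity=0.9, prev_steer=0.0)
print(f"velocity {v:.2f} m/s, steering {s:.2f} rad")
```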
As you can see in the video below, it not only does a decent job of flying down lanes, but it also flies well in a parking garage and a hallway, even though it wasn’t trained in either of those environments.
[Richard]’s project is based on the EOgma Neo machine learning library. The algorithm uses a type of machine learning known as Sparse Predictive Hierarchies, or SPH, and is first trained with user input. [Richard] trained the model by driving the car around a small track. The algorithm takes into account the steering and throttle inputs from the human driver while also monitoring the feed from the Raspberry Pi camera. After training for a few laps, the car is ready to drive itself.
Fundamentally, this is working on a much simpler level than a full-sized self-driving car. As the video indicates, the steering angle is predicted based on the grayscale pixel data from the camera feed. The track is very simple and the contrast of the walls to the driving surface makes it easier for the machine learning algorithm to figure out where it should be going. Watching the video feed reminds us of simple line-following robots of years past; this project achieves a similar effect in a completely different way. As it stands, it’s a great learning project on how to work with machine learning systems.
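To be clear, the sketch below is not the EOgma Neo/SPH API; it’s a generic illustration of the underlying idea, learning a mapping from grayscale camera pixels to a steering angle using human demonstrations, here done with a plain least-squares fit on made-up data:

```python
# Generic illustration only: this is NOT the EOgma Neo / SPH API.
# It sketches the project's underlying idea: learn a mapping from
# grayscale camera pixels to a steering angle from human demonstrations.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: 200 downsampled 16x16 grayscale frames, each
# paired with the steering angle the human driver applied at that moment.
frames = rng.random((200, 16 * 16))
steering = rng.uniform(-1.0, 1.0, size=200)

# Fit a simple linear model (least squares) from pixels to steering angle.
weights, *_ = np.linalg.lstsq(frames, steering, rcond=None)

def predict_steering(frame):
    """Predict a steering angle in [-1, 1] from a flattened grayscale frame."""
    return float(np.clip(frame @ weights, -1.0, 1.0))

print(predict_steering(frames[0]))
```

On a high-contrast track like [Richard]’s, even a crude model like this has a fighting chance, which is exactly why the simple course makes for a good learning project.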
Alphabet’s self-driving car offshoot, Waymo, feels that may be the case as they were recently granted a patent for vehicles that soften on impact. Sensors would identify an impending collision and adjust ‘tension members’ on the vehicle’s exterior to cushion the blow. These ‘members’ would be corrugated sections or moving panels that absorb the impact alongside the crumpling effect of the vehicle, making adjustments based on the type of obstacle the vehicle is about to strike.
How’s your parallel parking? It’s a scenario that many drivers dread to the point of avoidance. But this 360° ultrasonic sensor will put even the most skilled driver to shame, at least those who pilot tiny remote-controlled cars.
Watch the video below a few times and you’ll see that within the limits of the test system, [Dimitris Platis]’ “SonicDisc” sensor does a pretty good job of nailing the parallel parking problem, a driving skill so rare that car companies have spent millions developing vehicles that do it for you. The essential task is good spatial relations, and that’s where SonicDisc comes in. A circular array of eight HC-SR04 ultrasonic sensors hitched to an ATmega328P, the SonicDisc takes advantage of interrupts to make reading the eight sensors as fast as possible. The array can take a complete set of readings every 10 milliseconds, which is fast enough to allow for averaging successive readings to filter out some of the noise that gets returned. Talking to the car’s microcontroller over I2C, the sensor provides a wealth of ranging data that lets the car quickly complete a parallel parking maneuver. And as a bonus, SonicDisc is both open source and cheap to build — about $10 a copy.
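The filtering idea is easy to sketch (the function and constant names below are ours, not [Dimitris]’s firmware): keep a short history per sensor and report the average of the last few complete scans, which dilutes the occasional bogus echo.

```python
# Sketch of SonicDisc's filtering idea (names are ours, not the firmware's):
# average the last few complete 8-sensor scans to knock down echo noise.

from collections import deque

NUM_SENSORS = 8
HISTORY = 4  # number of 10 ms scans to average over (assumed)

history = [deque(maxlen=HISTORY) for _ in range(NUM_SENSORS)]

def add_scan(distances_cm):
    """Record one complete scan: one distance, in cm, per sensor."""
    for buf, d in zip(history, distances_cm):
        buf.append(d)

def filtered_distances():
    """Averaged distances per sensor, ready to ship over I2C."""
    return [sum(buf) / len(buf) for buf in history]

# Example: three scans, with a noisy spike on sensor 0 in the second scan.
add_scan([100, 80, 60, 40, 40, 60, 80, 100])
add_scan([250, 81, 59, 41, 39, 61, 79, 99])   # bogus echo on sensor 0
add_scan([101, 79, 61, 39, 41, 59, 81, 101])
print(filtered_distances())
```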
The [BBC] is reporting that driverless semi-trailer trucks, or as we call them in the UK, driverless lorries, are to be tested on UK roads. A contract has been awarded to the Transport Research Laboratory (TRL) for the trials. Initially the technology will be tested on closed tracks, but the trials are expected to move to major roads by the end of 2018.
All of these lorries will be manned and will drive in formations of up to three vehicles in single file. The lead vehicle will connect to the others wirelessly and control their braking and acceleration. Human drivers will still be present to steer the following lorries in the convoy.
This automation will allow the trucks to drive very close together, reducing drag for the following vehicles and improving fuel efficiency. “Platooning,” as these convoys are called, has been tested in a number of countries around the world, including the US, Germany, and Japan.
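The report doesn’t describe the control law, but the core idea, a lead vehicle broadcasting its braking and acceleration so followers can match it while holding a tight gap, might look something like this in sketch form (all gains, gaps, and names are our assumptions):

```python
# Sketch of a platoon follower's longitudinal control. All names and gains
# are assumptions; the trials' actual control law isn't public here.
# The follower applies the lead's broadcast acceleration, plus a small
# correction to hold the desired gap and match speed.

DESIRED_GAP_M = 8.0   # tight following distance enabled by automation (assumed)
GAP_GAIN = 0.5        # proportional gain on gap error (assumed)
SPEED_GAIN = 0.8      # proportional gain on speed error (assumed)

def follower_accel(lead_accel, lead_speed, my_speed, gap):
    """Acceleration command for a following lorry, in m/s^2."""
    gap_error = gap - DESIRED_GAP_M
    speed_error = lead_speed - my_speed
    return lead_accel + GAP_GAIN * gap_error + SPEED_GAIN * speed_error

# Example: lead brakes at -2 m/s^2; follower is 1 m too close and a bit fast,
# so it brakes slightly harder to open the gap back up.
print(follower_accel(lead_accel=-2.0, lead_speed=24.0, my_speed=24.5, gap=7.0))
```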
Are these actually autonomous vehicles? The question is folly when looking toward the future of “self-driving”. The transition to robot vehicles won’t happen in the blink of an eye, even if every technological barrier were suddenly solved, because it’s untenable for human drivers to suddenly share the road with vehicles that don’t have a human brain behind the wheel. These changes will happen incrementally. The lorry tests are akin to networked cruise control, but we can see a path that adds lane-drift warnings, steering correction, and more incremental automation until only the lead vehicle has a person behind the wheel.
There is a lot of interest in the self-driving industry right now, from the self-driving potato to autonomous delivery. We’d love to hear your vision of how automated delivery will sneak its way into our everyday lives. Tell us what you think in the comments below.
When I started the Automate the Freight series, my argument was that long before the vaunted day when we’ll be able to kick back and read the news or play a video game while our fully autonomous car whisks us to work, economic forces will dictate that automation will have already penetrated the supply chain. There’s much more money to be saved by carriers like FedEx and UPS cutting humans out of the loop while delivering parcels to homes and businesses than there is for car companies to make by peddling the comfort and convenience of driverless commuting.
But the other end of the supply chain is ripe for automation, too. For every smile-adorned Amazon package delivered, a whole bunch of waste needs to be toted away. Bag after bag of garbage needs to go somewhere else, and at least in the USA, municipalities are usually on the hook for the often nasty job, sometimes maintaining fleets of purpose-built trucks and employing squads of workers to make weekly pickups, or perhaps farming the work out to local contractors.
Either way you slice it, the costs for trash removal fall on the taxpayers, and as cities and towns look for ways to stretch those levies even further, there’s little doubt that automation of the waste stream will start to become more and more attractive. But what will it take to fully automate the waste removal process? And how long before the “garbage man” becomes the “garbage ‘bot”?
Potatoes deserve to roam the earth, so [Marek Baczynski] created the first self-driving potato, ushering in a new era of potato rights. Potato batteries have been around forever. Anyone who’s played Portal 2 knows that with a copper and zinc electrode, you can get a bit of current out of a potato. Tubers have been powering clocks for decades in science classrooms around the world. It’s time for something — revolutionary.
[Marek] knew that powering a timepiece wasn’t enough for his potato, so he picked up a Texas Instruments BQ25504 boost converter energy harvesting chip. A potato can output around 0.4 V at 0.6 mA. The BQ25504 uses this power to slowly charge a capacitor. Every fifteen minutes or so, enough energy is stored to power a motor for a short time. [Marek] built a car for his potato, or more fittingly, he built his potato into a car.
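The numbers check out as back-of-the-envelope math; the voltage, current, and cycle time are from the write-up, while the converter efficiency below is our guess:

```python
# Back-of-the-envelope check on the potato's power budget. The converter
# efficiency is an assumption; voltage, current, and cycle time come from
# the write-up.

V_POTATO = 0.4      # volts
I_POTATO = 0.6e-3   # amps
CYCLE_S = 15 * 60   # seconds between motor runs
EFFICIENCY = 0.5    # assumed boost-converter efficiency at these tiny inputs

power_w = V_POTATO * I_POTATO                  # 0.24 mW out of the potato
energy_j = power_w * CYCLE_S * EFFICIENCY      # usable energy per cycle
print(f"{power_w * 1000:.2f} mW harvested, ~{energy_j:.2f} J stored per cycle")
# ~0.11 J per cycle: enough to pulse a small motor for a moment, and no more.
```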
The starch-powered capacitor moves the potato car about 8 cm per cycle. Over the course of a day, the potato can travel around 7.5 meters. Not very far, but hey, that’s further than the average potato travels on its own power. Of course, any traveling potato needs a name, so [Marek] dubbed his new pet “Pontus”. Check out the video after the break to see the ultimate fate of poor Pontus.