On yet another one of those long, pointless road trips that seemed to punctuate my life starting when I got my license, I was plying the roads somewhere in eastern Pennsylvania with a friend. He told me that on long trips he’d often relieve the boredom by finding another car from the same state as his destination, and then just follow it. I wasn’t sure then how staring at the same car, hour after hour, mile after mile, would do anything but increase the boredom while making you look sort of creepy, but it seemed to work for him.
What works for college kids in cars also works for long-haul truckers, and the concept of a convoy has long been a fact of life on the road and a part of popular culture. Hardly a trip on the US Interstate goes by without seeing at least two truckers traveling in close formation, partly for companionship and mutual support but also for economic reasons. And now technology is poised to take convoying to the next level, as platooning becomes yet another way to automate the freight.
Before I got a license and a car, getting to and from high school was an ordeal. The hour-long bus ride was awful, as one would expect when sixty adolescents are crammed together with minimal supervision. Avoiding the realities going on around me was a constant chore, aided by frequent mental excursions. One such wandering led me to the conclusion that we high schoolers were nothing but cargo on a delivery truck designed for people. That was a cheery fact to face at the beginning of a school day.
What’s true for a bus full of students is equally true for every city bus, trolley, subway, or long-haul motorcoach you see. People can be freight just as much as pallets of groceries in a semi or a bunch of smiling boxes and envelopes in a brown panel truck. And the same economic factors that we’ve been insisting will make it far more likely that autonomous vehicles will penetrate the freight delivery market before we see self-driving passenger vehicles are at work in moving people, too. This time on Automate the Freight: what happens when the freight is people?
It should come as no surprise that we here at Hackaday are big boosters of autonomous systems like self-driving vehicles. That’s not to say we’re without a healthy degree of skepticism, and indeed, the whole point of the “Automate the Freight” series is that economic forces will create powerful incentives for companies to build out automated delivery systems before they can afford to capitalize on demand for self-driving passenger vehicles. There’s a path to the glorious day when you can (safely) nap on the way to work, but that path will be paved by shipping and logistics companies with far deeper pockets than the average commuter.
So it was with some interest that we saw a flurry of announcements in the popular press recently regarding automated deliveries. Each by itself wouldn’t be worthy of much attention; companies are always maneuvering to be seen as ahead of the curve on coming trends, and often show off glitzy, over-produced videos and well-crafted press releases as a low-effort way to position themselves as well as to test markets. But seeing three announcements at one time was unusual, and may point to a general feeling by manufacturers that automated deliveries are just around the corner. Plus, each story highlighted advancements in areas specifically covered by “Automate the Freight” articles, so it seemed like a perfect time to review them and perhaps toot our own horn a bit.
Increasingly these days, drones are being used for urban surveillance, delivery, and inspecting architectural structures. Doing this autonomously often involves “map-localize-plan” techniques: first, the drone’s location is determined on a map using GPS, and then control commands are produced based on that location.
A neural network that predicts steering and collisions can complement those map-localize-plan techniques. However, such a network needs to be trained on video taken from actual flying drones, and generating that training video would involve many hours of flying drones at street level, putting vehicles and pedestrians at risk. To train their DroNet, researchers from the University of Zurich and the Universidad Politecnica de Madrid have come up with safer sources for that video: footage recorded from cars and bicycles.
For the drone steering predictions, they used over 70,000 images and corresponding steering angles from the publicly available car driving data from Udacity’s Open Source Self-Driving project. For the collision predictions, they mounted a GoPro camera to the handlebars of a bicycle and rode around a city. Video recording began when the bicycle was distant from an object and stopped when very close to the object. In total, they collected 32,000 images.
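The bicycle footage becomes collision training data by labeling each frame according to how far the rider was from the obstacle when it was captured. A minimal sketch of that idea in Python, where the one-meter threshold and the frame/distance pairing are our illustrative assumptions, not values from the paper:

```python
# Hypothetical labeling scheme: frames recorded far from an obstacle
# become negative (no-collision) examples, frames recorded near it
# become positives. The threshold is an assumption for illustration.

def label_frames(frames, near_threshold_m=1.0):
    """Assign a binary collision label to each (image, distance) pair."""
    labeled = []
    for image, distance_m in frames:
        collision = 1 if distance_m <= near_threshold_m else 0
        labeled.append((image, collision))
    return labeled

# Example: three frames as the bicycle approaches a parked car.
ride = [("frame_000.jpg", 12.0), ("frame_120.jpg", 3.5), ("frame_240.jpg", 0.6)]
labels = label_frames(ride)  # only the last frame is a positive
```

The appeal of the scheme is that no human has to annotate individual frames; the start/stop convention of the recording does the labeling implicitly.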
To use the trained network, images from the drone’s forward-facing camera were fed into it, and the output was a steering angle and a probability of collision, which was turned into a velocity. The drone remained at a constant height above ground, though it worked well anywhere from 1.5 to 5 meters up. It successfully navigated road lanes and avoided moving pedestrians and bicycles. Intersections did confuse it, though, likely because the open spaces threw off the collision predictions. But we think that shouldn’t be a problem when paired with map-localize-plan techniques, since a direction through the intersection would be chosen for it using its location on the map.
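The mapping from network outputs to flight commands can be sketched simply: scale the forward velocity down as the collision probability rises, and low-pass filter it so the drone doesn’t lurch. The smoothing factor and maximum speed below are our assumptions, not the DroNet paper’s exact values:

```python
# Sketch of the control step: the network's two outputs (steering
# angle, collision probability) become drone commands. v_max and
# alpha are illustrative assumptions.

def control_step(steer_pred, p_collision, prev_velocity,
                 v_max=1.0, alpha=0.7):
    """Map network outputs to a (steering, forward velocity) command.

    Velocity shrinks toward zero as collision probability rises, and
    is blended with the previous command to avoid jerky flight.
    """
    target_v = v_max * (1.0 - p_collision)
    velocity = (1.0 - alpha) * prev_velocity + alpha * target_v
    return steer_pred, velocity

# Open lane: low collision probability, near full speed.
steer, v = control_step(0.05, 0.1, prev_velocity=0.9)
# Pedestrian ahead: probability spikes, commanded velocity drops.
steer2, v2 = control_step(0.0, 0.95, prev_velocity=v)
```

A stop therefore emerges naturally: as the collision probability approaches one, the commanded velocity decays toward zero over a few control cycles.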
As you can see in the video below, it not only does a decent job of flying down lanes but it also flies well in a parking garage and a hallway, even though it wasn’t trained for either of these.
[Richard]’s project is based on the EOgma Neo machine learning library. Using a type of machine learning known as Sparse Predictive Hierarchies, or SPH, the algorithm is first trained with user input. [Richard] trained the model by driving it around a small track. The algorithm takes into account the steering and throttle inputs from the human driver and also monitors the feed from the Raspberry Pi camera. After training the model for a few laps, the car is then ready to drive itself.
Fundamentally, this is working on a much simpler level than a full-sized self-driving car. As the video indicates, the steering angle is predicted based on the grayscale pixel data from the camera feed. The track is very simple and the contrast of the walls to the driving surface makes it easier for the machine learning algorithm to figure out where it should be going. Watching the video feed reminds us of simple line-following robots of years past; this project achieves a similar effect in a completely different way. As it stands, it’s a great learning project on how to work with machine learning systems.
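In fact, the high-contrast track means the steering-from-grayscale idea can be sketched without any learning at all: if the walls are bright and the floor dark, the bright mass in the image leans toward whichever wall is closer, and steering away from it keeps the car centered. This toy centroid approach is purely our illustration, not the SPH algorithm [Richard] uses:

```python
# Toy illustration: derive a steering value from one row of grayscale
# pixels by finding the brightness-weighted horizontal centroid.
# Assumes bright walls on a dark floor, as on [Richard]'s track.

def steer_from_row(gray_row):
    """Return a steering value in [-1, 1] from a row of pixel values.

    Positive means steer right, away from bright mass on the left.
    """
    total = sum(gray_row)
    if total == 0:
        return 0.0  # nothing visible: hold course
    center = (len(gray_row) - 1) / 2.0
    # Brightness-weighted horizontal centroid of the row.
    centroid = sum(i * v for i, v in enumerate(gray_row)) / total
    return (center - centroid) / center

# A wall looming on the left half of the image yields a rightward steer.
row = [200, 180, 150, 40, 20, 10, 5, 0]
command = steer_from_row(row)  # positive: steer right
```

The learned SPH model goes further by predicting steering from the whole frame and the driver’s demonstrations, but the underlying signal it exploits is much the same.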
Alphabet’s self-driving car offshoot, Waymo, feels that may be the case as they were recently granted a patent for vehicles that soften on impact. Sensors would identify an impending collision and adjust ‘tension members’ on the vehicle’s exterior to cushion the blow. These ‘members’ would be corrugated sections or moving panels that absorb the impact alongside the crumpling effect of the vehicle, making adjustments based on the type of obstacle the vehicle is about to strike.
How’s your parallel parking? It’s a scenario that many drivers dread to the point of avoidance. But this 360° ultrasonic sensor will put even the most skilled driver to shame, at least those who pilot tiny remote-controlled cars.
Watch the video below a few times and you’ll see that within the limits of the test system, [Dimitris Platis]’ “SonicDisc” sensor does a pretty good job of nailing the parallel parking problem, a driving skill so rare that car companies have spent millions developing vehicles that do it for you. The essential skill is good spatial awareness, and that’s where SonicDisc comes in. A circular array of eight HC-SR04 ultrasonic sensors hitched to an ATmega328P, the SonicDisc takes advantage of interrupts to make reading the eight sensors as fast as possible. The array can take a complete set of readings every 10 milliseconds, which is fast enough to allow for averaging successive readings to filter out some of the noise that gets returned. Talking to the car’s microcontroller over I2C, the sensor provides a wealth of ranging data that lets the car quickly complete a parallel parking maneuver. And as a bonus, SonicDisc is both open source and cheap to build — about $10 a copy.
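The averaging trick described above is simple to sketch: keep the last few complete rings of eight readings and report the per-sensor mean, so a single spurious echo gets diluted. The window size here is our assumption about a reasonable value, not SonicDisc’s exact firmware behavior (which runs in C on the ATmega328P):

```python
# Sketch of smoothing successive rings of ultrasonic readings.
# Window length is an illustrative assumption; the real SonicDisc
# firmware is AVR C, and this just shows the filtering idea.

from collections import deque

class RingFilter:
    def __init__(self, window=4):
        self.history = deque(maxlen=window)  # recent rings of readings

    def update(self, readings_cm):
        """Add one ring of eight distances; return per-sensor averages."""
        self.history.append(readings_cm)
        n = len(self.history)
        return [sum(ring[i] for ring in self.history) / n
                for i in range(len(readings_cm))]

f = RingFilter()
f.update([50, 50, 120, 200, 200, 120, 60, 50])
# A noisy spike on sensor 2 (400 cm) is diluted by the average:
smoothed = f.update([50, 50, 400, 200, 200, 120, 60, 50])
```

Since a full ring arrives every 10 milliseconds, even a four-ring window lags the world by only about 40 milliseconds, which is plenty fast for a parking maneuver.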