The leap to self-driving cars could be as game-changing as the one from horse power to engine power. If cars prove able to drive themselves better than humans do, the safety gains could be enormous: auto accidents were the #8 cause of death worldwide in 2016. And who doesn’t want to turn travel time into something either truly restful or truly productive?
But getting there is a big challenge, as Alfred Jones knows all too well. As Head of Mechanical Engineering at Lyft’s Level 5 self-driving division, he leads the team building the roof racks and other gear that give the vehicles their sensors and computational hardware. In his keynote talk at Hackaday Remoticon, Jones walks us through what each level of self-driving means, how the problem is being approached, and where the sticking points lie between what’s being tested now and a truly steering-wheel-free future.
Check out the video below, and take a deeper dive into the details of his talk.
Multirotor aircraft enjoy many intrinsic advantages, but as machines that fight gravity with brute force, they are not known for energy efficiency. In the interest of stretching range, several air-ground hybrid designs have been explored: flying cars, basically, built to run on the ground when it isn’t strictly necessary to be airborne. But they all share the same challenge: components that make a car work well on the ground are range-sapping dead weight in the air. [Youming Qin et al.] explored cutting that dead weight as much as possible and came up with Hybrid Aerial-Ground Locomotion with a Single Passive Wheel.
As the paper’s title makes clear, they went full minimalist with this design. Gone are the driveshaft, brakes, steering, even the other wheels. All that remains is a single unpowered wheel bolted to the bottom of their dual-rotor flying machine. Minimizing the impact on flight characteristics is great, but how does that work on the ground? As a tradeoff, the rotors have to keep spinning even in “ground mode”: they are responsible for keeping the machine upright, and they also have to handle tasks like steering. These and other control problems had to be sorted out before evaluating whether such a compromised ground vehicle is worth the trouble.
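To make that division of labor concrete, here is a toy sketch of what ground-mode stabilization could look like. This is an illustration of the idea only, not the controller from the paper: the PD gains, thrust levels, and sensor interface are all assumptions.

```python
# Toy ground-mode stabilizer for a dual-rotor vehicle resting on one
# passive wheel. The wheel bears the weight, so base thrust sits far
# below hover levels -- which is where the energy savings come from.
# Roll balance comes from differential thrust; steering (omitted here)
# would come from tilting the rotors. All gains and values are invented.

KP_ROLL, KD_ROLL = 4.0, 0.8   # assumed PD gains
BASE_THRUST = 0.25            # assumed fraction of max thrust, well below hover

def clamp(u, lo=0.0, hi=1.0):
    return max(lo, min(hi, u))

def ground_mode_mix(roll, roll_rate):
    """Map measured roll angle/rate (rad, rad/s) to (left, right)
    rotor thrust commands in [0, 1]."""
    balance = KP_ROLL * (0.0 - roll) + KD_ROLL * (0.0 - roll_rate)
    return clamp(BASE_THRUST - balance), clamp(BASE_THRUST + balance)
```

Run in a loop against IMU readings, something like this keeps the body level while the wheel rolls; the real system layers trajectory tracking and air/ground mode switching on top.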
Happily, the result is a resounding “yes”. Even though the rotors have to keep running to do different jobs while on the ground, that still takes far less effort than hovering in the air. Power consumption measurements indicate savings of up to 77%, and there are a lot of potential avenues for tuning still awaiting future exploration. Among them is a better understanding of the interaction with ground effect, which is something we’ve seen enable novel designs. This isn’t exactly the flying car we were promised, but its development will still be interesting to watch among all the other neat ideas under development to keep multirotors in the air longer.
Tesla has always aimed to position itself as part automaker, part tech company. Its unique offering is that its vehicles feature cutting-edge technology not available from their market rivals. The company has long touted its “full self-driving” technology, and regular software updates have progressively unlocked new functionality in its cars over the years.
The latest “V10” update brought a new feature to the fore – known as Smart Summon. Allowing the driver to summon their car remotely from across a car park, this feature promises to be of great help on rainy days and when carrying heavy loads. Of course, the gulf between promises and reality can sometimes be a yawning chasm.
How Does It Work?
Smart Summon is activated through the Tesla smartphone app. Users are instructed to check the vehicle’s surroundings and ensure they have line of sight to the vehicle when using the feature. This is combined with a 200 foot (61 m) hard limit, meaning that Smart Summon won’t deliver your car from the back end of a crowded mall carpark. Instead, it’s more suited to smaller parking areas with clear sightlines.
Once activated, the car will back out of its parking space and begin to crawl towards the user. The car moves only while the user holds down the button, stopping the instant it’s released. Using its suite of sensors to detect pedestrians and other obstacles, the vehicle is touted as able to navigate the average parking environment and pick up its owner with ease.
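Functionally, what’s described above is a dead-man switch wrapped around a geofence. The sketch below shows those two interlocks in Python; every name in it is hypothetical, since Tesla’s app and protocol aren’t public.

```python
import math

MAX_RANGE_M = 61.0  # the 200 ft (61 m) hard limit quoted above

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance (equirectangular projection) --
    plenty accurate at parking-lot scales."""
    r = 6_371_000  # Earth radius in meters
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def may_move(button_held, user_pos, car_pos):
    """One control-loop tick of hypothetical summon logic: the car may
    creep forward only while the button is held and the user is in range.
    A real system would also gate on obstacle detection."""
    if not button_held:
        return False  # dead-man switch: release means stop, instantly
    if distance_m(*user_pos, *car_pos) > MAX_RANGE_M:
        return False  # geofence: beyond 200 ft, stop
    return True
```

Note where the responsibility lands: the interlocks are trivial, and all the difficulty lives in the obstacle detection that gates them.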
No Plan Survives First Contact With The Enemy
With updates rolled out over the air, Tesla owners jumped at the chance to try out this new functionality. Almost immediately, a cavalcade of videos of the technology began appearing online. Many of them show that things rarely work as well in the field as they do in the lab.
As any driver knows, body language and communication are key to navigating a busy parking area. Whether it’s a polite nod, an instructional wave, or simply direct eye contact, humans have become well-rehearsed at self-managing the flow of traffic in parking areas. When several cars are trying to navigate the area at once, a confused human can negotiate with others to take turns to exit the jam. Unfortunately, a driverless car lacks all of these abilities.
A great example is this drone video of a Model 3 owner attempting a Smart Summon in a small linear carpark. Conditions are close to ideal – a sunny day, with little traffic, and a handful of well-behaved pedestrians. In the first attempt, the hesitation of the vehicle is readily apparent. After backing out of the space, the car simply remains motionless, as two human drivers are also attempting to navigate the area. After backing up further, the Model 3 again begins to inch forward, with seemingly little ability to choose between driving on the left or the right. Spotting the increasing frustration of the other road users, the owner is forced to walk to the car and take over. In a second attempt, the car is again flummoxed by an approaching car, and simply grinds to a halt, unable to continue. Communication between autonomous vehicles and humans is an active topic of research, and likely one that will need to be solved sooner rather than later to truly advance this technology.
Pulling straight out of a wide garage onto an empty driveway is a corner case they haven’t quite mastered yet.
Other drivers have had worse experiences. One owner had their Tesla drive straight into the wall of their garage, an embarrassing mistake even most learner drivers wouldn’t make. Another had a scary near miss, when the Tesla seemingly failed to understand its lack of right of way. The human operator can be seen to recognise an SUV approaching at speed from the vehicle’s left, but the Tesla fails to yield, only stopping at the very last minute. It’s likely that the Smart Summon software doesn’t have the ability to understand right of way in parking environments, where signage is minimal and it’s largely left up to human intuition to figure out.
This is one reason why the line of sight requirement is key – had the user let go of the button when first noticing the approaching vehicle, the incident would have been avoided entirely. Much like other self-driving technologies, it’s not always clear how much responsibility still lies with the human in the loop, which can have dire results. And more to the point, how much responsibility should the user have, when he or she can’t know what the car is going to decide to do?
More amusingly, an Arizona man was caught chasing down a Tesla Model 3 in Phoenix after spotting the vehicle rolling through the carpark without a driver behind the wheel. While the embarrassing incident ended without injury, it goes to show that until familiarity with this technology spreads, there’s scope for misunderstandings to cause problems.
It’s Not All Bad, Though
Some users have had more luck with the feature. While it’s primarily intended to summon the car to the user’s GPS location, it can also be used to direct the car to a point within a 200 foot radius. In this video, a Tesla can be seen successfully navigating around a sparsely populated carpark, albeit with some trepidation. The vehicle appears to have difficulty initially understanding the structure of the area, first attempting a direct route before properly making its way around the curbed grass area. The progress is more akin to a basic line-following robot than an advanced robotic vehicle. However, it does successfully avoid running down its owner, who attempts walking in front of the moving vehicle to test its collision avoidance abilities. If you value your limbs, probably don’t try this at home.
Wanting to explore a variety of straightforward and oddball situations, [DirtyTesla] decided to give the tech a rundown himself. The first run in a quiet carpark is successful, albeit with the car weaving, reversing unnecessarily, and ignoring a stop sign. Later runs are more confident, with the car clearly choosing the correct lane to drive in, and stopping to check for cross traffic. Testing on a gravel driveway was also positive, with the car properly recognising the grass boundaries and driving around them. That is, until the fourth attempt, when the car gently runs off the road and comes to a stop in the weeds. Further tests show that dark conditions and heavy rain aren’t a showstopper for the system, but it’s still definitely imperfect in operation.
Reality Check
Fundamentally, there are plenty of examples out there suggesting this technology isn’t ready for prime time. Unlike with other driver-in-the-loop aids, such as parallel parking assist, users appear to put far more confidence in Smart Summon’s ability to detect obstacles on its own, leading to many near misses and collisions.
If all it takes is a user holding a button down to drive a 4000 pound vehicle into a wall, perhaps this isn’t the way to go. It draws parallels to users falling asleep on the highway when using Tesla’s Autopilot: drivers are putting ultimate trust in a system that is, at best, only capable when used in combination with a human’s careful oversight. But even then, how is the user supposed to know what the car sees? Tesla’s tools seem to have a way of lulling users into a false sense of confidence, only to betray it almost instantly, to the delight of YouTube viewers around the world.
While it’s impossible to make anything truly foolproof, it would appear that Tesla has a ways to go to get Smart Summon up to scratch. Combine this with the fact that in the vast majority of videos it would have been far quicker for an able-bodied driver to simply walk to the vehicle and drive themselves, and it definitely appears to be more of a gimmick than a useful feature. If it can be improved, and limitations such as line-of-sight and distance can be relaxed, it could quickly become a must-have item on luxury vehicles. That may yet be some years away, however. Watch this space, as it’s unlikely other automakers will rest for long!
A simple robot that performs line-following or obstacle avoidance can fit all of its logic inside a single Arduino sketch. But as a robot’s autonomy increases, its corresponding software gets complicated very quickly. It won’t be long before diagnostic monitoring and logging comes in handy, or the desire to encapsulate feature areas and orchestrate how they work together. This is where tools like the Robot Operating System (ROS) come in, so we don’t have to keep reinventing these same wheels. And Open Robotics just released ROS 2 Dashing Diademata for all of us to use.
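To give a flavor of what that structure buys you, here is about the smallest useful ROS 2 node in Python. The topic name and message are arbitrary choices for the example, but the pattern is standard rclpy: once it’s running, any other node, logger, or command-line tool can discover and subscribe to the topic with no extra plumbing.

```python
# Minimal ROS 2 (rclpy) node: publish a status string once per second.
# Anything else on the ROS graph -- a logger, a dashboard,
# `ros2 topic echo /status` -- can consume it without this code
# knowing or caring who is listening.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Heartbeat(Node):
    def __init__(self):
        super().__init__('heartbeat')
        self.pub = self.create_publisher(String, 'status', 10)
        self.create_timer(1.0, self.tick)  # fire tick() every second

    def tick(self):
        msg = String()
        msg.data = 'robot alive'
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(Heartbeat())

if __name__ == '__main__':
    main()
```

That separation, with every feature living in its own node and communicating over named topics, is exactly the encapsulation and orchestration described above.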
ROS is an open source project that’s been underway since 2007 and updated regularly, with each release named after a turtle species. What makes this one worthy of extra attention? Dashing marks the first long-term support (LTS) release of ROS 2, a refreshed second generation of ROS. All the high-level concepts stayed the same, meaning almost everything in our ROS orientation guide is still applicable in ROS 2. But there were big changes under the hood, reflecting technical advances over the past decade.
ROS was built in an age when a Unix workstation cost thousands of dollars, XML was going to be how we communicated all data online, and an autonomous robot cost more than a high-end luxury car. Now we have a $35 Raspberry Pi running Linux, XML has fallen out of favor due to processing overhead, and some autonomous robots are high-end luxury cars. For these and many other reasons, the people of Open Robotics decided it was time to make a clean break from legacy code.
The break has its detractors, as it meant leaving behind the vast library of freely available robot intelligence modules released by researchers over the years. Popular ones were (or will be) ported to ROS 2, and there is a translation bridge sufficient to work with some of the others, but the rest will be left behind. However, this update also resolved many of the deal-breakers preventing adoption outside of research, making ROS more attractive for commercial investment, which should bring more robots mainstream.
Judging by responses to the release announcement, there are plenty of people eager to put ROS 2 to work, but it is not the only freshly baked robotics framework around. We just saw Nvidia release their Isaac Robot Engine tailored to make the most of their Jetson hardware.
A great place to get your feet wet with the data-network-wonderland that is modern-day automobiles is the Car Hacking Village at DEF CON. I stopped by on Saturday afternoon to see what it was all about and the place was packed. From Ducati motorcycles to junkyard instrument clusters, and from mobility scooters to autonomous RC test tracks, this feels like one of the most interactive villages in the whole con.
The news is full of self-driving cars, and while there are some bad stories, most of the coverage is pretty positive. It seems a foregone conclusion that it is just a matter of time before calling for an Uber doesn’t involve another person. But according to a recent article, [Ernst Dickmanns], a German aerospace engineer, built three autonomous vehicles starting in 1986, culminating in on-the-road demonstrations for Daimler in 1994.
It is hard to imagine what it took to get a self-driving car working in 1986. The article asserts that you need computer analysis of video at 10 frames per second, minimum. In the 1980s, processing a single frame in 10 minutes was considered an accomplishment. [Dickmanns’] vehicles borrowed tricks from how humans drive: they focused on a small area at any one moment and tried to ignore things that were not relevant.
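The size of that gap is worth pausing on; a back-of-the-envelope check using only the two figures quoted above:

```python
# How far off was mid-1980s hardware from Dickmanns' requirement?
needed = 1 / 10       # 10 frames per second -> 0.1 s per frame
typical = 10 * 60     # one frame per 10 minutes -> 600 s per frame
print(typical / needed)  # 6000.0 -- nearly four orders of magnitude
```

Attention-focusing wasn’t a nicety; it was the only plausible way to claw back a factor of several thousand.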
In one bad week in March, two people were indirectly killed by automated driving systems. A Tesla vehicle drove into a barrier, killing its driver, and an Uber vehicle hit and killed a pedestrian crossing the street. The National Transportation Safety Board’s preliminary reports on both accidents came out recently, and these bring us as close as we’re going to get to a definitive view of what actually happened. What can we learn from these two crashes?
There is one outstanding factor that makes these two crashes look different on the surface: Tesla’s algorithm misidentified a lane split and actively accelerated into the barrier, while the Uber system eventually correctly identified the cyclist crossing the street and probably had time to stop, but its emergency braking had been disabled. You might say that if the Tesla driver died from trusting the system too much, the Uber fatality arose from trusting the system too little.
But you’d be wrong. The forward-facing radar in the Tesla should have prevented the accident by seeing the barrier and slamming on the brakes, but the Tesla algorithm places more weight on the cameras than on the radar. Why? For exactly the same reason that the Uber emergency-braking system was turned off: there are “too many” false positives, and the result is that the cars would brake needlessly far too often under normal driving circumstances.
The crux of self-driving at the moment is precisely figuring out when to slam on the brakes and when not to. Brake too often, and the passengers are annoyed or the car gets rear-ended. Brake too infrequently, and the consequences can be worse. Indeed, this is the central problem of autonomous vehicle safety, and neither Tesla nor Uber has it figured out yet.
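Stripped to its essence, that judgment is a single threshold on detection confidence, with both failure modes hanging off it. Here’s a deliberately schematic sketch; the numbers and structure are invented, and neither Tesla’s nor Uber’s actual logic is public.

```python
# Schematic of the brake-decision tradeoff: one confidence threshold,
# two failure modes. Lower it and the car brakes for ghosts (and gets
# rear-ended); raise it and real obstacles get ignored -- the same
# pressure that led to radar being down-weighted and Uber's emergency
# braking being disabled. All values here are invented.
CONFIDENCE_THRESHOLD = 0.9  # the knob both companies were fighting over
TIME_TO_IMPACT_S = 2.0      # only brake for imminent threats

def should_brake(detections):
    """detections: iterable of (confidence, seconds_to_impact) pairs
    from the perception stack (hypothetical interface)."""
    return any(conf >= CONFIDENCE_THRESHOLD and tti <= TIME_TO_IMPACT_S
               for conf, tti in detections)
```

No setting of that threshold makes both failure modes vanish; only better perception shrinks the overlap between ghost and obstacle.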