For years now we have been told that self-driving cars will be the Next Big Thing, and we’ve seen some companies — yes, Tesla but others too — touting current and planned features with names like “Autopilot” and “self-driving”. Cutting through the marketing hype to unpack what that really means is difficult. But there is a standard for describing these capabilities, which assigns them to levels from zero to five.
Now we’re greeted with the news that Honda have put a small number of vehicles in showrooms in Japan that are claimed to be the first commercially available level 3 autonomous cars. That claim is debatable; Audi, for example, briefly offered level 3 capability on one of its luxury sedans, despite there being few places where it could legally be used. But the Honda Legend SENSING Elite can justifiably claim to be the only car currently on sale to the general public with the feature. It has a battery of sensors to keep track of its driver, its position, and the road conditions around it. The car boasts a “Traffic Jam Pilot” mode, which “enables the automated driving system to drive the vehicle under certain conditions, instead of the driver, such as when the vehicle is in congested traffic on an expressway”.
Sounds impressive, but just what is a level 3 autonomous car, and what are all the other levels?
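As a rough cheat sheet (our own paraphrase of the SAE J3016 definitions, not the standard’s exact wording), the levels break down something like this:

```python
# A paraphrase of the SAE J3016 driving automation levels -- shorthand only,
# the standard itself contains many more conditions and caveats.
SAE_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: steering OR speed is assisted (e.g. adaptive cruise).",
    2: "Partial automation: steering AND speed are handled, but the human must supervise at all times.",
    3: "Conditional automation: the car drives itself under limited conditions; the human must take over when asked.",
    4: "High automation: no human fallback needed, but only within a defined domain (certain roads, weather, speeds).",
    5: "Full automation: the car can drive itself anywhere a human driver could.",
}

for level, meaning in SAE_LEVELS.items():
    print(f"Level {level}: {meaning}")
```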
The notion of self-driving cars isn’t new. You might be surprised at the number of such projects dating back to the 1920s. Many of these systems relied on external aids built into the roadways. It’s only recently that self-driving cars on existing roadways have become closer to reality than fiction — increased processing power, smaller and more power-efficient computers, and compact Lidar and millimeter-wave Radar sensors are but a few of the enabling technologies. In South Korea, [Prof Min-hong Han] and his team of students took advantage of these advances and built an autonomous car which successfully navigated the streets of Seoul in several field trials. A second version subsequently drove itself along the 300 km journey from Seoul to the southern port city of Busan. You might think this is boring news, until you realize this was accomplished back in the early 1990s using an Intel 386-powered desktop computer.
The project created a lot of buzz at the time, and was shown at the Daejeon Expo ’93 international exposition. Alas, the government eventually decided to cancel the research program, as it didn’t fit into their focus on heavy industries like ship building and steel production. Given the tremendous focus on self-driving and autonomous vehicles today, and with the benefit of hindsight, we wonder if that was the best choice. This isn’t the only decision from Seoul that seems questionable when viewed from the present — Samsung executives famously declined to buy Andy Rubin’s new operating system for digital cameras and handsets back in late 2004, and a few weeks later Android was purchased by Google.
You should check out [Prof Han]’s YouTube channel, which has videos from the car’s camera as it operates in various conditions, overlaid with lane-recognition markers and other information. I’ve driven the streets of Seoul, and that alone can be a frightening experience. But [Han] manages to stretch out in the back seat, so confident in his system that he doesn’t even wear a seatbelt.
There comes a moment when our project sees the light of day, publicly presented to people who are curious to see the results of all our hard work, only for it to fail in a spectacularly embarrassing way. This is the dreaded “Demo Curse” and it recently befell the SIT Acronis Autonomous team. Their Roborace car gained social media infamy as it was seen launching off the starting line and immediately into a wall. A team member explained what happened.
A few explanations had started circulating, but only in the vague terms of a “steering lock” without much technical detail until this emerged. Steering lock? You mean like The Club? Well, sort of. While there was no steel bar immobilizing the steering wheel, a software equivalent did take hold within the car’s systems. During initialization, while a human driver was at the controls, one of the modules sent out NaN (Not a Number) instead of a valid numeric value. This was never seen in testing, and it wreaked havoc at the worst possible time.
A module whose job was to ensure numbers stay within expected bounds said “not a number, not my problem!” That NaN value propagated through to the vehicle’s CAN data bus, which had no defined handling for NaN, so the value was arbitrarily translated into a very large number, causing further problems. This cascade of events resulted in the steering control system being locked to full right before the algorithm was given permission to start driving. It desperately tried to steer the car back on course, without effect, for the few short seconds until it met the wall.
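To see how a NaN can sail straight past a bounds check, here’s a minimal Python sketch (purely illustrative, nothing to do with the team’s actual code): every comparison involving NaN evaluates to false, so a naive clamp never fires, while an explicit finiteness check catches it.

```python
import math

STEER_MIN, STEER_MAX = -1.0, 1.0  # hypothetical normalized steering range

def clamp_steering_naive(cmd: float) -> float:
    """A bounds check that looks safe but lets NaN straight through,
    because every comparison with NaN evaluates to False."""
    if cmd < STEER_MIN:
        return STEER_MIN
    if cmd > STEER_MAX:
        return STEER_MAX
    return cmd  # NaN lands here and gets passed on downstream

def clamp_steering_safe(cmd: float, fallback: float = 0.0) -> float:
    """Check for NaN and infinities explicitly before range-checking."""
    if not math.isfinite(cmd):
        return fallback  # or raise / trigger a safe stop, depending on the system
    return max(STEER_MIN, min(STEER_MAX, cmd))

print(clamp_steering_naive(float("nan")))  # nan -- propagates downstream
print(clamp_steering_safe(float("nan")))   # 0.0 -- held at a safe value
```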
While embarrassing and not the kind of publicity the Schaffhausen Institute of Technology or their sponsor Acronis was hoping for, the team dug through the logs to understand what happened and taught their car to handle NaN properly. Round two, run with a backup car, went very well and the team took second place. So they had a happy ending after all. Congratulations! We’re very happy this problem was found and fixed on a closed track and not on public roads.
Self-driving technology is a holy grail that promises to forever change the way we interact with cars. Thus far, there’s been plenty of hype and excitement, but vehicles that fully remove the driver from the equation have remained far off. Tesla have long positioned themselves as a market leader in this area, with their Autopilot technology allowing some limited autonomy on select highways. However, in a recent announcement, they have heralded the arrival of a new “Full Self Driving” ability for select beta testers in their early access program.
Taking Things Up A Notch
The new software update further extends the capabilities of Tesla vehicles to drive semi-autonomously. Despite the boastful “Full Self Driving” moniker, or FSD for short, it’s still classified as a Level 2 driving automation system, which relies on human intervention as a backup. This means the driver must be paying attention and ready to take over in an instant. Users are instructed to keep their hands on the wheel at all times, but predictably, videos have already surfaced of users ignoring this instruction.
The major difference between FSD and the previous Autopilot software is the ability to navigate city streets. Formerly, Tesla vehicles were only able to self-drive on highways, where the more regular flow of traffic is easier to handle. City streets introduce far greater complexity, with hazards like parked cars, pedestrians, bicycles, and complicated intersections. Unlike others in the field, who are investing heavily in LIDAR technology, Tesla’s system relies entirely on cameras and radar to navigate the world around it.
Tesla have always aimed to position themselves as part automaker, part tech company. Their unique offering is that their vehicles feature cutting-edge technology not available from their market rivals. The company has long touted its “full self-driving” technology, and regular software updates have progressively unlocked new functionality in their cars over the years.
The latest “V10” update brought a new feature to the fore – known as Smart Summon. Allowing the driver to summon their car remotely from across a car park, this feature promises to be of great help on rainy days and when carrying heavy loads. Of course, the gulf between promises and reality can sometimes be a yawning chasm.
How Does It Work?
Smart Summon is activated through the Tesla smartphone app. Users are instructed to check the vehicle’s surroundings and ensure they have line of sight to the vehicle when using the feature. This is combined with a 200 foot (61 m) hard limit, meaning that Smart Summon won’t deliver your car from the back end of a crowded mall carpark. Instead, it’s more suited to smaller parking areas with clear sightlines.
Once activated, the car will back out of its parking space and begin to crawl towards the user. The car moves only while the user holds down the button, and stops immediately when it is released. Using its suite of sensors to detect pedestrians and other obstacles, the vehicle is touted as being able to navigate the average parking environment and pick up its owners with ease.
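For a rough mental model of that hold-to-move behaviour, here’s a hypothetical sketch (our own illustration, not Tesla’s implementation) of a dead-man-switch control loop: the car only creeps while the button is held, the user is within the 200 foot limit, and the sensors report a clear path.

```python
import math

MAX_SUMMON_RANGE_M = 61.0  # the 200 foot (61 m) hard limit described above

def distance_m(a, b):
    """Planar distance in metres between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def summon_step(button_held: bool, user_pos, car_pos, path_clear: bool) -> str:
    """One tick of a dead-man-switch control loop."""
    if not button_held:
        return "STOP"          # releasing the button halts the car immediately
    if distance_m(user_pos, car_pos) > MAX_SUMMON_RANGE_M:
        return "STOP"          # outside the hard range limit
    if not path_clear:
        return "STOP"          # obstacle or pedestrian detected by the sensors
    return "CREEP_TOWARD_USER"

# Example: user 40 m away, button held, path reported clear -> keep creeping
print(summon_step(True, (0.0, 0.0), (40.0, 0.0), True))
```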
No Plan Survives First Contact With The Enemy
With updates rolled out over the air, Tesla owners jumped at the chance to try out the new functionality. Almost immediately, a cavalcade of videos of the technology began appearing online. Many of them show that things rarely work as well in the field as they do in the lab.
As any driver knows, body language and communication are key to navigating a busy parking area. Whether it’s a polite nod, an instructional wave, or simply direct eye contact, humans have become well-rehearsed at self-managing the flow of traffic in parking areas. When several cars are trying to navigate the area at once, a confused human can negotiate with others to take turns to exit the jam. Unfortunately, a driverless car lacks all of these abilities.
A great example is this drone video of a Model 3 owner attempting a Smart Summon in a small linear carpark. Conditions are close to ideal – a sunny day, with little traffic, and a handful of well-behaved pedestrians. In the first attempt, the hesitation of the vehicle is readily apparent. After backing out of the space, the car simply remains motionless, as two human drivers are also attempting to navigate the area. After backing up further, the Model 3 again begins to inch forward, with seemingly little ability to choose between driving on the left or the right. Spotting the increasing frustration of the other road users, the owner is forced to walk to the car and take over. In a second attempt, the car is again flummoxed by an approaching car, and simply grinds to a halt, unable to continue. Communication between autonomous vehicles and humans is an active topic of research, and likely one that will need to be solved sooner rather than later to truly advance this technology.
Pulling straight out of a wide garage onto an empty driveway is a corner case they haven’t quite mastered yet.
Other drivers have had worse experiences. One owner had their Tesla drive straight into the wall of their garage, an embarrassing mistake even most learner drivers wouldn’t make. Another had a scary near miss, when the Tesla seemingly failed to understand its lack of right of way. The human operator can be seen to recognise an SUV approaching at speed from the vehicle’s left, but the Tesla fails to yield, only stopping at the very last minute. It’s likely that the Smart Summon software doesn’t have the ability to understand right of way in parking environments, where signage is minimal and it’s largely left up to human intuition to figure out.
This is one reason why the line-of-sight requirement is key – had the user let go of the button when first noticing the approaching vehicle, the incident would have been avoided entirely. As with other self-driving technologies, it’s not always clear how much responsibility still lies with the human in the loop, which can have dire results. And more to the point, how much responsibility should the user have, when he or she can’t know what the car is going to decide to do?
More amusingly, an Arizona man was caught chasing down a Tesla Model 3 in Phoenix after seeing the vehicle rolling through the carpark without a driver behind the wheel. While the embarrassing incident ended without injury, it goes to show that until familiarity with this technology spreads, there’s scope for misunderstandings to cause problems.
It’s Not All Bad, Though
Some users have had more luck with the feature. While it’s primarily intended to summon the car to the user’s GPS location, it can also be used to direct the car to a point within a 200 foot radius. In this video, a Tesla can be seen successfully navigating around a sparsely populated carpark, albeit with some trepidation. The vehicle appears to have difficulty initially understanding the structure of the area, first attempting a direct route before properly making its way around the curbed grass area. The progress is more akin to a basic line-following robot than an advanced robotic vehicle. However, it does successfully avoid running down its owner, who attempts walking in front of the moving vehicle to test its collision avoidance abilities. If you value your limbs, probably don’t try this at home.
Wanting to explore a variety of straightforward and oddball situations, [DirtyTesla] decided to give the tech a rundown himself. The first run in a quiet carpark is successful, albeit with the car weaving, reversing unnecessarily, and ignoring a stop sign. Later runs are more confident, with the car clearly choosing the correct lane to drive in, and stopping to check for cross traffic. Testing on a gravel driveway was also positive, with the car properly recognising the grass boundaries and driving around them. That is, until the fourth attempt, when the car gently runs off the road and comes to a stop in the weeds. Further tests show that dark conditions and heavy rain aren’t a show stopper for the system, but it’s still definitely imperfect in operation.
Reality Check
Fundamentally, there are plenty of examples out there suggesting this technology isn’t ready for prime time. Unlike other driver-in-the-loop aids, like parallel parking assists, it appears that users put far more confidence in the ability of Smart Summon to detect obstacles on its own, leading to many near misses and collisions.
If all it takes is a user holding a button down to drive a 4000 pound vehicle into a wall, perhaps this isn’t the way to go. It draws parallels to users falling asleep on the highway when using Tesla’s Autopilot – drivers are putting ultimate trust in a system that is, at best, only capable when used in combination with a human’s careful oversight. But even then, how is the user supposed to know what the car sees? Tesla’s tools seem to have a way of lulling users into a false sense of confidence, only to betray them almost instantly, to the delight of YouTube viewers around the world.
While it’s impossible to make anything truly foolproof, it would appear that Tesla has a ways to go to get Smart Summon up to scratch. Combine this with the fact that in 90% of videos it would have been far quicker for an able-bodied driver to simply walk to the vehicle and drive it themselves, and the feature definitely appears to be more of a gimmick than a useful one. If it can be improved, and limitations such as line of sight and distance can be relaxed, it could quickly become a must-have item on luxury vehicles. That may yet be some years away, however. Watch this space, as it’s unlikely other automakers will rest for long!
You have doubtlessly heard the news. A robotic Uber car in Arizona struck and killed [Elaine Herzberg] as she crossed the street. Details are sketchy, but preliminary reports indicate that the accident was unavoidable as the woman crossed the street suddenly from the shadows at night.
If and when more technical details emerge, we’ll cover them. But you can bet this is going to spark a lot of conversation about autonomous vehicles. Given that Hackaday readers are at the top of the technical ladder, it is likely that your thoughts on the matter will influence your friends, coworkers, and even your politicians. So what do you think?
The US National Highway Traffic Safety Administration (NHTSA) report on the May 2016 fatal accident in Florida involving a Tesla Model S in Autopilot mode just came out (PDF). The verdict? “the Automatic Emergency Braking (AEB) system did not provide any warning or automated braking for the collision event, and the driver took no braking, steering, or other actions to avoid the collision.” The accident was a result of the driver’s misuse of the technology.
This places no blame on Tesla because the system was simply not designed to handle obstacles travelling at 90 degrees to the car. Because the truck that the Tesla plowed into was sideways to the car, “the target image (side of a tractor trailer) … would not be a “true” target in the EyeQ3 vision system dataset.” Other situations that are outside of the scope of the current state of technology include cut-ins, cut-outs, and crossing path collisions. In short, the Tesla helps prevent rear-end collisions with the car in front of it, but has limited side vision. The driver should have known this.
The NHTSA report concludes that “Advanced Driver Assistance Systems … require the continual and full attention of the driver to monitor the traffic environment and be prepared to take action to avoid crashes.” The report also mentions the recent (post-Florida) additions to Tesla’s Autopilot that help make sure that the driver is in the loop.
The takeaway is that humans are still responsible for their own safety, and that “Autopilot” is more like anti-lock brakes than it is like Skynet. Our favorite footnote, in carefully couched legalese: “NHTSA recognizes that other jurisdictions have raised concerns about Tesla’s use of the name “Autopilot”. This issue is outside the scope of this investigation.” (The banner image is from this German YouTube video where a Tesla rep in the back seat tells the reporter that he can take his hands off the wheel. There may be mixed signals here.)
There are other details that make the report worth reading if, like us, you would like to see some more data about how self-driving cars actually perform on the road. On one hand, Tesla’s Autosteer function seems to have reduced the rate at which their cars got into crashes. On the other, increasing use of the driving assistance functions comes with an increase in driver inattention for durations of three seconds or longer.
People simply think that the Autopilot should do more than it actually does. Per the report, this problem of “driver misuse in the context of semi-autonomous vehicles is an emerging issue.” Whether technology will improve fast enough to protect us from ourselves is an open question.