Tesla’s Smart Summon – Gimmick Or Greatness?

Tesla has always aimed to position itself as part automaker, part tech company. Its unique selling point is that its vehicles feature cutting-edge technology not available from market rivals. The company has long touted its "full self-driving" technology, and regular over-the-air software updates have progressively unlocked new functionality in its cars over the years.

The latest “V10” update brought a new feature to the fore – known as Smart Summon. Allowing the driver to summon their car remotely from across a car park, this feature promises to be of great help on rainy days and when carrying heavy loads. Of course, the gulf between promises and reality can sometimes be a yawning chasm.

How Does It Work?

Holding the “Come To Me” button summons the vehicle to the user’s location. Releasing the button stops the car immediately.

Smart Summon is activated through the Tesla smartphone app. Users are instructed to check the vehicle’s surroundings and ensure they have line of sight to the vehicle when using the feature. This is combined with a 200 foot (61 m) hard limit, meaning that Smart Summon won’t deliver your car from the back end of a crowded mall carpark. Instead, it’s more suited to smaller parking areas with clear sightlines.

Once activated, the car will back out of its parking space and begin to crawl towards the user. The car moves only while the user holds down the button, and stops instantly the moment it is released. Using its suite of sensors to detect pedestrians and other obstacles, the vehicle is touted as being able to navigate the average parking environment and pick up its owner with ease.
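In control terms, that behaviour amounts to a dead-man's switch wrapped around a geofence: the car only creeps forward while the button is held, and refuses to operate beyond the 200 foot limit. The sketch below is a minimal illustration of that logic, not Tesla's actual implementation; every function and threshold name here is a placeholder invented for the example.

```python
import math

# Minimal sketch of a hold-to-operate ("dead man's switch") summon loop.
# None of these functions exist in Tesla's software; car and phone are
# placeholder objects invented purely to illustrate the behaviour above.

MAX_RANGE_M = 61.0  # the ~200 foot (61 m) hard limit


def summon_step(car, phone):
    """Advance the car by one control tick, or bring it to a halt."""
    if not phone.button_held():
        car.stop()  # releasing the button stops the car immediately
        return
    if math.dist(car.position(), phone.position()) > MAX_RANGE_M:
        car.stop()  # outside the geofence, refuse to move
        return
    if car.obstacle_detected():
        car.stop()  # pedestrians, kerbs, other vehicles
        return
    car.creep_towards(phone.position())  # otherwise inch towards the user
```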

No Plan Survives First Contact With The Enemy

With updates rolled out over the air, Tesla owners jumped at the chance to try out this new functionality. Almost immediately, a cavalcade of videos of the technology in action began appearing online. Many of them show that things rarely work as well in the field as they do in the lab.

As any driver knows, body language and communication are key to navigating a busy parking area. Whether it’s a polite nod, an instructional wave, or simply direct eye contact, humans have become well-rehearsed at self-managing the flow of traffic in parking areas. When several cars are trying to navigate the area at once, a confused human can negotiate with others to take turns to exit the jam. Unfortunately, a driverless car lacks all of these abilities.

This situation proved all too much for the Tesla, and the owner was forced to intervene.

A great example is this drone video of a Model 3 owner attempting a Smart Summon in a small linear carpark. Conditions are close to ideal – a sunny day, with little traffic, and a handful of well-behaved pedestrians. In the first attempt, the hesitation of the vehicle is readily apparent. After backing out of the space, the car simply remains motionless, as two human drivers are also attempting to navigate the area. After backing up further, the Model 3 again begins to inch forward, with seemingly little ability to choose between driving on the left or the right. Spotting the increasing frustration of the other road users, the owner is forced to walk to the car and take over. In a second attempt, the car is again flummoxed by an approaching car, and simply grinds to a halt, unable to continue. Communication between autonomous vehicles and humans is an active topic of research, and likely one that will need to be solved sooner rather than later to truly advance this technology.

Pulling straight out of a wide garage onto an empty driveway is a corner case they haven’t quite mastered yet.

An expensive repair bill, courtesy of Smart Summon.

Other drivers have had worse experiences. One owner had their Tesla drive straight into the wall of their garage, an embarrassing mistake even most learner drivers wouldn't make. Another had a scary near miss, when the Tesla seemingly failed to understand its lack of right of way. The human operator can be seen to recognise an SUV approaching at speed from the vehicle's left, but the Tesla fails to yield, only stopping at the very last minute. It's likely that the Smart Summon software doesn't have the ability to understand right of way in parking environments, where signage is minimal and it's largely left up to human intuition to figure out.

This is one reason why the line of sight requirement is key – had the user let go of the button when first noticing the approaching vehicle, the incident would have been avoided entirely. Much like other self-driving technologies, it’s not always clear how much responsibility still lies with the human in the loop, which can have dire results. And more to the point, how much responsibility should the user have, when he or she can’t know what the car is going to decide to do?

More amusingly, an Arizona man was caught chasing down a Tesla Model 3 in Phoenix, having spotted the vehicle rolling through the carpark without a driver behind the wheel. While the embarrassing incident ended without injury, it goes to show that until familiarity with this technology spreads, there's scope for misunderstandings to cause problems.

It’s Not All Bad, Though

Some users have had more luck with the feature. While it's primarily intended to summon the car to the user's GPS location, it can also be used to direct the car to a point within a 200 foot radius. In this video, a Tesla can be seen successfully navigating around a sparsely populated carpark, albeit with some trepidation. The vehicle appears to have difficulty initially understanding the structure of the area, first attempting a direct route before properly making its way around the curbed grass area. The progress is more akin to that of a basic line-following robot than an advanced autonomous vehicle. However, it does successfully avoid running down its owner, who attempts to walk in front of the moving vehicle to test its collision avoidance abilities. If you value your limbs, probably don't try this at home.
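That line-follower impression is consistent with a very greedy navigation strategy: aim straight at the target point, creep forward, and stop whenever something gets too close, only replanning when blocked. A toy version of such a loop, with invented function names and thresholds, might look like the following; it is purely illustrative and not how Tesla's planner actually works.

```python
import math

STOP_DISTANCE_M = 3.0  # assumed safety bubble around pedestrians and kerbs


def heading_to(pos, target):
    """Bearing from the car's current position to the target point, in radians."""
    return math.atan2(target[1] - pos[1], target[0] - pos[0])


def goto_step(car, target):
    """Greedy go-to-goal: no map of the carpark, just aim at the point and creep."""
    if car.nearest_obstacle_distance() < STOP_DISTANCE_M:
        car.stop()  # e.g. the owner stepping out in front of the car
        return
    car.steer_towards(heading_to(car.position(), target))
    car.creep_forward()
```

A loop like this naturally tries the direct route first and only discovers the kerbed grass island once its sensors flag it, which would explain the hesitant, wandering progress seen in the video.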

No, not like that!

Wanting to explore a variety of straightforward and oddball situations, [DirtyTesla] decided to give the tech a rundown himself. The first run in a quiet carpark is successful, albeit with the car weaving, reversing unnecessarily, and ignoring a stop sign. Later runs are more confident, with the car clearly choosing the correct lane to drive in, and stopping to check for cross traffic. Testing on a gravel driveway was also positive, with the car properly recognising the grass boundaries and driving around them. That is, until the fourth attempt, when the car gently runs off the road and comes to a stop in the weeds. Further tests show that dark conditions and heavy rain aren’t a show stopper for the system, but it’s still definitely imperfect in operation.

Reality Check

Fundamentally, there are plenty of examples out there suggesting this technology isn't ready for prime time. Unlike other driver-in-the-loop aids, such as parallel parking assists, users appear to place far more confidence in Smart Summon's ability to detect obstacles on its own, leading to many near misses and collisions.

If all it takes is a user holding a button down to drive a 4000 pound vehicle into a wall, perhaps this isn't the way to go. It draws parallels to users falling asleep on the highway when using Tesla's Autopilot – drivers are putting ultimate trust in a system that is, at best, only capable when used in combination with a human's careful oversight. But even then, how is the user supposed to know what the car sees? Tesla's tools seem to have a way of lulling users into a false sense of confidence, only to betray that confidence almost instantly, to the delight of YouTube viewers around the world.

While it's impossible to make anything truly foolproof, it would appear that Tesla has a ways to go to get Smart Summon up to scratch. Combine this with the fact that, in the vast majority of videos, it would have been far quicker for an able-bodied driver to simply walk to the vehicle and drive themselves, and it definitely appears to be more of a gimmick than a useful feature. If it can be improved, and limitations such as the line-of-sight and distance requirements can be lifted, it will quickly become a must-have item on luxury vehicles. That may yet be some years away, however. Watch this space, as it's unlikely other automakers will rest for long!

Uber Has An Autonomous Fatality

You have doubtlessly heard the news. A robotic Uber car in Arizona struck and killed [Elaine Herzberg] as she crossed the street. Details are sketchy, but preliminary reports indicate that the accident was unavoidable as the woman crossed the street suddenly from the shadows at night.

If and when more technical details emerge, we’ll cover them. But you can bet this is going to spark a lot of conversation about autonomous vehicles. Given that Hackaday readers are at the top of the technical ladder, it is likely that your thoughts on the matter will influence your friends, coworkers, and even your politicians. So what do you think?


Autopilots Don’t Kill Drivers, Humans Do

The US National Highway Traffic Safety Administration (NHTSA) report on the May 2016 fatal accident in Florida involving a Tesla Model S in Autopilot mode just came out (PDF). The verdict? “the Automatic Emergency Braking (AEB) system did not provide any warning or automated braking for the collision event, and the driver took no braking, steering, or other actions to avoid the collision.” The accident was a result of the driver’s misuse of the technology.

This places no blame on Tesla because the system was simply not designed to handle obstacles travelling at 90 degrees to the car. Because the truck that the Tesla plowed into was sideways to the car, “the target image (side of a tractor trailer) … would not be a “true” target in the EyeQ3 vision system dataset.” Other situations that are outside of the scope of the current state of technology include cut-ins, cut-outs, and crossing path collisions. In short, the Tesla helps prevent rear-end collisions with the car in front of it, but has limited side vision. The driver should have known this.
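To see why a crossing trailer falls outside that envelope, imagine a forward-collision check that only acts on targets classified as the rear of a lead vehicle travelling roughly along the car's own path. The snippet below is a toy stand-in for that kind of filtering, not the EyeQ3's actual logic; the classes and thresholds are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Target:
    aspect: str           # "rear", "side" or "front", as seen by the camera
    lateral_speed: float  # m/s across the car's path
    closing_speed: float  # m/s along the car's path


def is_true_target(t: Target) -> bool:
    """Toy 'true target' test for a rear-end-focused AEB system."""
    return t.aspect == "rear" and abs(t.lateral_speed) < 1.0 and t.closing_speed > 0.0


# A tractor trailer crossing the road presents its side and a large lateral
# speed, so a filter like this simply ignores it: the failure mode the
# report describes.
crossing_trailer = Target(aspect="side", lateral_speed=10.0, closing_speed=0.0)
assert not is_true_target(crossing_trailer)
```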

The NHTSA report concludes that “Advanced Driver Assistance Systems … require the continual and full attention of the driver to monitor the traffic environment and be prepared to take action to avoid crashes.” The report also mentions the recent (post-Florida) additions to Tesla’s Autopilot that help make sure that the driver is in the loop.

The takeaway is that humans are still responsible for their own safety, and that “Autopilot” is more like anti-lock brakes than it is like Skynet. Our favorite footnote, in carefully couched legalese: “NHTSA recognizes that other jurisdictions have raised concerns about Tesla’s use of the name “Autopilot”. This issue is outside the scope of this investigation.” (The banner image is from this German YouTube video where a Tesla rep in the back seat tells the reporter that he can take his hands off the wheel. There may be mixed signals here.)

There are other details that make the report worth reading if, like us, you would like to see some more data about how self-driving cars actually perform on the road. On one hand, Tesla's Autosteer function seems to have reduced the rate at which their cars got into crashes. On the other, increasing use of the driving assistance functions comes with an increase in driver inattention for durations of three seconds or longer.

People simply think that the Autopilot should do more than it actually does. Per the report, this problem of “driver misuse in the context of semi-autonomous vehicles is an emerging issue.” Whether technology will improve fast enough to protect us from ourselves is an open question.

[via Popular Science].

Automate The Freight: Robotic Deliveries Are On The Way

Seems like all the buzz about autonomous vehicles these days centers around self-driving cars. Hands-free transportation certainly has its appeal – being able to whistle up a ride with a smartphone app and converting commute time to Netflix binge time is an alluring idea. But is autonomous personal transportation really the killer app that everyone seems to think it is? Wouldn’t we get more bang for the buck by automating something a little more mundane and a lot more important? What about automating the shipping of freight?

Look around the next time you’re not being driven to work by a robot and you’re sure to notice a heck of a lot of trucks on the road. From small panel trucks making local deliveries to long-haul tractor trailers working cross-country routes, the roads are lousy with trucks. And behind the wheel of each truck is a human driver (or two, in the case of team-driven long-haul rigs). The drivers are the weak point in this system, and the big reason I think self-driving trucks will be commonplace long before we see massive market penetration of self-driving cars.


The Predictability Problem With Self-Driving Cars

A law professor and an engineering professor walk into a bar. What comes out is a nuanced article on a downside of autonomous cars, and how to deal with it. The short version of their paper: self-driving cars need to be more predictable to humans in order to coexist.

We share living space with a lot of machines. A good number of them are mobile and dangerous but under complete human control: the car, for instance. When we want to know what another car at an intersection is going to do, we think about the driver of the car, and maybe even make eye contact to see that they see us. We then think about what we’d do in their place, and the traffic situation gets negotiated accordingly.

When its self-driving car got into an accident in February, Google replied that “our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.” Apparently, so did the car, right before it drove out in front of an oncoming bus. The bus driver didn’t expect the car to pull (slowly) into its lane, either.

All of the other self-driving car accidents to date have been the fault of other drivers, and the authors think this is telling. If you unexpectedly brake all the time, you can probably expect to eventually get hit from behind. If people can’t read your car’s AI’s mind, you’re gonna get your fender bent.

The paper's solution is to make autonomous vehicles more predictable, and the authors mention a number of obvious approaches, from "I-sense-you" lights to inter-car communication. But then there are aspects we hadn't thought about: specific markings that indicate the AI's capabilities, for instance. A cyclist signalling a left turn would really like to know whether the car behind has the new bicyclist-handsignal-recognition upgrade before entering the lane. The ability to put your mind into the mind of the other car is crucial, and requires tons of information about the driver.
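One way to picture those suggestions is as a small, standardised "intent and capability" broadcast that nearby road users and their devices could read. No such standard message exists today; the fields below are purely illustrative of the kind of information the authors argue needs to be shared.

```python
import json

# Hypothetical intent/capability broadcast; not an existing V2X message format.
intent_message = {
    "vehicle_id": "example-123",
    "intent": "merging_left",          # what the AI is about to do
    "confidence": 0.7,                 # how committed it is to that plan
    "yielding_to": ["bus"],            # road users it believes it must yield to
    "capabilities": {
        "cyclist_hand_signal_recognition": True,  # the upgrade our cyclist cares about
        "pedestrian_detection": True,
    },
}

print(json.dumps(intent_message, indent=2))
```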

All of this may well require legislation. Intent and what all parties to an accident "should have known" are used in court to apportion blame, in addition to the black-and-white of the law. When one of the parties is an AI, this gets murkier. How should you know what the algorithm should have been thinking? This is far from a solved problem, and it's becoming more relevant.

We’ve written on the ethics of self-driving cars before, but simply in terms of their decision-making ability. This paper brings home the idea that we also need to be able to understand what they’re thinking, which is as much a human-interaction and legal problem as it is technological.

[Headline image: Google Self-Driving Car Project]

Self-Driving Acura, Built In A Garage

[George Hotz], better known by his hacker moniker [GeoHot], was the first person to successfully hack the iPhone — now he’s trying his hand at building his very own self-driving vehicle.

The 26-year-old already has an impressive rap sheet: he was the first to hack the PS3 when it came out, and was sued because of it.

According to Bloomberg reporter [Ashlee Vance], [George] built this self driving vehicle in around a month — which, if true, is pretty damn incredible. It’s a 2016 Acura ILX with a lidar array on its roof, as well as a few cameras. The glove box has been ripped out to house the electronics, including a mini-PC, GPS sensors, and network switches. A large 21.5″ LCD screen sits in the dash, not unlike the standard Tesla affair.

Oh, and it runs Linux.

The Ethics Of Self-Driving Cars Making Deadly Decisions

Self-driving cars are starting to pop up everywhere as companies slowly begin to test and improve them for the commercial market. Heck, Google's self-driving car actually has its very own driver's license in Nevada! There have been minimal accidents, and most of the time, they say it's not the autonomous cars' fault. But even when autonomous cars are widespread, there will still be accidents; it's inevitable. And what will happen when your car has to decide whether to save you, or a crowd of people? Ever think about that before?

It’s an extremely valid concern, and raises a huge ethical issue. In the rare circumstance that the car has to choose the “best” outcome — what will determine that? Reducing the loss of life? Even if it means crashing into a wall, mortally injuring you, the driver? Maybe car manufacturers will finally have to make ejection seats a standard feature!
