Tesla’s Smart Summon – Gimmick Or Greatness?

Tesla has always aimed to position itself as part automaker, part tech company. Its unique selling point is that its vehicles feature cutting-edge technology not available from market rivals. The company has long touted its “full self-driving” technology, and regular software updates have progressively unlocked new functionality in its cars over the years.

The latest “V10” update brought a new feature to the fore – known as Smart Summon. Allowing the driver to summon their car remotely from across a car park, this feature promises to be of great help on rainy days and when carrying heavy loads. Of course, the gulf between promises and reality can sometimes be a yawning chasm.

How Does It Work?

Holding the “Come To Me” button summons the vehicle to the user’s location. Releasing the button stops the car immediately.

Smart Summon is activated through the Tesla smartphone app. Users are instructed to check the vehicle’s surroundings and ensure they have line of sight to the vehicle when using the feature. This is combined with a 200 foot (61 m) hard limit, meaning that Smart Summon won’t deliver your car from the back end of a crowded mall carpark. Instead, it’s more suited to smaller parking areas with clear sightlines.

Once activated, the car will back out of its parking space and begin to crawl towards the user. The car moves only while the user holds down the button, and stops the instant it is released. Using its suite of sensors to detect pedestrians and other obstacles, the vehicle is touted to be able to navigate the average parking environment and pick up its owner with ease.
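For a sense of what that hold-to-move behaviour boils down to, here’s a minimal dead-man’s-switch control loop with a range check, sketched in Python. To be clear, this is our illustration of the behaviour described above, not Tesla’s code: the heartbeat timeout, the flat-earth distance approximation, and the callback names (get_button_state, get_positions, creep, stop) are all assumptions made for the sake of the example, while the hold-to-move rule and the 200 foot (61 m) limit are the parts taken from Tesla’s description.

```python
import math
import time

MAX_RANGE_M = 61.0         # Tesla's stated 200 ft (61 m) hard limit
HEARTBEAT_TIMEOUT_S = 0.5  # assumed: stop if the app stops reporting a held button

def within_range(car, phone):
    """Rough equirectangular distance check between two (lat, lon) points, in metres."""
    dlat = math.radians(car[0] - phone[0])
    dlon = math.radians(car[1] - phone[1])
    mean_lat = math.radians((car[0] + phone[0]) / 2)
    return math.hypot(dlat, dlon * math.cos(mean_lat)) * 6_371_000 <= MAX_RANGE_M

def summon_loop(get_button_state, get_positions, creep, stop):
    """Dead-man's switch: the car only creeps while the button is held and in range."""
    last_press = 0.0
    while True:
        held, reported_at = get_button_state()   # latest state reported by the app
        if held:
            last_press = reported_at
        fresh = (time.time() - last_press) < HEARTBEAT_TIMEOUT_S
        car_pos, phone_pos = get_positions()
        if fresh and within_range(car_pos, phone_pos):
            creep()   # path planning and obstacle avoidance live elsewhere
        else:
            stop()    # button released, signal lost, or out of range: halt
        time.sleep(0.05)
```

The important property of this kind of loop is that every failure mode collapses to “stop”: a dropped connection or a stale button report is treated exactly the same as a released button.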

No Plan Survives First Contact With The Enemy

With the update rolled out over the air, Tesla owners jumped at the chance to try out the new functionality. Almost immediately, a cavalcade of videos of the technology in action began appearing online. Many of them show that things rarely work as well in the field as they do in the lab.

As any driver knows, body language and communication are key to navigating a busy parking area. Whether it’s a polite nod, an instructional wave, or simply direct eye contact, humans have become well-rehearsed at self-managing the flow of traffic in parking areas. When several cars are trying to navigate the area at once, a confused human can negotiate with others to take turns to exit the jam. Unfortunately, a driverless car lacks all of these abilities.

This situation proved all too much for the Tesla, and the owner was forced to intervene.

A great example is this drone video of a Model 3 owner attempting a Smart Summon in a small linear carpark. Conditions are close to ideal – a sunny day, with little traffic, and a handful of well-behaved pedestrians. In the first attempt, the hesitation of the vehicle is readily apparent. After backing out of the space, the car simply remains motionless, as two human drivers are also attempting to navigate the area. After backing up further, the Model 3 again begins to inch forward, with seemingly little ability to choose between driving on the left or the right. Spotting the increasing frustration of the other road users, the owner is forced to walk to the car and take over. In a second attempt, the car is again flummoxed by an approaching car, and simply grinds to a halt, unable to continue. Communication between autonomous vehicles and humans is an active topic of research, and likely one that will need to be solved sooner rather than later to truly advance this technology.

Pulling straight out of a wide garage onto an empty driveway is a corner case they haven’t quite mastered yet.

An expensive repair bill, courtesy of Smart Summon.

Other drivers have had worse experiences. One owner had their Tesla drive straight into the wall of their garage, an embarrassing mistake even most learner drivers wouldn’t make. Another had a scary near miss, when the Tesla seemingly failed to understand its lack of right of way. The human operator can be seen to recognise an SUV approaching at speed from the vehicle’s left, but the Tesla fails to yield, only stopping at the very last minute. It’s likely that the Smart Summon software doesn’t yet understand right of way in parking environments, where signage is minimal and it’s largely left to human intuition to figure out.

This is one reason why the line of sight requirement is key – had the user let go of the button when first noticing the approaching vehicle, the incident would have been avoided entirely. Much like other self-driving technologies, it’s not always clear how much responsibility still lies with the human in the loop, which can have dire results. And more to the point, how much responsibility should the user have, when he or she can’t know what the car is going to decide to do?

More amusingly, an Arizona man was caught chasing down a Tesla Model 3 in Phoenix after seeing the vehicle rolling through the carpark with nobody behind the wheel. While the embarrassing incident ended without injury, it goes to show that until familiarity with this technology spreads, there’s scope for misunderstandings to cause problems.

It’s Not All Bad, Though

Some users have had more luck with the feature. While it’s primarily intended to summon the car to the user’s GPS location, it can also be used to direct the car to a point within a 200 foot radius. In this video, a Tesla can be seen successfully navigating around a sparsely populated carpark, albeit with some trepidation. The vehicle appears to have difficulty initially understanding the structure of the area, first attempting a direct route before properly making its way around the curbed grass area. The progress is more akin to a basic line-following robot than an advanced robotic vehicle. However, it does successfully avoid running down its owner, who attempts walking in front of the moving vehicle to test its collision avoidance abilities. If you value your limbs, probably don’t try this at home.

No, not like that!

Wanting to explore a variety of straightforward and oddball situations, [DirtyTesla] decided to give the tech a rundown himself. The first run in a quiet carpark is successful, albeit with the car weaving, reversing unnecessarily, and ignoring a stop sign. Later runs are more confident, with the car clearly choosing the correct lane to drive in, and stopping to check for cross traffic. Testing on a gravel driveway was also positive, with the car properly recognising the grass boundaries and driving around them. That is, until the fourth attempt, when the car gently runs off the road and comes to a stop in the weeds. Further tests show that dark conditions and heavy rain aren’t a show stopper for the system, but it’s still definitely imperfect in operation.

Reality Check

Fundamentally, there are plenty of examples out there suggesting this technology isn’t ready for prime time. Unlike other driver-in-the-loop aids, like parallel parking assists, users appear to put far more confidence in Smart Summon’s ability to detect obstacles on its own, leading to many near misses and collisions.

If all it takes is a user holding a button down to drive a 4000 pound vehicle into a wall, perhaps this isn’t the way to go. It draws parallels to users falling asleep on the highway when using Tesla’s Autopilot – drivers are putting ultimate trust in a system that is, at best, only capable when used in combination with a human’s careful oversight. But even then, how is the user supposed to know what the car sees? Tesla’s tools seem to have a way of lulling users into a false sense of confidence, only for that confidence to be betrayed almost instantly, to the delight of YouTube viewers around the world.

While it’s impossible to make anything truly foolproof, it would appear that Tesla has a ways to go to get Smart Summon up to scratch. Combine this with the fact that in the vast majority of videos it would have been far quicker for an able-bodied driver to simply walk to the vehicle and drive it themselves, and it definitely appears to be more of a gimmick than a useful feature. If it can be improved, and if limitations such as the line-of-sight and distance requirements can be relaxed, it could quickly become a must-have item on luxury vehicles. That may yet be some years away, however. Watch this space, as it’s unlikely other automakers will rest for long!

Takata Airbag Recalls Widen To Potentially Affect Other Types Of Airbag

The Takata airbag case has become the largest product recall in history, has caused over 20 deaths, and has cost many billions of dollars. Replacement efforts are still ongoing, and sadly, the body count continues to rise. Against this backdrop, further recalls have been announced affecting another type of Takata airbag.

The recall affects BMW 3 Series vehicles produced between 1997 and 2000. Notably, these cars appear to have been built before Takata’s fateful decision to produce airbag inflators using ammonium nitrate propellant, known for its instability. Instead, these vehicles likely used Takata’s proprietary tetrazole propellant in its Non-Azide Driver Inflators (NADI). These were developed in the 1990s and considered a great engineering feat at the time, but were eventually phased out around 2001 for cost reasons – a decision that led to the scandal that rolls on to this day.

As these airbags were produced before the switch to ammonium nitrate, they have thus far escaped scrutiny as part of existing recalls. The new recall was prompted by two recent airbag misdeployments in Australia, one causing a death and the other a serious injury. BMW Australia has advised owners not to drive affected vehicles, and is offering loan or hire cars to affected owners. Given the age of the vehicles involved, the company is considering a buyback program in the event that suitable replacement parts cannot be made available.

This development is foreboding, as it suggests yet more cars, originally considered safe, are now at risk of injuring or killing occupants in the event of a crash. It’s not yet clear exactly which other makes are affected by this recall, but expect the number of vehicles involved to continue to climb.

[via Sydney Morning Herald]

Hiring From A Makerspace Pays Off

A makerspace is a great place to use specialty tools that may be too expensive or too large to own oneself, but there are other perks that come with participation in that particular community. For example, all of the skills you’ve gained by using all that fancy equipment may make you employable in some very niche situations. [lukeiamyourfather] from the Dallas Makerspace recently found himself in just that situation when he was asked to image a two-million-year-old fossil.

The fossil was being placed into a CT machine for imaging, but was too thick to image properly when lying flat. These things tend to be fragile, so he spent some time laser cutting an acrylic stand so the fossil could be imaged vertically instead of horizontally. Everything that wasn’t fossil had to be non-conductive for the CT machine, so lots of fishing line and foam was used as well. After the imaging was done, he was also asked to 3D print a model for a display in the museum.

This is all going on at the Perot Museum of Nature and Science if you happen to be in the Dallas area. It’s interesting to see these skills put to use out in the wild, especially for something as rare and fragile as an ancient fossil. Also, if you’d like to see how your local makerspace measures up to the Dallas Makerspace, we featured a tour of it back in 2014, although they have probably made some updates since then.

Tiny SAO, Tough CTF Challenge!

In the year or two since the specification for the SAO connector – otherwise known as the Shitty Addon – was published, we’ve seen a huge variety of these daughter boards for our favourite electronic badges. Many of them are works of art, but there’s another subset that’s far less about show and more about clever functionality. [Uri Shaked]’s little SAO is rather unprepossessing to look at, being a small round PCB with only an ATtiny microcontroller, a reset button, and a solitary LED, but its interest lies not in its looks but in its software. It contains a series of CTF puzzles, and despite its apparent simplicity should hold enough to detain even the hardiest puzzle-solving hackers.

It’s a puzzle of three parts: at the simplest level, merely flashing the LED is enough; the next level involves retrieving a buried string from the firmware; and the last requires replacing that string with one of your own. You are only allowed to do so through the SAO connector, but fortunately you do have the benefit of access to the source code to trawl for vulnerabilities. There is a hefty hint that the data sheet for the microcontroller might also be useful.
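The write-up doesn’t spell out how the badge is meant to talk to the add-on, but the SAO header does carry I2C alongside power, so a reasonable first poke is to see whether the ATtiny ACKs anything on the bus. Here’s a minimal scanning sketch in Python using the smbus2 library, assuming a Raspberry Pi wired to the SAO’s SDA/SCL pins – the wiring, the bus number, and the idea that the firmware listens on I2C at all are our assumptions, not details taken from [Uri]’s source.

```python
from smbus2 import SMBus

# Probe every usable 7-bit I2C address; anything that ACKs a read is a candidate
# device. On a Raspberry Pi, bus 1 is the one exposed on the GPIO header.
with SMBus(1) as bus:
    for addr in range(0x03, 0x78):
        try:
            bus.read_byte(addr)      # a bare read is enough to provoke an ACK
            print(f"Device responding at 0x{addr:02x}")
        except OSError:
            pass                     # no ACK: nothing listening at this address
```

If nothing answers, that’s a hint in itself – time to go trawling through the source code and the ATtiny’s data sheet, which is clearly what the puzzle intends.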

[Uri] has appeared many times on these pages, most recently when he added a microscope to his 3D printer.