Automate The Freight: Robotic Deliveries Are On The Way

Seems like all the buzz about autonomous vehicles these days centers around self-driving cars. Hands-free transportation certainly has its appeal – being able to whistle up a ride with a smartphone app and converting commute time to Netflix binge time is an alluring idea. But is autonomous personal transportation really the killer app that everyone seems to think it is? Wouldn’t we get more bang for the buck by automating something a little more mundane and a lot more important? What about automating the shipping of freight?

Look around the next time you’re not being driven to work by a robot and you’re sure to notice a heck of a lot of trucks on the road. From small panel trucks making local deliveries to long-haul tractor trailers working cross-country routes, the roads are lousy with trucks. And behind the wheel of each truck is a human driver (or two, in the case of team-driven long-haul rigs). The drivers are the weak point in this system, and the big reason I think self-driving trucks will be commonplace long before we see massive market penetration of self-driving cars.

Continue reading “Automate The Freight: Robotic Deliveries Are On The Way”

The Predictability Problem With Self-Driving Cars

A law professor and an engineering professor walk into a bar. What comes out is a nuanced article on a downside of autonomous cars, and how to deal with it. The short version of their paper: self-driving cars need to be more predictable to humans in order to coexist.

We share living space with a lot of machines. A good number of them are mobile and dangerous but under complete human control: the car, for instance. When we want to know what another car at an intersection is going to do, we think about the driver of the car, and maybe even make eye contact to see that they see us. We then think about what we’d do in their place, and the traffic situation gets negotiated accordingly.

When its self-driving car got into an accident in February, Google replied that “our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.” Apparently, so did the car, right before it drove out in front of an oncoming bus. The bus driver didn’t expect the car to pull (slowly) into its lane, either.

All of the other self-driving car accidents to date have been the fault of other drivers, and the authors think this is telling. If you unexpectedly brake all the time, you can probably expect to eventually get hit from behind. If people can’t read your car’s AI’s mind, you’re gonna get your fender bent.

The paper’s solution is to make autonomous vehicles more predictable, and the authors mention a number of obvious fixes, from “I-sense-you” lights to inter-car communication. But then there are aspects we hadn’t thought about: specific markings that indicate the AI’s capabilities, for instance. A cyclist signaling a left turn would really like to know whether the car behind has the new bicyclist-handsignal-recognition upgrade before entering the lane. The ability to put your mind into the mind of the other car is crucial, and it requires tons of information about the driver.
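To make that concrete, here’s a minimal sketch of what a machine-readable capability advertisement might look like. To be clear, none of this is from the paper: the fields, the `CapabilityAd` name, and the JSON-over-whatever transport are all invented placeholders.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical V2V "capability advertisement" -- invented for illustration,
# not taken from the paper or any real standard.
@dataclass
class CapabilityAd:
    vehicle_id: str
    recognizes_hand_signals: bool    # e.g. the bicyclist-handsignal upgrade
    yields_to_merging_traffic: bool
    intent: str                      # "merging", "stopping", "proceeding", ...

def broadcast(ad: CapabilityAd) -> bytes:
    """Serialize the ad for whatever broadcast medium wins out (DSRC, BLE, ...)."""
    return json.dumps(asdict(ad)).encode("utf-8")

# A cyclist's receiver could check the flag before committing to the lane:
ad = CapabilityAd("AV-1234", recognizes_hand_signals=True,
                  yields_to_merging_traffic=True, intent="proceeding")
if json.loads(broadcast(ad))["recognizes_hand_signals"]:
    print("This car claims it can read a hand signal.")
```

The hard part, of course, isn’t the serialization; it’s agreeing on what the fields mean, and trusting the claims.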

All of this may well require legislation. Courts already weigh intent and what all parties to an accident “should have known” to apportion blame, in addition to the black-and-white of the law. When one of the parties is an AI, this gets murkier: how do you establish what the algorithm should have been thinking? This is far from a solved problem, and it’s becoming more relevant.

We’ve written on the ethics of self-driving cars before, but simply in terms of their decision-making ability. This paper brings home the idea that we also need to be able to understand what they’re thinking, which is as much a human-interaction and legal problem as it is technological.

[Headline image: Google Self-Driving Car Project]

Self-Driving Acura, Built In A Garage

[George Hotz], better known by his hacker moniker [GeoHot], was the first person to successfully hack the iPhone — now he’s trying his hand at building his very own self-driving vehicle.

The 26-year-old already has an impressive rap sheet: he was the first to hack the PS3 when it came out, and he got sued for it.

According to Bloomberg reporter [Ashlee Vance], [George] built this self-driving vehicle in around a month — which, if true, is pretty damn incredible. It’s a 2016 Acura ILX with a lidar array on its roof, as well as a few cameras. The glove box has been ripped out to house the electronics, including a mini-PC, GPS sensors, and network switches. A large 21.5″ LCD screen sits in the dash, not unlike the standard Tesla affair.
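Bloomberg’s piece doesn’t show any of [George]’s code, so what follows is purely a hypothetical skeleton, assuming the usual architecture for a build like this: the glove-box mini-PC multiplexing lidar, camera, and GPS feeds into one control loop. Every function, device, and number below is a stand-in.

```python
import time

# Hypothetical main loop for the glove-box mini-PC -- all stubs,
# not [George]'s actual code.
def read_lidar():   return {"nearest_obstacle_m": 42.0}  # stub sensor feed
def read_cameras(): return {"lane_offset_m": 0.1}        # stub vision output
def read_gps():     return {"lat": 0.0, "lon": 0.0}      # stub position fix

def plan(lidar, cams, gps):
    """Toy policy: hold the lane center, brake if anything gets close."""
    steer = -0.5 * cams["lane_offset_m"]          # proportional lane-keeping
    brake = lidar["nearest_obstacle_m"] < 5.0     # panic threshold
    return steer, brake

for _ in range(3):                 # bounded here; a real loop runs forever
    steer, brake = plan(read_lidar(), read_cameras(), read_gps())
    print(f"steer={steer:+.3f} brake={brake}")
    # ...actuator commands would go out over CAN here...
    time.sleep(0.05)               # ~20 Hz control loop
```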

Oh, and it runs Linux. Continue reading “Self-Driving Acura, Built In A Garage”

The Ethics Of Self-Driving Cars Making Deadly Decisions

Self-driving cars are starting to pop up everywhere as companies slowly begin to test and improve them for the commercial market. Heck, Google’s self-driving car actually has its very own driver’s license in Nevada! There have been few accidents so far, and most of the time, they say, it wasn’t the autonomous car’s fault. But once autonomous cars are widespread, accidents are inevitable. And what will happen when your car has to decide whether to save you, or a crowd of people? Ever think about that before?

It’s an extremely valid concern, and it raises a huge ethical issue. In the rare circumstance that the car has to choose the “best” outcome, what will determine that? Reducing the loss of life? Even if it means crashing into a wall, mortally injuring you, the driver? Maybe car manufacturers will finally have to make ejection seats a standard feature!
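For the sake of argument, here’s a deliberately naive sketch of that “reduce the loss of life” policy. The maneuvers, probabilities, and head counts are all made up, but it shows how a cold expected-harm calculation can end up pointing the car at the wall:

```python
# Toy expected-harm minimizer -- an illustration of the dilemma, not a
# proposal. All numbers are invented.
def expected_harm(p_fatality: float, people_at_risk: int) -> float:
    return p_fatality * people_at_risk

def choose_maneuver(options):
    """Pick the option with the lowest expected harm, occupants included."""
    return min(options, key=lambda o: expected_harm(o["p_fatality"], o["people"]))

options = [
    {"name": "swerve into wall", "p_fatality": 0.8, "people": 1},  # you, the driver
    {"name": "brake straight",   "p_fatality": 0.3, "people": 5},  # the crowd
]
print(choose_maneuver(options)["name"])  # -> "swerve into wall"
```

Whether a manufacturer could ever ship a car that knowingly returns “swerve into wall” is exactly the question.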

Continue reading “The Ethics Of Self-Driving Cars Making Deadly Decisions”