For years now we have been told that self-driving cars will be the Next Big Thing, and we’ve seen companies (yes, Tesla, but others too) touting current and planned features with names like “Autopilot” and “self-driving”. Cutting through the marketing hype to unpack what those labels really mean is difficult. But there is a standard, SAE J3016, that describes these capabilities and assigns them levels from zero to five.
Now we’re greeted with the news that Honda have put a small number of vehicles in showrooms in Japan that are claimed to be the first commercially available level 3 autonomous cars. That claim is debatable; Audi, for example, briefly offered level 3 capability on one of its luxury sedans, despite there being few markets in which it could legally be used. But the Honda Legend SENSING Elite can justifiably claim to be the only car currently on sale to the general public with the feature. It carries a battery of sensors to keep track of its driver, its position, and the road conditions around it, and it boasts a “Traffic Jam Pilot” mode, which “enables the automated driving system to drive the vehicle under certain conditions, instead of the driver, such as when the vehicle is in congested traffic on an expressway”.
Sounds impressive, but just what is a level 3 autonomous car, and what are all the other levels?
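For quick reference, the SAE J3016 levels break down roughly as in the Python snippet below. The one-line summaries are our own informal paraphrases rather than the standard’s exact wording, but they capture the gist: the driver’s responsibility shrinks as the number climbs.

```python
# Rough paraphrase of the SAE J3016 driving automation levels.
# The one-line summaries are informal, not the standard's exact wording.
SAE_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: steering OR speed assist (e.g. adaptive cruise).",
    2: "Partial automation: steering AND speed assist, driver supervises constantly.",
    3: "Conditional automation: the car drives in limited conditions, driver must take over on request.",
    4: "High automation: no driver attention needed within a defined operating domain.",
    5: "Full automation: the car drives itself everywhere, in all conditions.",
}

if __name__ == "__main__":
    for level, summary in SAE_LEVELS.items():
        print(f"Level {level}: {summary}")
```

Level 3 is the first rung where the car, not the human, is actually doing the driving under some conditions, which is exactly why Honda’s announcement matters.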
The notion of self-driving cars isn’t new. You might be surprised at the number of such projects dating back to the 1920s, though many of those systems relied on external aids built into the roadway. It’s only recently that self-driving cars on existing roads have moved closer to reality than fiction; increased processing power, smaller and more power-efficient computers, and compact lidar and millimeter-wave radar sensors are but a few of the enabling technologies. In South Korea, [Prof Min-hong Han] and his team of students took advantage of these advances and built an autonomous car that successfully navigated the streets of Seoul in several field trials. A second version subsequently drove itself along the 300 km journey from Seoul to the southern port city of Busan. You might think this is boring news, until you realize it was accomplished back in the early 1990s using an Intel 386-powered desktop computer.
The project created a lot of buzz at the time, and was shown at the Daejeon Expo ’93 international exposition. Alas, the government eventually decided to cancel the research program, as it didn’t fit into their focus on heavy industries like ship building and steel production. Given the tremendous focus on self-driving and autonomous vehicles today, and with the benefit of hindsight, we wonder if that was the best choice. This isn’t the only decision from Seoul that seems questionable when viewed from the present — Samsung executives famously declined to buy Andy Rubin’s new operating system for digital cameras and handsets back in late 2004, and a few weeks later Android was purchased by Google.
You should check out [Prof Han]’s YouTube channel, which has videos from the car’s camera as it operates in various conditions, overlaid with lane-recognition markers and other information. I’ve driven the streets of Seoul, and that alone can be a frightening experience. But [Han] manages to stretch out in the back seat, so confident in his system that he doesn’t even wear a seatbelt.
The Consumer Electronics Show has not typically been a place for concept cars, and Sony aren’t known as a major automaker. However, times change, and the electric transport revolution has changed much. At the famous trade show, Sony shocked many by revealing its Vision-S concept: a running, driving prototype electric car.
Far from a simple mockup to show off in-car entertainment or fancy new cameras, Sony’s entry into the automotive world is surprisingly complete. Recently, the Japanese tech giant has been spotted testing the vehicle on public roads in Austria, raising questions about the future of the project. Let’s dive into what Sony has shown off, and what it means for the potential of the Vision-S.
Perhaps the best-known ridesharing service, Uber has grown rapidly over the last decade. Since its founding in 2009, it has expanded into markets around the globe, and entered the world of food delivery and even helicopter transport.
One of the company’s main headline research areas was the development of autonomous cars, which promised to revolutionize its business model by eliminating the need to pay human drivers. However, as of December, the company has announced that it is spinning off its driverless car division in a deal reportedly worth $4 billion, though that’s all on paper: Uber is trading its autonomous driving division, along with a promise to invest a further $400 million, in return for a 26% stake in the self-driving tech company Aurora Innovation.
Playing A Long Game
Uber’s driverless car research was handled by its internal Advanced Technologies Group (ATG), made up of 1,200 employees dedicated to the new technology. The push to eliminate human drivers from the ride-sharing business model was a major selling point for investors in Uber’s Initial Public Offering on the NYSE in 2019. The company has yet to post a profit, and reducing the share of each fare going to a human driver would make that crucial goal much easier to reach.
However, Uber’s efforts have not been without incident. Tragically, in 2018 a development vehicle operating in autonomous mode struck and killed a pedestrian in Tempe, Arizona. It was the first pedestrian fatality caused by an autonomous car, and it led to the suspension of the company’s on-road testing. The incident revealed shortcomings in Uber’s technology and processes, and remained a black mark on the program going forward.
ATG has been purchased by a Mountain View startup by the name of Aurora Innovation, Inc., which counts several self-driving luminaries amongst its cofounders. Chris Urmson, now CEO, was a technical leader of Google’s self-driving research group. Drew Bagnell worked on autonomous driving at Uber, and Sterling Anderson came to the startup from Tesla’s Autopilot program. The company was founded in 2017, and counts Hyundai and Amazon among its investors.
Aurora could also end up with links to Toyota, which likewise invested in ATG under Uber’s ownership in 2019. And unlike Uber, which focused solely on building viable robotaxis for limited geographical areas, Aurora intends the Aurora Driver, the core of its technology, to be adaptable to everything from “passenger sedans to class-8 trucks”.
Getting rid of ATG certainly spells the end of Uber’s in-house autonomous driving effort, but it doesn’t mean the company is getting out of the game. Holding a stake in Aurora, Uber still stands to profit from its early investment, and it retains access to the technology as it develops. At the same time, handing ATG off to an outside firm puts daylight between the rideshare company and any negative press from future testing incidents.
There was a time not too long ago when hacking a car more often than not involved literal hacking. Sheet metal was cut, engine cylinders were bored, and crankshafts were machined to increase piston travel. It was all in pursuit of milking the last ounce of performance out of every drop of gasoline, along with a little personal expression in the form of paint and chrome.
While it’s still possible, and encouraged, to hack cars that way, the addition of engine control units and other electronic systems to our rides has created an entirely different universe of car hacking options, which Amith Reddy distilled into his very popular workshop at the 2020 Remoticon. The secret sauce behind all the hacks you can accomplish in today’s drive-by-wire cars is the Controller Area Network (CAN) bus, the network that ties together the array of sensors, actuators, and controllers hiding under the metal and plastic of modern cars.
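To get a feel for what that looks like in practice, a cheap USB-CAN adapter and a few lines of Python are enough to watch raw frames fly by and inject one of your own. The sketch below assumes a Linux machine with SocketCAN, the python-can library, and an interface named can0 hooked to the car’s diagnostic port; the 0x123 arbitration ID and payload are placeholders, since real message IDs differ from one make and model to the next.

```python
# Minimal CAN sniff-and-send sketch using python-can.
# Assumes Linux SocketCAN with an interface named "can0" already brought up,
# e.g.:  sudo ip link set can0 up type can bitrate 500000
import can

def main():
    bus = can.interface.Bus(channel="can0", interface="socketcan")

    # Print the first 20 frames seen on the bus: arbitration ID and payload.
    for _ in range(20):
        msg = bus.recv(timeout=1.0)
        if msg is None:
            print("no traffic...")
            continue
        print(f"ID=0x{msg.arbitration_id:03X} data={msg.data.hex()}")

    # Send a frame of our own. 0x123 and the payload are placeholders;
    # real IDs and their meanings are specific to each make and model.
    tx = can.Message(arbitration_id=0x123,
                     data=[0x01, 0x02, 0x03, 0x04],
                     is_extended_id=False)
    bus.send(tx)
    bus.shutdown()

if __name__ == "__main__":
    main()
```

From there, the usual workflow is to log traffic while operating a single control (a window switch, the throttle) and diff the captures to figure out which ID does what.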
The leap to self-driving cars could be as game-changing as the one from horse power to engine power. If cars prove able to drive themselves better than humans do, the safety gains could be enormous: auto accidents were the #8 cause of death worldwide in 2016. And who doesn’t want to turn travel time into something either truly restful or truly productive?
But getting there is a big challenge, as Alfred Jones knows all too well. As Head of Mechanical Engineering at Lyft’s Level 5 self-driving division, he leads the team building the roof racks and other gear that give the vehicles their sensors and computing hardware. In his keynote talk at Hackaday Remoticon, Jones walks us through what each level of self-driving means, how the problem is being approached, and where the sticking points lie between what’s being tested now and a truly steering-wheel-free future.
Check out the video below, and take a deeper dive into the details of his talk.
Our smartphones are incredibly powerful computers in their own right, yet we don’t often see them directly integrated into projects. Intel’s Intelligent Systems Lab has done exactly that with the release of OpenBot, an open-source, smartphone-based self-driving robot.
Most of the magic happens on the smartphone, which runs an app built on TensorFlow Lite that fuses the phone’s camera and onboard sensors with data from the ultrasonic sensors and wheel encoders on the robot. The robot itself is relatively simple: four geared DC motors and motor drivers wired to an Arduino Nano, which talks to the Android phone over serial.
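Because the link between phone and robot is just a serial port, you can also poke at the drivetrain from a laptop. The snippet below is a rough illustration using pyserial; the “c<left>,<right>” command format is a made-up placeholder standing in for whatever protocol the OpenBot firmware actually speaks, so treat it as a sketch of the idea rather than a drop-in test script.

```python
# Hypothetical drivetrain test over USB serial, in the spirit of OpenBot's
# phone-to-Arduino link. The "c<left>,<right>" command format is a placeholder,
# NOT the actual OpenBot protocol; adapt it to whatever your firmware expects.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # adjust for your system (e.g. COM3 on Windows)
BAUD = 115200

def drive(ser, left, right):
    """Send a left/right motor command in the placeholder text format."""
    ser.write(f"c{left},{right}\n".encode("ascii"))

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    time.sleep(2)            # give the Arduino time to reset after the port opens
    drive(ser, 128, 128)     # gentle forward
    time.sleep(1)
    drive(ser, 128, -128)    # spin in place
    time.sleep(1)
    drive(ser, 0, 0)         # stop
```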
The app created by the Intel ISL team comes preloaded with three AI models, offering person following and two different modes of autonomous navigation. By connecting a Bluetooth controller to the smartphone and driving the robot around your own environment manually while collecting data, you can also train a custom driving policy suited to your surroundings.
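Once a policy has been trained and exported to TensorFlow Lite, inference boils down to feeding camera frames into an interpreter and reading back control values. Here is a bare-bones sketch of that loop using the tflite-runtime Python package; the model filename, the 256×96 input size, and the two-value output are assumptions for illustration rather than OpenBot’s actual shapes.

```python
# Bare-bones TensorFlow Lite inference loop for an image-to-controls policy.
# Model name, input size, and output layout are illustrative assumptions.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="driving_policy.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def predict_controls(frame_rgb):
    """frame_rgb: HxWx3 uint8 image already resized to the model's input shape."""
    x = np.expand_dims(frame_rgb.astype(np.float32) / 255.0, axis=0)
    interpreter.set_tensor(input_details[0]["index"], x)
    interpreter.invoke()
    # Assume the model emits [left, right] (or [steering, throttle]) in one tensor.
    return interpreter.get_tensor(output_details[0]["index"])[0]

# Example with a dummy frame; in practice this would be a live camera capture.
dummy = np.zeros((96, 256, 3), dtype=np.uint8)
print(predict_controls(dummy))
```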
This looks like an excellent way to get a taste of autonomous robots on a small budget, while still being a viable base for more demanding applications. We’ve seen only a few smartphone-based robots, like DriveMyPhone and SmartiPresense, which don’t have AI capabilities and are intended for telepresence applications instead. We’ve always wondered why we don’t see more projects built around cellphones, so we welcome the example.