As auto manufacturers have brought self-driving features to their products, we’re told about how impressive their technologies are and just how much computing power is on board to make it happen. Thus it surprised us (and it might surprise you too) that some level of self-driving can be performed by an Android phone. [Mankaran Singh] has the full details.
It starts with the realization that a modern smartphone contains the necessary sensors to perform basic self-driving, and then moves on to making a version of openpilot that can run on more than the few supported phones. It’s not the driverless car of science fiction, but one which performs what we think is SAE Level 2 self-driving: cruise control, lane centering, and collision warning. They take it out on the road in a little Suzuki on a busy Indian dual carriageway in the rain, and while we perhaps place more trust in meat-based driving, it does seem to perform fairly well.
We write a lot about self-driving vehicles here at Hackaday, but it’s fair to say that most of the limelight has fallen upon large and well-known technology companies on the west coast of the USA. It’s worth drawing attention to other parts of the world where just as much research has gone into autonomous transport, and on that note there’s an interesting milestone from Europe. The British company Oxbotica has successfully made the first zero-occupancy on-road journey in Europe, on a public road in Oxford, UK.
The glossy promo video below the break shows the feat as the vehicle, with number plates signifying its on-road legality, drives round the relatively quiet roads through one of the city’s technology parks, and promises a bright future of local deliveries and urban transport. The vehicle itself is interesting: it’s a platform supplied by the Aussie outfit AppliedEV, an electric spaceframe vehicle designed to provide a versatile base for autonomous transport. As such, unlike so many of the aforementioned high-profile vehicles, it has no passenger cabin and no on-board driver to take the wheel in a calamity; instead it’s driven by Oxbotica’s technology and has their sensor pylon attached to its centre.
News reports were everywhere that an autonomous taxi operated by a company called Cruise was driving through San Francisco with no headlights. The local constabulary tried to stop the vehicle and were a bit thrown that there was no driver. Then the car moved beyond an intersection and pulled over, further bemusing the officers.
The company says the headlights being off was down to human error, and that the car had stopped at a light and then moved to a safe stop by design. This leads to the question of how people, including police officers, will interact with robot vehicles.
Perhaps the best-known ridesharing service, Uber has grown rapidly over the last decade. Since its founding in 2009, it has expanded into markets around the globe, and entered the world of food delivery and even helicopter transport.
Uber’s driverless car research was handled by the internal Advanced Technologies Group, made up of 1,200 employees dedicated to working on the new technology. The push to eliminate human drivers from the ride-sharing business model was a major consideration for investors of Uber’s Initial Public Offering on the NYSE in 2019. The company is yet to post a profit, and reducing the amount of fares going to human drivers would make it much easier for the company to achieve that crucial goal.
Aurora could also have links with Toyota, which also invested in ATG under Uber’s ownership in 2019. Unlike Uber, which solely focused on building viable robotaxis for use in limited geographical locations, the Aurora Driver, the core of the company’s technology, aims to be adaptable to everything from “passenger sedans to class-8 trucks”.
Getting rid of ATG certainly spells the end of Uber’s in-house autonomous driving effort, but it doesn’t mean they’re getting out of the game. Holding a stake in Aurora, Uber still stands to profit from its early investment, and will retain access to the technology as it develops. At the same time, handing ATG off to an outside firm puts daylight between the rideshare company and any negative press from future testing incidents.
Currently, if you want to use the Autopilot or Self-Driving modes on a Tesla vehicle you need to keep your hands on the wheel at all times. That’s because, ultimately, the human driver is still the responsible party. Tesla is adamant about the fact that functions which allow the car to steer itself within a lane, avoid obstacles, and intelligently adjust its speed to match traffic all constitute a driver assistance system. If somebody figures out how to fool the wheel sensor and take a nap while their shiny new electric car is hurtling down the freeway, they want no part of it.
So it makes sense that the company’s official line regarding the driver-facing camera in the Model 3 and Model Y is that it’s there to record what the driver was doing in the seconds leading up to an impact. As explained in the release notes of the June 2020 firmware update, Tesla owners can opt-in to providing this data:
Help Tesla continue to develop safer vehicles by sharing camera data from your vehicle. This update will allow you to enable the built-in cabin camera above the rearview mirror. If enabled, Tesla will automatically capture images and a short video clip just prior to a collision or safety event to help engineers develop safety features and enhancements in the future.
But [green], who’s spent the last several years poking and prodding at the Tesla’s firmware and self-driving capabilities, recently found some compelling hints that there’s more to the story. As part of the vehicle’s image recognition system, which is usually tasked with picking up other vehicles or pedestrians, they found several interesting classes that don’t seem necessary given the official explanation of what the cabin camera is doing.
If all Tesla wanted was a few seconds of video uploaded to their offices each time one of their vehicles got into an accident, they wouldn’t need to run image recognition against it in real-time, configured to detect distracted drivers. While you could argue that this data would be useful to them, there would still be no reason to process it in the vehicle when it could be analyzed as part of the crash investigation. It seems far more likely that Tesla is laying the groundwork for a system that could give the vehicle another way of determining if the driver is paying attention.
Wanting to lower the barrier of entry for developing software for self-driving cars, he based his design on something you’re likely to have lying around already: a smartphone. He cites the Google Cardboard project as his inspiration, with how it made VR more accessible without needing expensive hardware. The phone controls the actuators and wheel motors through a custom board, which it talks to over a Bluetooth connection. And since the camera points up in the way the phone is mounted in the frame, [Piotr] came up with a really clever solution: using a mirror as a periscope so the car can see in front of itself.
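A phone-to-board link like the one described might send a simple framed command over the Bluetooth serial channel. The byte layout, value ranges, and checksum below are purely illustrative assumptions on our part, not [Piotr]’s actual protocol:

```python
def encode_drive_command(throttle, steering):
    """Pack throttle and steering values (-1.0 to 1.0) into a 4-byte
    frame: start byte, throttle byte, steering byte, XOR checksum.
    This layout is a hypothetical example, not the project's protocol."""
    def to_byte(v):
        # Map -1.0..1.0 onto 0..254, with 127 as the neutral position
        return max(0, min(254, int(round((v + 1.0) * 127))))

    t, s = to_byte(throttle), to_byte(steering)
    checksum = 0xA5 ^ t ^ s          # simple integrity check
    return bytes([0xA5, t, s, checksum])

# A neutral command: no throttle, wheels centred
frame = encode_drive_command(0.0, 0.0)
```

On the board side, a microcontroller would read four bytes, verify the checksum, and map the two payload bytes back to motor and servo outputs.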
The software here has two parts, though the phone app does little more than serve as an interface, sending off a video feed to be processed. All of the computer vision processing is done on the desktop side, which allows [Piotr] to do some fun things like using reinforcement learning to keep the car driving as long as possible without crashing. This is achieved by having the algorithm observe the images coming from the phone and giving it a negative reward whenever the accelerometer detects a collision. Another experiment he’s done is using a QR tag on top of the car, visible to a fixed overhead camera, to determine the car’s position in the room.
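The reward scheme described — penalising the agent whenever the accelerometer registers a crash — can be sketched roughly as follows. The threshold and reward values here are assumed stand-ins for illustration, not values from [Piotr]’s code:

```python
CRASH_PENALTY = -10.0  # large negative reward on collision (assumed value)
STEP_REWARD = 0.1      # small positive reward for each crash-free step

def collision_detected(accel_magnitude, threshold=3.0):
    """Treat a sudden spike in accelerometer magnitude as a crash."""
    return accel_magnitude > threshold

def reward_for_step(accel_magnitude):
    """Survive a step: small bonus. Crash: large penalty."""
    if collision_detected(accel_magnitude):
        return CRASH_PENALTY
    return STEP_REWARD

# Simulated episode: mostly gentle readings, then a spike (the crash)
readings = [0.5, 0.8, 1.1, 0.9, 5.2]
episode_return = sum(reward_for_step(r) for r in readings)
```

A reinforcement learning algorithm then adjusts its steering policy to maximise this return, which in practice means keeping the car on the road for as long as possible.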
The tech world has a love for Messianic figures, usually high-profile CEOs of darling companies whose words are hung upon and combed through for hidden meaning, as though they had arrived from above to our venture-capital-backed prophet on tablets of stone. In the past it has been Steve Jobs or Bill Gates; now it seems to be Elon Musk who has received this treatment. Whether his companies are launching a used car into space, shooting things down tubes in the desert, or landing used booster rockets in synchrony, everybody’s talking about him. He’s a showman whose many pronouncements are always soon eclipsed by bigger ones to keep his public on the edge of their seats, and now we’ve been suckered in too, which puts us on the spot, doesn’t it?
Your Johnny Cab is almost here
The latest pearl of Muskology came in a late April presentation: that by 2020 there would be a million Tesla electric self-driving taxis on the road. It involves a little sleight of hand in assuming that a fleet of existing Teslas will be software-upgraded to be autonomous-capable and that some of them will somehow be abandoned by their current owners and end up as taxis, but it’s still a bold claim by any standard.
Here at Hackaday, we want to believe, but we’re not so sure. It’s time to have a little think about it all. It’s the start of May, so 2020 is about 7 months away. December 2020 is about 18 months away, so let’s give Tesla that timescale. 18 months to put a million self-driving taxis on the road. Can the company do it? Let’s find out.