San Francisco Sues To Keep Autonomous Cars Out Of The City

Although the arrival of self-driving cars, and taxis in particular, seems to be eternally ‘just around the corner’ for most of us, in an increasing number of places around the world they’re already operational, with Waymo being quite prevalent in the US. Yet despite approval by the relevant authorities, the city of San Francisco has opted to sue the state commission that approved Alphabet’s Waymo and GM’s Cruise. Their goal? To banish these services from the streets of SF, ideally forever.

Whether they will succeed in this seems highly doubtful. Although Cruise has lost its license to operate in California after a recent incident in which one of its cars dragged a pedestrian, Waymo’s track record is actually quite good. Based on public information sources, there’s a case to be made that Waymo cars are significantly safer to be in or around than those driven by humans. When contrasted with Cruise’s troubled performance, it would seem that the problem with self-driving cars isn’t so much the technology as the safety culture of the company behind it.

Yet despite Waymo’s better-than-human safety record, it is regarded as a ‘nuisance’, leading some to sabotage the cars. The more reasonable take would seem to be that although the technology is not yet mature, it has an overwhelming advantage over human drivers: it never drives distracted or intoxicated, and it can be deterministically improved and tweaked across the entire fleet based on collective experience.

These considerations were taken into account by the state commission when it approved Waymo to operate in SF, which is why legal experts note that the SF case’s chances are very slim based on the available evidence.

An Android Phone Powers A Self Driving Car

As auto manufacturers have brought self-driving features to their products, we’re told how impressive their technologies are and just how much computing power is on board to make it happen. Thus it surprised us (and it might surprise you too) that some level of self-driving can be performed by an Android phone. [Mankaran Singh] has the full details.

It starts with the realization that a modern smartphone contains the sensors necessary to perform basic self-driving, and then moves on to making a version of openpilot that can run on more than the few officially supported phones. It’s not the driverless car of science fiction, but one which performs what we think is SAE Level 2 self-driving: cruise control, lane centering, and collision warning. They take it out on the road in a little Suzuki on a busy Indian dual carriageway in the rain, and while we perhaps place more trust in meat-based driving, it does seem to perform fairly well.
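To give a feel for what Level 2 lane centering boils down to, here’s a minimal sketch of a proportional steering loop acting on the lane offset and heading error that a vision model would estimate from the camera feed. This isn’t openpilot’s actual controller; the function, gains, and limits are assumptions for illustration.

# Minimal illustrative lane-centering step: proportional control on lateral
# offset and heading error. Not openpilot's real controller; names and gains
# are invented for this example.

MAX_STEER_DEG = 10.0  # keep commands gentle, as a Level 2 assist would

def lane_centering_step(lane_offset_m: float, heading_error_deg: float) -> float:
    """Return a steering-angle command in degrees.

    lane_offset_m: lateral distance from the lane centre (positive = right of centre)
    heading_error_deg: angle between the car's heading and the lane direction
    """
    kp_offset = 4.0    # degrees of steering per metre of offset (assumed)
    kp_heading = 0.5   # degrees of steering per degree of heading error (assumed)
    steer = -(kp_offset * lane_offset_m + kp_heading * heading_error_deg)
    return max(-MAX_STEER_DEG, min(MAX_STEER_DEG, steer))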

Self-driving features are codified into a set of levels as an easy reference for what each is capable of doing. We’ve taken a look at them in the past, should you be interested.
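For a rough reminder of what those levels mean (paraphrased from SAE J3016, not the standard’s exact wording):

# Paraphrase of the SAE J3016 driving-automation levels, for quick reference.
SAE_LEVELS = {
    0: "No automation: warnings at most, the human does all the driving",
    1: "Driver assistance: steering OR speed control, one at a time",
    2: "Partial automation: steering AND speed control, but the driver must supervise",
    3: "Conditional automation: the car drives itself in limited conditions, driver takes over on request",
    4: "High automation: no driver needed within a defined operating domain",
    5: "Full automation: no driver needed anywhere a human could drive",
}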

European Roads See First Zero-Occupancy Autonomous Journey

We write a lot about self-driving vehicles here at Hackaday, but it’s fair to say that most of the limelight has fallen upon large and well-known technology companies on the west coast of the USA. It’s worth drawing attention to other parts of the world where just as much research has gone into autonomous transport, and on that note there’s an interesting milestone from Europe. The British company Oxbotica has successfully made the first zero-occupancy on-road journey in Europe, on a public road in Oxford, UK.

The glossy promo video below the break shows the feat, as the vehicle, with number plates signifying its on-road legality, drives round the relatively quiet roads through one of the city’s technology parks, and promises a bright future of local deliveries and urban transport. The vehicle itself is interesting: it’s built on a base supplied by the Aussie outfit AppliedEV, an electric spaceframe vehicle designed as a versatile platform for autonomous transport. As such, unlike so many of the aforementioned high-profile vehicles, it has no passenger cabin and no on-board driver to take the wheel in a calamity; instead it’s driven by Oxbotica’s technology and has their sensor pylon attached to its centre.


Does Your Programmer Know How Fast You Were Going?

News reports were everywhere that an autonomous taxi operated by a company called Cruise was driving through San Francisco with no headlights. The local constabulary tried to stop the vehicle and were a bit thrown that there was no driver. Then the car moved beyond an intersection and pulled over, further bemusing the officers.

The company says the headlights being off was due to human error, and that the car had stopped at a light and then moved to a safe stop by design. This leads to the question of how people, including police officers, will interact with robot vehicles.


Uber Traded Away Its In-House Self-Driving Effort

Perhaps the best-known ridesharing service, Uber has grown rapidly over the last decade. Since its founding in 2009, it has expanded into markets around the globe, and entered the world of food delivery and even helicopter transport.

One of the main headline research areas for the company was the development of autonomous cars, which would revolutionize the company’s business model by eliminating the need to pay human drivers. However, as of December, the company has announced that it is spinning off its driverless car division in a deal reportedly worth $4 billion, though that’s all on paper: Uber is trading its autonomous driving division, plus a promise to invest a further $400 million, in return for a 26% share in the self-driving tech company Aurora Innovation.

Playing A Long Game

Uber’s self-driving efforts have been undertaken in close partnership with Volvo in recent years.

Uber’s driverless car research was handled by the internal Advanced Technologies Group, made up of 1,200 employees dedicated to working on the new technology. The push to eliminate human drivers from the ride-sharing business model was a major consideration for investors in Uber’s Initial Public Offering on the NYSE in 2019. The company is yet to post a profit, and reducing the share of fare revenue paid out to human drivers would make it much easier to achieve that crucial goal.

However, Uber’s efforts have not been without incident. Tragically, in 2018, a development vehicle running in autonomous mode hit and killed a pedestrian in Tempe, Arizona. This marked the first pedestrian fatality caused by an autonomous car, and led to the suspension of the company’s on-road testing. The incident revealed shortcomings in the company’s technology and processes, and left a lasting black mark on its record.

The Advanced Technologies Group (ATG) has been purchased by a Mountain View startup by the name of Aurora Innovation, Inc. The company counts several self-driving luminaries amongst its cofounders. Chris Urmson, now CEO, was a technical leader during his time at Google’s self-driving research group. Drew Bagnell worked on autonomous driving at Uber, and Sterling Anderson came to the startup from Tesla’s Autopilot program. The company was founded in 2017, and counts Hyundai and Amazon among its venture capital investors.

Aurora could also have links with Toyota, which invested in ATG under Uber’s ownership in 2019. Unlike Uber, which focused solely on building viable robotaxis for use in limited geographical areas, Aurora intends the core of its technology, the Aurora Driver, to be adaptable to everything from “passenger sedans to class-8 trucks”.

Aurora has been developing self-driving technology to handle real-world situations since its founding in 2017. Being able to master the challenges of a crowded city will be key to succeeding in the marketplace.

Getting rid of ATG certainly spells the end of Uber’s in-house autonomous driving effort, but it doesn’t mean the company is getting out of the game. Holding a stake in Aurora, Uber still stands to profit from its early investment, and will retain access to the technology as it develops. At the same time, trading ATG away to an outside firm puts daylight between the rideshare company and any negative press from future testing incidents.

Even if Aurora only retains 75% of ATG’s 1,200 employees, it’s doubling in size, and will be worth keeping an eye on in the future.

Firmware Hints That Tesla’s Driver Camera Is Watching

Currently, if you want to use the Autopilot or Self-Driving modes on a Tesla vehicle, you need to keep your hands on the wheel at all times. That’s because, ultimately, the human driver is still the responsible party. Tesla is adamant that functions which allow the car to steer itself within a lane, avoid obstacles, and intelligently adjust its speed to match traffic all constitute a driver assistance system. If somebody figures out how to fool the wheel sensor and take a nap while their shiny new electric car is hurtling down the freeway, Tesla wants no part of it.

So it makes sense that the company’s official line regarding the driver-facing camera in the Model 3 and Model Y is that it’s there to record what the driver was doing in the seconds leading up to an impact. As explained in the release notes of the June 2020 firmware update, Tesla owners can opt-in to providing this data:

Help Tesla continue to develop safer vehicles by sharing camera data from your vehicle. This update will allow you to enable the built-in cabin camera above the rearview mirror. If enabled, Tesla will automatically capture images and a short video clip just prior to a collision or safety event to help engineers develop safety features and enhancements in the future.

But [green], who’s spent the last several years poking and prodding at Tesla’s firmware and self-driving capabilities, recently found some compelling hints that there’s more to the story. As part of the vehicle’s image recognition system, which is usually tasked with picking up other vehicles or pedestrians, they found several interesting classes that don’t seem necessary given the official explanation of what the cabin camera is doing.

If all Tesla wanted was a few seconds of video uploaded to their offices each time one of their vehicles got into an accident, they wouldn’t need to be running image recognition configured to detect distracted drivers on that footage in real time. While you could make the argument that this data would be useful to them, there would still be no reason to do it in the vehicle when it could be analyzed as part of the crash investigation. It seems far more likely that Tesla is laying the groundwork for a system that could give the vehicle another way of determining whether the driver is paying attention.
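As a purely hypothetical illustration of what such a system might look like, in-cabin attention monitoring usually comes down to running a classifier on camera frames and escalating a warning when a “distracted” class stays above some confidence threshold for long enough. None of the names, classes, or numbers below come from Tesla’s firmware; they’re assumptions for the sketch.

# Hypothetical sketch of an in-cabin attention monitor. This is not Tesla's code;
# the class names, threshold, and frame window are invented for illustration.

DISTRACTION_CLASSES = {"eyes_closed", "looking_down", "phone_in_hand"}
THRESHOLD = 0.8          # per-frame confidence needed to count a frame as "distracted"
CONSECUTIVE_FRAMES = 15  # roughly 1.5 seconds at 10 fps before escalating a warning

def should_nag(frame_scores: list[dict[str, float]]) -> bool:
    """frame_scores holds one {class_name: confidence} dict per recent camera frame."""
    streak = 0
    for scores in frame_scores:
        distracted = any(scores.get(c, 0.0) >= THRESHOLD for c in DISTRACTION_CLASSES)
        streak = streak + 1 if distracted else 0
        if streak >= CONSECUTIVE_FRAMES:
            return True  # sustained distraction: time to alert the driver
    return False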


A Car Phone — No, Not That Kind

Autonomous vehicle development is a field of technology that remains relatively elusive to the average hacker, what with needing a whole car and all. Rather than deal with such a large-scale challenge, [Piotr Sokólski] has instead turned to implementing the same principles on the scale of a small radio-controlled car.

Wanting to lower the barrier of entry for developing software for self-driving cars, he based his design on something you’re likely to have lying around already: a smartphone. He cites the Google Cardboard project as his inspiration, for how it made VR accessible without needing expensive hardware. The phone controls the actuators and wheel motors through a custom board, which it talks to via a Bluetooth connection. And since the way the phone is mounted in the frame leaves the camera pointing up, [Piotr] came up with a really clever solution: a mirror used as a periscope so the car can see in front of itself.
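We don’t know the exact framing [Piotr] uses on that Bluetooth link, but as a rough sketch of the idea, a steering/throttle command sent to a motor-controller board over a Bluetooth serial port could be as simple as the following. The packet layout, scaling, and device path are all assumptions for illustration.

# Illustrative only: a tiny drive command sent over a Bluetooth serial link
# (here an RFCOMM port exposed as /dev/rfcomm0). The framing, scaling, and
# device path are assumptions for the example, not the project's actual protocol.
import struct
import serial  # pyserial

def send_drive_command(port: serial.Serial, steering: float, throttle: float) -> None:
    """steering and throttle are in [-1.0, 1.0], packed as signed 16-bit values."""
    packet = struct.pack(
        "<BhhB",
        0xAA,                    # start-of-frame marker
        int(steering * 32767),   # steering command
        int(throttle * 32767),   # throttle command
        0x55,                    # end-of-frame marker
    )
    port.write(packet)

if __name__ == "__main__":
    link = serial.Serial("/dev/rfcomm0", 115200, timeout=1)
    send_drive_command(link, steering=0.1, throttle=0.3)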

The software here has two parts, though the phone app does little more than serve as an interface, sending off a video feed to be processed. All of the computer vision work happens on the desktop side, which lets [Piotr] do some fun things like using reinforcement learning to keep the car driving as long as possible without crashing. This is achieved by having the algorithm observe the images coming from the phone and giving it a negative reward whenever the accelerometer detects a collision. Another experiment he’s done is to use a QR tag on top of the car, visible to a fixed overhead camera, to determine the car’s position in the room.
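That reward scheme is simple enough to sketch out: the agent collects a small reward for every step it keeps driving and a large penalty when the phone’s accelerometer registers a jolt. The threshold and reward values below are our own assumptions, not the ones used in the project.

# Sketch of a crash-penalizing reward signal for the reinforcement learner.
# The threshold and reward magnitudes are assumptions for illustration.
import math

COLLISION_ACCEL_G = 2.5    # assumed: spikes above this are treated as crashes
STEP_REWARD = 0.1          # small reward for every step the car keeps driving
COLLISION_PENALTY = -10.0  # large penalty when a collision is detected

def step_reward(accel_xyz_g: tuple[float, float, float]) -> tuple[float, bool]:
    """Return (reward, episode_done) for one timestep, given the accelerometer reading in g."""
    magnitude = math.sqrt(sum(a * a for a in accel_xyz_g))
    if magnitude > COLLISION_ACCEL_G:
        return COLLISION_PENALTY, True   # crashed: punish and end the episode
    return STEP_REWARD, False            # still driving: keep rewarding survival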

This might not be the first time someone’s made a scaled-down model of a self-driving vehicle, but it’s one of the most cleverly designed ones, and it’s certainly much simpler than trying to do it on a full-sized car in your garage.
