Fatalities Vs False Positives: The Lessons From The Tesla And Uber Crashes

In one bad week in March, two people were indirectly killed by automated driving systems. A Tesla vehicle drove into a barrier, killing its driver, and an Uber vehicle hit and killed a pedestrian crossing the street. The National Transportation Safety Board’s preliminary reports on both accidents came out recently, and these bring us as close as we’re going to get to a definitive view of what actually happened. What can we learn from these two crashes?

One factor makes these two crashes look different on the surface: Tesla’s algorithm misidentified a lane split and actively accelerated into the barrier, while the Uber system eventually identified the pedestrian crossing the street correctly and probably had time to stop, but its emergency braking was disabled. You might say that if the Tesla driver died from trusting the system too much, the Uber fatality arose from trusting the system too little.

But you’d be wrong. The Tesla’s forward-facing radar should have prevented the accident by seeing the barrier and slamming on the brakes, but Tesla’s algorithm places more weight on the cameras than on the radar. Why? For exactly the same reason that the Uber emergency-braking system was turned off: there are “too many” false positives, and the result is cars that brake needlessly far too often under normal driving conditions.

The crux of the self-driving problem at the moment is figuring out precisely when to slam on the brakes and when not to. Brake too often, and the passengers are annoyed or the car gets rear-ended. Brake too infrequently, and the consequences can be worse. Indeed, this is the central problem of autonomous vehicle safety, and neither Tesla nor Uber has it figured out yet.
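To make the trade-off concrete, here's a deliberately toy sketch in C — nothing like what Tesla or Uber actually run, and with made-up confidence scores — where a single threshold decides whether to brake. Lowering it trades missed obstacles for phantom braking, and raising it trades the other way:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy model only: one confidence score per event plus ground truth. */
typedef struct {
    double confidence;   /* how sure the perception stack is            */
    bool   real;         /* was there actually something to brake for?  */
} Detection;

static void evaluate(const Detection *d, size_t n, double threshold)
{
    int phantom_brakes   = 0;   /* false positives */
    int missed_obstacles = 0;   /* false negatives */

    for (size_t i = 0; i < n; i++) {
        bool brake = d[i].confidence >= threshold;
        if (brake && !d[i].real)  phantom_brakes++;
        if (!brake && d[i].real)  missed_obstacles++;
    }
    printf("threshold %.2f: %d phantom brakes, %d missed obstacles\n",
           threshold, phantom_brakes, missed_obstacles);
}

int main(void)
{
    /* Made-up scores: overpasses and shadows land mid-range,
     * and sometimes so does a real pedestrian.                */
    Detection events[] = {
        {0.30, false}, {0.40, false}, {0.55, false},
        {0.60, true},  {0.70, false}, {0.85, true},
    };
    size_t n = sizeof events / sizeof events[0];

    evaluate(events, n, 0.50);   /* cautious: 2 phantom brakes, 0 misses */
    evaluate(events, n, 0.80);   /* relaxed:  0 phantom brakes, 1 miss   */
    return 0;
}
```

Real perception stacks fuse radar, camera, and other sensor tracks rather than thresholding one number, but the tension is the same: every false positive tuned away buys a little more exposure to false negatives.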

Continue reading “Fatalities Vs False Positives: The Lessons From The Tesla And Uber Crashes”

Books You Should Read: Sunburst And Luminary, An Apollo Memoir

The most computationally intense part of an Apollo mission was the moon landing itself, requiring both real-time control and navigation of the Lunar Module (LM) through a sequence of programs known as the P60s. Data from radar, inertial navigation, and optical sightings taken by the LM commander himself were fed into the computer in what we’d today call ‘data fusion.’

The guy who wrote that code is Don Eyles, and the next best thing to actually hanging out with Don is to read his book. It reads as if you’re sitting across the table from him at a bar, listening to his incredible life story. It’s personal, hilarious, stressful, fascinating, and, more importantly for those of us who are fans of Hackaday, it’s relatable.

Continue reading “Books You Should Read: Sunburst And Luminary, An Apollo Memoir”

General Purpose I/O: How To Get More

The first program anyone writes for a microcontroller is the blinking LED, which involves toggling a general-purpose input/output (GPIO) pin on and off. Conveniently, the same GPIO can be used to read digital bits as well. A traditional microcontroller like the 8051 is available in DIP packages ranging from 20 pins to 40 pins. Some variants trade the number of GPIOs for compactness, while others offer more GPIOs at the cost of a part that is harder to fit into your design. In this article, we take a quick look at applications that require a large number of GPIOs and at traditional solutions to the problem.

A GPIO is a generic pin on an integrated circuit or computer board whose behavior, including whether it is an input or output pin, is controllable by the user at runtime. See the internal diagram of the GPIO circuit for the ATmega328 for reference.

Simply put, each GPIO has a latch connected to a transistor drive circuit for the output side and another latch for the input side. In the case of the ATmega328, there is a separate direction register as well, whereas on the 8051 the port latch does double duty: writing a 1 to a pin’s latch releases its quasi-bidirectional driver so the pin can also be read as an input.

The important thing to note here is that since all the circuits are on the same piece of silicon, the operations are relatively fast. Having all the latches and registers on the same bus means it takes just one instruction to write or read a byte from any GPIO register.
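As a minimal sketch of how thin that layer is — assuming an ATmega328 built with avr-gcc and the standard <avr/io.h> register names, with pin choices that are purely illustrative — configuring a pin, toggling it, and reading a neighbor each boil down to a single register access:

```c
#ifndef F_CPU
#define F_CPU 16000000UL           /* assumed clock; set to match your board  */
#endif

#include <avr/io.h>                /* DDRB, PORTB, PINB register definitions  */
#include <util/delay.h>            /* _delay_ms()                             */

int main(void)
{
    DDRB  |=  (1 << DDB5);         /* direction register: PB5 is an output    */
    DDRB  &= ~(1 << DDB4);         /* PB4 is an input                         */
    PORTB |=  (1 << PORTB4);       /* enable PB4's internal pull-up           */

    for (;;) {
        PORTB ^= (1 << PORTB5);    /* toggle PB5: one write to the port latch */
        if (PINB & (1 << PINB4)) { /* read PB4: one read of the pin register  */
            /* react to the input here */
        }
        _delay_ms(500);
    }
}
```

On Arduino Uno-style boards, PB5 happens to drive the onboard LED, which makes this the bare-metal cousin of the blinking-LED program mentioned above.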
Continue reading “General Purpose I/O: How To Get More”

Hair-Raising Tales Of Electrostatic Generators

We tend to think of electricity as part of the modern world. However, Thales of Miletus recorded information about static electricity around 585 BC. This Greek philosopher found that rubbing amber with fur would cause the amber to attract lightweight objects like feathers. Interestingly enough, a few hundred years later, the aeolipile — a crude steam engine sometimes called Hero’s engine — appeared. If the ancients had put the two ideas together, they could have invented the topic of this post: electrostatic generators. As far as we know, they didn’t.

It would be 1663 before Otto von Guericke experimented with a sulfur globe rubbed by hand. That led Isaac Newton to suggest glass globes, and other contributors added a host of improvements, ranging from a woolen pad to a collector electrode. By 1746, William Watson had a machine consisting of multiple glass globes, a sword, and a gun barrel. Continue reading “Hair-Raising Tales Of Electrostatic Generators”

Getting Good At FPGAs: Real World Pipelining

Parallelism is your friend when working with FPGAs. In fact, it’s often the biggest benefit of choosing an FPGA. The dragons hiding in programmable logic usually involve timing — chaining together numerous logic gates certainly affects clock timing. Earlier, I looked at how to split up logic to take better advantage of parallelism inside an FPGA. Now I’m going to walk through a practical example by modeling some functions. Using Verilog with some fake delays, we can show how it all works. You should follow along with a Verilog simulator; I’m using EDAPlayground, which runs in your browser. The code for this entire article has been pre-loaded into the simulator.

If you’re used to C syntax, chances are good you’ll be able to read simple Verilog. If you already use Verilog mostly for synthesis, you may not be familiar with using it to model delays. That’s important here because the delay through gates is what motivates us to break up a lot of gates into a pipeline to start with. You use delays in test benches, but in that context they mostly just cause the simulator to pause a bit before introducing more stimulus. So it makes sense to start with a bit of background on delays.

Continue reading “Getting Good At FPGAs: Real World Pipelining”

Hacking When It Counts: The Magnetron Goes To War

In 1940, England was in a dangerous predicament. The Nazi war machine had been sweeping across Europe for almost two years, claiming countries in a crescent from Norway to France and cutting off the island from the Continent. The Battle of Britain was raging in the skies above the English Channel and southern coast of the country, while the Blitz ravaged London with a nightly rain of bombs and terror. The entire country was mobilized, prepared for Hitler’s inevitable invasion force to sweep across the Channel and claim another victim.

We’ve seen before that no idea that could possibly help turn the tide was considered too risky or too wild to take a chance on. Indeed, many of the ideas that sprang from the fertile and desperate minds of British inventors went on to influence the course of the war in ways that could never have been predicted. But there was one invention that not only influenced the war but has a solid claim on being its key invention, one without which the outcome of the war almost certainly would have been far worse, and one that would become a critical technology of the post-war era, leading directly to innovations in communications, materials science, and beyond. And the risks taken to develop this idea, the cavity magnetron, and to field usable systems based on it are breathtaking in their scope and audacity. Here’s how the magnetron went to war.

Continue reading “Hacking When It Counts: The Magnetron Goes To War”

VCF East 2018: SDR On The Altair 8800

You’d be forgiven if you thought software-defined radio (SDR) was a relatively recent development. After all, few outside of hardcore amateur radio circles were even familiar with the concept until it was discovered that cheap USB TV tuners could be used as fairly decent receivers from a few hundred MHz all the way up into the GHz range. The advent of the RTL-SDR project in 2012 brought the cost of entry-level SDR hardware from hundreds of dollars to tens of dollars effectively overnight. Today there are more hackers cruising the airwaves via software trickery than ever before.

Continue reading “VCF East 2018: SDR On The Altair 8800”