The Predictability Problem with Self-Driving Cars

A law professor and an engineering professor walk into a bar. What comes out is a nuanced article on a downside of autonomous cars, and how to deal with it. The short version of their paper: self-driving cars need to be more predictable to humans in order to coexist.

We share living space with a lot of machines. A good number of them are mobile and dangerous but under complete human control: the car, for instance. When we want to know what another car at an intersection is going to do, we think about the driver of the car, and maybe even make eye contact to see that they see us. We then think about what we’d do in their place, and the traffic situation gets negotiated accordingly.

When its self-driving car got into an accident in February, Google replied that “our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.” Apparently, so did the car, right before it drove out in front of an oncoming bus. The bus driver didn’t expect the car to pull (slowly) into its lane, either.

All of the other self-driving car accidents to date have been the fault of other drivers, and the authors think this is telling. If you brake unexpectedly all the time, you can expect to get hit from behind eventually. If people can’t read your car’s AI’s mind, you’re gonna get your fender bent.

The paper’s solution is to make autonomous vehicles more predictable, and the authors mention a number of obvious approaches, from “I-sense-you” lights to inter-car communication. But then there are aspects we hadn’t thought about: specific markings that indicate the AI’s capabilities, for instance. A cyclist signaling a left turn would really like to know whether the car behind has the new bicyclist-handsignal-recognition upgrade before entering the lane. The ability to put your mind into the mind of the other car is crucial, and it requires a ton of information about that driver.

All of this may require legislation. Intent and what all parties to an accident “should have known” are used in court to apportion blame, in addition to the black-and-white of the law. When one of the parties is an AI, this gets murkier: how are you supposed to know what the algorithm should have been thinking? This is far from a solved problem, and it’s becoming more relevant.

We’ve written on the ethics of self-driving cars before, but simply in terms of their decision-making ability. This paper brings home the idea that we also need to be able to understand what they’re thinking, which is as much a human-interaction and legal problem as it is technological.

[Headline image: Google Self-Driving Car Project]

RFID Lock Keeps Your Bike Safe

What do you do with an RFID chip implanted in your body? If you are [gmendez3], you build a bike lock that responds to your chip. The prototype uses MDF to create a rear wheel immobilizer. However, [gmendez3] plans on building a version using aluminum.

For the electronics, of course, there’s an Arduino. There’s also an RC522 RFID reader. We couldn’t help but think of the Keyduino for this application. When the system is locked, the Arduino drives a servo to engage the immobilizer. To free your rear wheel, simply read your implanted chip. The Arduino then commands the servo to disengage the immobilizer. You can see the system in operation in the video below.
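The basic loop is simple: poll the reader, compare the tag’s UID against the one you expect, and swing the servo. We haven’t seen [gmendez3]’s firmware, so what follows is only a minimal sketch of that flow, assuming the common MFRC522 Arduino library; the pins, servo angles, and UID below are placeholders.

```cpp
// Minimal sketch of an RFID-triggered wheel immobilizer.
// Not [gmendez3]'s code -- pin numbers, angles, and the UID are placeholders.
#include <SPI.h>
#include <MFRC522.h>
#include <Servo.h>

const uint8_t RST_PIN   = 9;   // RC522 reset pin
const uint8_t SS_PIN    = 10;  // RC522 SPI slave select
const uint8_t SERVO_PIN = 6;   // servo driving the immobilizer

const uint8_t LOCKED_ANGLE   = 90;  // servo position that blocks the rear wheel
const uint8_t UNLOCKED_ANGLE = 0;   // servo position that frees it

// UID of the implanted chip (placeholder -- read yours once and paste it in).
const uint8_t MY_UID[] = {0xDE, 0xAD, 0xBE, 0xEF};

MFRC522 rfid(SS_PIN, RST_PIN);
Servo lockServo;
bool locked = true;

bool uidMatches(const MFRC522::Uid &uid) {
  if (uid.size != sizeof(MY_UID)) return false;
  for (uint8_t i = 0; i < uid.size; i++) {
    if (uid.uidByte[i] != MY_UID[i]) return false;
  }
  return true;
}

void setup() {
  SPI.begin();
  rfid.PCD_Init();
  lockServo.attach(SERVO_PIN);
  lockServo.write(LOCKED_ANGLE);  // start with the wheel immobilized
}

void loop() {
  if (!rfid.PICC_IsNewCardPresent() || !rfid.PICC_ReadCardSerial()) return;

  if (uidMatches(rfid.uid)) {
    locked = !locked;  // toggle between locked and unlocked
    lockServo.write(locked ? LOCKED_ANGLE : UNLOCKED_ANGLE);
    delay(1000);       // crude debounce so one tap doesn't toggle twice
  }
  rfid.PICC_HaltA();   // stop talking to this tag until it's presented again
}
```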

Continue reading “RFID Lock Keeps Your Bike Safe”

Mrs. Penny’s Driving School — Hardware Workshop in Dallas

In case you haven’t noticed, the Hackaday community is making more of an effort to be a community AFK. We’re at VCF East this weekend, Hackaday World Create Day is quickly approaching, Hackaday | Belgrade was just a few days ago, and Hackaday Toronto is next week, to name a few events in close proximity to this post.

As promised, or threatened, depending on which end of the stick you’re on, I will be teaching an electronics class at the Dallas Makerspace every 3rd Saturday of the month. The goal of these classes is to help you overcome the barrier between having a hardware idea and having that hardware in your hand. I’m not an expert in PCB design or layout, but I’ve found more ways to do it wrong than I’d probably admit to, and this is my way of sharing what I’ve painfully learned through trial and error. At the time of writing there are still a few spots available in the first class; follow the above link for tickets.

Images of my failed hopes and dreams wonderfully captured courtesy of [Krissy Heishman]

Class 1

In our first six-hour session we’ll take a basic, high-level idea and work our way down. For example: our first project will be an AVR development board. This is something common enough that everyone will know what it is (an Arduino is an AVR development board, just in case my mom is reading this). We won’t be making an Arduino clone part-for-part, but taking the Arduino idea and making our own custom board. Maybe we add some terminal blocks instead of DuPont headers, or perhaps we want a real-time clock and a slide potentiometer on the board. We can do that if we want; you can’t stop us.

So class number 1 is a crash course in Eagle schematic capture and PCB layout. Since this is only six hours of class time, and we need to have boards and parts ordered by the time we leave, we won’t be getting too complicated with our design.

Class 2

By the time we meet for our second session we should have taken delivery of our shiny new PCBs, and our parts order should have long since arrived from the distributor (Mouser is roughly an hour’s drive from the Dallas Makerspace; not that we’ll pick the parts up at will-call for this project, but it’s nice to have the option). We will spend the second six-hour session assembling and testing our boards. If we need to make changes to our boards, we can talk about that as part of the design process. Depending on how long assembly takes, we can brainstorm some ideas for the next round of Mrs. Penny’s Driving School classes, which will continue on the following 3rd Saturday of the month.

What’s a Piezo Optomechanical Circuit?

Ever hear of a piezo-optomechanical circuit? We hadn’t either. Let’s break it down. Piezo implies a transducer that converts between mechanical motion and electrical energy. Opto implies light. Mechanical implies…well, mechanics. The device, from the National Institute of Standards and Technology (NIST), converts signals among optical, acoustic, and radio waves. NIST claims a system based on this design could move and store information in future computers.

At the heart of this circuit is an optomechanical cavity, in the form of a suspended nanoscale beam. Within the beam is a series of holes that act as mirrors for very specific photons. The photons bounce back and forth thousands of times before escaping the cavity. Simultaneously, the nanoscale beam confines phonons, that is, mechanical vibrations. The photons and phonons exchange energy: vibrations of the beam influence the buildup of photons, and the photons influence the mechanical vibrations. The strength of this mutual interaction, or coupling, is one of the largest reported for an optomechanical system.
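If you want a symbol to hang that “coupling” on: the standard cavity-optomechanics model (the textbook picture, not anything specific to NIST’s device) writes the photon-phonon interaction as

H_int = ħ g₀ â†â (b̂ + b̂†)

where â and b̂ are the photon and phonon operators and g₀ is the single-photon coupling rate. The larger g₀, the faster light and vibration trade energy inside the cavity.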

In addition to the cavities, the device includes acoustic waveguides. By channeling phonons into the optomechanical cavity, the waveguides can drive the motion of the nanoscale beam directly and thus change the properties of the light trapped in the device. An “interdigitated transducer” (IDT), a type of piezoelectric transducer like those used in surface acoustic wave devices, links radio-frequency electromagnetic waves, light, and acoustic waves.

The work appeared in Nature Photonics and was also the subject of a presentation at the March 2016 meeting of the American Physical Society. We’ve covered piezo transducers before, and while we’ve seen some unusual uses, we’ve never covered anything this exotic.

Stealing 3D Prints By Sound

In the open hardware world, we like to share 3D design files so that our friends and (global) neighbors can use and improve them. But we’ve all printed things from time to time that we’d like to keep secret. At least this is the premise behind this article in Science, which proposes a novel method of 3D-printer-based industrial espionage: recording the sound of the stepper motors and re-creating the toolpath from it.

Unfortunately, the article is behind a paywall so we’re short on the details, but everyone who’s played the Imperial March on their steppers has probably got the basic outline in their mind. Detecting the audio peak corresponding to a step pulse should be fairly easy. Disentangling the motions of two axes would be a bit harder, but presumably can be done based on different room-acoustic filtering of the two motors. Direction is the biggest question mark for us, but a stepper probably has a slightly audible glitch when reversing. Keeping track of these reversals could do the trick.
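To be clear, the following is pure speculation on our part and not the researchers’ method: a minimal sketch of the very first step we’d try, counting step pulses in a mono recording of a single motor by thresholding with a dead time. The threshold and dead-time values are made up.

```cpp
// Speculative step-pulse counter: count samples that exceed a threshold,
// with a dead time so one step's ringing isn't counted more than once.
#include <cstddef>
#include <cstdio>
#include <cmath>
#include <vector>

std::size_t countStepPulses(const std::vector<float>& samples,
                            float threshold,             // made-up tuning knob
                            std::size_t deadTimeSamples) // made-up tuning knob
{
    std::size_t steps = 0;
    std::size_t lastStep = 0;
    bool seenStep = false;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        if (std::fabs(samples[i]) > threshold &&
            (!seenStep || i - lastStep > deadTimeSamples)) {
            ++steps;
            lastStep = i;
            seenStep = true;
        }
    }
    return steps;
}

int main() {
    // Fake "recording": quiet background with three loud clicks in it.
    std::vector<float> samples(48000, 0.01f);
    samples[1000] = samples[9000] = samples[30000] = 0.8f;
    std::printf("step pulses detected: %zu\n",
                countStepPulses(samples, 0.5f, 200));
    return 0;
}
```

Turning a step count into a toolpath would still mean separating the axes and catching the reversals, which is presumably where the clever part lives.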

What do you think? Anyone know how they did it? Does someone with access to the full article want to write us up a summary in the comments?

[Thanks LVfire via Ars Technica]

[Edit: We were sent a copy of the full article (thanks [PersonUnknown]!) and it doesn’t explain any technical details at all. Save yourself the effort, and have fun speculating, because reading the article won’t help.]

A $1000 Tiny Personal Satellite

If you ever read any old magazines, you might be surprised at how inexpensive things used to be. A U.S. postage stamp was six cents, a gallon of gas was $0.34, and a gallon of milk was $1.07. Everything is relative, though. The average household income back then was under $8,000 a year (compared to over $53,000 a year in 2014). So as a percentage of income, that milk actually cost the equivalent of about seven bucks.
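The back-of-the-envelope math, scaling the old price by the ratio of incomes:

$1.07 × ($53,000 / $8,000) ≈ $1.07 × 6.6 ≈ $7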

The same is true of getting into orbit. The typical cost just to get something into orbit has gone from astronomical (no pun intended) to pretty reasonable. Lifting a pound of mass on the Space Shuttle cost about $10,000. On an Atlas V, it’s about $6,000. A Falcon Heavy (when it launches) should drop the cost to around $1,000. Of course, that’s just the launch cost. You still have to pay for whatever you want to put up there. Developing a satellite can be expensive. Very expensive.

Continue reading “A $1000 Tiny Personal Satellite”

Apple Aftermath: Senate Entertains A New Encryption Bill

If you recall, there was a recent standoff between Apple and the U.S. government over unlocking an iPhone. Senators Richard Burr and Dianne Feinstein have a “discussion draft” of a bill that appears to require companies to decrypt data whenever a court orders it.

Here at Hackaday, we aren’t lawyers, so maybe we aren’t the best source of legislative commentary. However, on the face of it, this seems a bit overreaching. The first part of the proposed bill is simple enough: any “covered entity” that receives a court order for information must provide it in intelligible form or provide the technical assistance necessary to get the information into intelligible form. The problem, of course, is: what if you can’t? A covered entity, by the way, is anyone from a manufacturer to a software developer, a communications service, or a provider of remote computing or storage.

There are dozens of services (backup comes to mind) where only you have the decryption keys, and there is nothing reasonable the provider can do to get your data back if you lose them. That’s actually a selling point for these services. You might not be so eager to back up your hard drive if you knew the vendor could browse your data whenever they wanted.

The proposed bill has some other issues, too. One section states that nothing in the document is meant to require or prohibit a specific design or operating system. However, another clause requires that covered entities provide products and services that are capable of complying with the rule.

A broad reading of this is troubling. If this were law, entire systems that don’t allow the provider or vendor to decrypt your data could be illegal in the U.S. Whole classes of cybersecurity techniques could become illegal, too. For example, many cryptographic systems provide forward secrecy by generating session keys that are never recorded. Consider an SSH session: if someone learns your SSH key, they can listen in on or interfere with your future SSH sessions, but they can’t take recordings of your previous sessions and decode them. The mechanism is a little different between SSHv1 (which you shouldn’t be using) and SSHv2. If you are interested in the gory details for SSHv2, have a look at section 9.3.7 of RFC 4251.
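To make the forward-secrecy point concrete, here’s a toy ephemeral Diffie-Hellman exchange. This is in the spirit of what SSH does but absolutely not how SSH implements it; the group parameters are tiny demo values of our own choosing, and it only illustrates why there may be nothing left to hand over once a session ends.

```cpp
// Toy ephemeral Diffie-Hellman: the session key is derived from secrets that
// exist only for the duration of the session. The numbers are far too small
// for real cryptography -- this is an illustration, not an implementation.
#include <cstdint>
#include <cstdio>
#include <random>

// Modular exponentiation, (base^exp) mod m, sized so 64-bit math never overflows.
static std::uint64_t powMod(std::uint64_t base, std::uint64_t exp, std::uint64_t m) {
    std::uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main() {
    const std::uint64_t p = 2147483647;  // a small prime (real systems use huge ones)
    const std::uint64_t g = 5;           // demo base

    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<std::uint64_t> pick(2, p - 2);

    // Each side generates a fresh, ephemeral secret for THIS session only.
    std::uint64_t a = pick(rng);        // client's ephemeral secret
    std::uint64_t b = pick(rng);        // server's ephemeral secret
    std::uint64_t A = powMod(g, a, p);  // sent over the wire
    std::uint64_t B = powMod(g, b, p);  // sent over the wire

    // Both sides derive the same session key; an eavesdropper only ever sees A and B.
    std::uint64_t clientKey = powMod(B, a, p);
    std::uint64_t serverKey = powMod(A, b, p);
    std::printf("session keys match: %s\n", clientKey == serverKey ? "yes" : "no");

    // Forward secrecy: once the session ends, a and b are erased, and with them
    // any way to re-derive the session key from a recording of the traffic.
    a = b = 0;
    return 0;
}
```

Once those ephemeral secrets are gone, a recording of the encrypted traffic is just noise, no matter what a court later orders anyone to produce.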

In all fairness, this isn’t a bill yet. It is a draft, and given some of the definitions in section 4, perhaps they plan to expand it so that it makes more sense, or at least is more practical. If not, then it seems to be an indication that we need legislators who understand our increasingly technical world and how the new economy works. After all, we’ve seen this before, right? Many countries are all too happy to enact and enforce tight banking privacy laws to encourage deposits from people who want to hide their money. What makes you think that if the U.S. weakens the ability of domestic companies to keep data private, the business of concealing data won’t just move offshore, too?

If you were living under a rock and missed the whole Apple and FBI controversy, [Elliot] can catch you up. Or, you can see what [Brian] thought about Apple’s response to the FBI’s demand.