33C3: If You Can’t Trust Your Computer, Who Can You Trust?

It’s a sign of the times: the first day of the 33rd Chaos Communication Congress (33C3) included two talks about making sure that your own computer isn’t being turned against you. The two talks were, respectively, practical and idealistic: one realizable today, the other still at the idea stage.

In the first talk, [Trammell Hudson] presented his Heads open-source firmware bootloader and minimal Linux for laptops and servers. The name is a gag: the Tails Linux distribution lets you operate without leaving any trace, while Heads lets you run a system that you can be reasonably sure is secure.

It uses coreboot, kexec, and QubesOS, cutting off BIOS-based hacking tools at the root. If you’re worried about sketchy BIOS rootkits, this is a solution. (And if you think that this is paranoia, you haven’t been following the news in the last few years, and probably need to watch this talk.) [Trammell]’s Heads distribution is a collection of the best tools currently available, and it’s something you can do now, although it’s not going to be easy.
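
To make the idea a bit more concrete, here is a loose Python sketch of the verify-then-kexec pattern a firmware payload like this can follow. It is not Heads’ actual code; the file paths, keyring, and manifest format below are hypothetical, and the real thing layers TPM measurements and considerably more care on top.

```python
# A loose sketch of the verify-then-kexec pattern, NOT Heads' actual code.
# Paths, keyring, and manifest layout are hypothetical.
import hashlib
import subprocess
import sys

KERNEL = "/boot/vmlinuz"
INITRD = "/boot/initrd.img"
MANIFEST = "/boot/sha256sums"          # "hash  path" lines covering the files above
MANIFEST_SIG = "/boot/sha256sums.sig"  # detached signature made with the owner's key

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# 1. Check the manifest's signature against a keyring the owner controls.
if subprocess.call(["gpgv", "--keyring", "/etc/owner-keyring.gpg",
                    MANIFEST_SIG, MANIFEST]) != 0:
    sys.exit("manifest signature check failed, refusing to boot")

# 2. Check that the kernel and initrd match the signed hashes.
expected = dict(line.split()[::-1] for line in open(MANIFEST) if line.strip())
for path in (KERNEL, INITRD):
    if sha256(path) != expected.get(path):
        sys.exit(path + " does not match its signed hash, refusing to boot")

# 3. Only then hand control to the real OS.
subprocess.check_call(["kexec", "-l", KERNEL, "--initrd=" + INITRD])
subprocess.check_call(["kexec", "-e"])
```

The point is that the code making this decision lives in flash under your control, below anything a compromised operating system can touch.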

Carrying out the ideas fleshed out in the second talk is even harder — in fact, impossible at the moment. But that’s not to say that it’s not a neat idea. [Jaseg] starts out with the premise that the CPU itself is not to be trusted. Again, this is sadly not so far-fetched these days. Non-open blobs of firmware abound, and if you’re really concerned with the privacy of your communications, you don’t want the CPU (or Intel’s management engine) to get its hands on your plaintext.

[Jaseg]’s solution is to interpose a device, probably made with a reasonably powerful FPGA and running open-source, inspectable code, between the CPU and the screen and keyboard. For critical text, like e-mail for example, the CPU will deal only in ciphertext. The FPGA, via graphics cues, will know which region of the screen is to be decrypted, and will send the plaintext out to the screen directly. Unless someone’s physically between the FPGA and your screen or keyboard, this should be unsniffable.

As with all early-stage ideas, the devil will be in the details here. It’s not yet worked out how to know when keystrokes need to be encrypted before being passed on to the CPU, for instance. But the idea is very interesting, and places the trust boundary about as close to the user as possible, at input and output.
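
To make the plaintext boundary concrete, here is a toy Python model of the idea. The class and method names are invented and this is not [Jaseg]’s design; a real system would use end-to-end public-key crypto rather than a single symmetric key, but the sketch shows where plaintext is and isn’t allowed to exist.

```python
# Toy model of an interposer that holds the only key; not the actual design.
from cryptography.fernet import Fernet

class Interposer:
    """Sits between the CPU and the screen/keyboard; the only place the key lives."""
    def __init__(self):
        self._box = Fernet(Fernet.generate_key())

    def render(self, framebuffer, region, ciphertext):
        # The CPU tags a screen region as encrypted; the interposer decrypts it
        # and paints the plaintext directly into the outgoing video signal.
        framebuffer[region] = self._box.decrypt(ciphertext).decode()

    def key_input(self, keystrokes):
        # Keystrokes destined for a secure field are encrypted before the CPU
        # ever sees them; the CPU only relays opaque blobs.
        return self._box.encrypt(keystrokes.encode())

# The untrusted CPU side only ever handles ciphertext:
interposer = Interposer()
blob = interposer.key_input("my secret draft")   # what the CPU stores and sends
screen = {}
interposer.render(screen, region="email_body", ciphertext=blob)
print(screen["email_body"])                      # plaintext exists only past the interposer
```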

The Story of Kickstarting the OpenMV

Robots are the ‘it’ thing right now, computer vision is a hot topic, and microcontrollers have never been faster. These facts lead inexorably to the OpenMV, an embedded computer vision module that bills itself as the ‘Arduino of Machine Vision.’

The original OpenMV was an entry for the first Hackaday Prize, and since then the project has had a lot of success. There are tons of followers, plenty of users, and the project even had a successful Kickstarter. That last bit of info is fairly contentious — while the Kickstarter did meet the minimum funding level, there were a lot of problems bringing this very cool product to market. Issues with suppliers and community management were the biggest problems, but the team behind OpenMV eventually pulled it off.

At the 2016 Hackaday SuperConference, Kwabena Agyeman, one of the project leads for the OpenMV, told the story about bringing the OpenMV to market:

Continue reading “The Story of Kickstarting the OpenMV”

33C3: Understanding Mobile Messaging and its Security

If you had to explain why you use one mobile messaging service over another to your grandmother, would you be able to? Does she even care about forward secrecy, or know the difference between a private and a public key? Maybe she would if she understood the issues in relation to “normal” human experiences: holding secret discussions behind closed doors and sending letters wrapped in envelopes.

Or maybe your grandmother is the type who’d like to completely re-implement the messaging service herself, open source and verifiably secure. Whichever grandma you’ve got, she should watch [Roland Schilling] and [Frieder Steinmetz]’s talk, where they give a great introduction to what you might want out of a secure messaging system and then review what they found while tearing apart Threema, a mobile messaging service that’s popular in Germany. Check out the slides (PDF). And if that’s not enough, they provided the code to back it up: an open workalike of the messaging service itself.

This talk makes a great introduction, by counterexample, to the way that other messaging applications work. The messaging service is always in the middle of a discussion, and whether they’re collecting metadata about you and your conversations to use for their own marketing purposes (“Hiya, WhatsApp!”) or not, it’s good to see how a counterexample could function.

The best quote from the talk? “Cryptography is rarely, if ever, the solution to a security problem. Cryptography is a translation mechanism, usually converting a communications security problem into a key management problem.” Any channel can be made secure if all parties have enough key material. The implementation details of getting those keys around, making sure that the right people have the right keys, and so on, are the details in which the devil lives. But these details matter, and as mobile messaging is a part of everyday life, it’s important that the workings are transparently presented to the users. This talk does a great job on the demystification front.
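
As a minimal sketch of that quote, here is what the “translation” looks like using the pyca/cryptography library: an X25519 exchange plus a KDF turns two keypairs into a shared session key, and from there encrypting the message itself is routine. Nothing in the snippet tells you whether the public key you received really belongs to grandma; that is exactly the key management problem the quote is pointing at.

```python
# Minimal demo: key agreement turns "secure channel" into "key management".
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
import os

# Each party generates a keypair and publishes the public half.
alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()

def session_key(my_priv, their_pub):
    # Both sides derive the same shared secret from their own private key and
    # the other side's public key; distributing those public keys honestly is
    # the key management problem.
    shared = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo messaging session").derive(shared)

k_alice = session_key(alice_priv, bob_priv.public_key())
k_bob = session_key(bob_priv, alice_priv.public_key())
assert k_alice == k_bob

# With a shared key in hand, encrypting the actual message is the easy part.
nonce = os.urandom(12)
ct = ChaCha20Poly1305(k_alice).encrypt(nonce, b"meet me behind the closed door", None)
print(ChaCha20Poly1305(k_bob).decrypt(nonce, ct, None))
```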

33C3: Breaking IoT Locks

Fast-forward to the end of the talk, and you’ll hear someone in the audience ask [Ray] “Are there any Bluetooth locks that you can recommend?” and he gets to answer “nope, not really.” (If this counts as a spoiler for a talk about the security of three IoT locks at a hacker conference, you need to get out more.)

Unlocking a padlock with your cellphone isn’t as crazy as it sounds. The promise of Internet-enabled locks is that they can allow people one-time use or limited access to physical spaces, as easily as sending them an e-mail. Unfortunately, it also opens up additional attack surfaces. Lock making goes from being a skill that involves clever mechanical design and metallurgy to one that involves encryption and secure protocols.

In this fun talk, [Ray] looks at three “IoT” locks. One, he throws out on mechanical grounds once he’s gotten it open — it’s a $100 lock that’s as easily shimmable as that $4 padlock on your gym locker. The other, a Master lock, has a new version of a 2012 vulnerability that [Ray] pointed out to Master: if you move a magnet around the outside of the lock, it actuates the motor within, unlocking it. The third, made by Kickstarter company Noke, was at least physically secure, but fell prey to an insecure key exchange protocol.
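
To see why the key exchange is where these things fall down, here is a deliberately broken toy scheme, invented for illustration and not the actual Noke protocol: if the unlock token is derived only from values that travel over the air in the clear, an eavesdropper can compute it just as easily as the owner’s phone can.

```python
# Toy illustration of a broken key exchange; NOT any specific lock's protocol.
import hashlib

def derive_unlock_token(serial: str, challenge: bytes) -> bytes:
    # Hypothetical scheme: token = SHA256(serial || challenge). Both inputs are
    # visible to anyone sniffing the Bluetooth traffic, so there is no secret.
    return hashlib.sha256(serial.encode() + challenge).digest()

# What the legitimate phone app sends...
token = derive_unlock_token("LOCK-0042", b"\x01\x02\x03\x04")

# ...is exactly what an eavesdropper who captured the same serial and challenge
# can reproduce. A sound design mixes in a secret only the owner's phone and
# the lock share, established out of band.
sniffed = derive_unlock_token("LOCK-0042", b"\x01\x02\x03\x04")
assert sniffed == token
```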

Along the way, you’ll get some advice on how to quickly and easily audit your own IoT devices. That’s worth the price of admission even if you like your keys made out of metal instead of bits. And one of the more refreshing points, given the hype of some IoT security talks these days, was the nuanced approach that [Ray] took toward what counts as a security problem because it’s exploitable by someone else, rather than vectors that are only “exploitable” by the device’s owner. We like to think of those as customization options.

Open Source Art Encourages Society to Think Inclusively

Kate Reed has a vision for elevating the less talked about parts of ourselves, and of society. Through her art, she wants people to think about a part of themselves that makes them feel invisible, and to anonymously share that with the community around them. The mechanism for this is Invisible, a campaign to place translucent sculptures in public places around the world. The approach that she has taken to the project is very interesting — she’s giving the art away to empower the campaign. Check out her talk from the Hackaday SuperConference.

Continue reading “Open Source Art Encourages Society to Think Inclusively”

Ken Shirriff Takes Us Inside the IC, For Fun

[Ken Shirriff] has seen the insides of more integrated circuits than most people have seen bellybuttons. (This is an exaggeration.) But the point is, where we see a crazy jumble of circuitry, [Ken] sees a riddle to be solved, and he’s got a method that guides him through the madness.

In his talk at the 2016 Hackaday SuperConference, [Ken] stepped the audience through a number of famous chips, showing how he approaches them and how you could do the same if you wanted to, or needed to. Reading an IC from a photo is not for the faint of heart, but with a little perseverance, it can give you the keys to the kingdom. We’re stoked that [Ken] shared his methods with us, and gave us some deeper insight into a handful of classic silicon, from the Z80 processor to the 555 timer and LM7805 voltage regulator, and beyond.

Continue reading “Ken Shirriff Takes Us Inside the IC, For Fun”

33C3: Chris Gerlinsky Cracks Pay TV

People who have incredible competence in a wide range of fields are rare, and they can make their work appear deceptively simple when they present it. [Chris Gerlinsky]’s talk on breaking the encryption used on satellite and cable pay TV set-top boxes was like that. (Download the slides as a PDF.) The end result of his work is that he gets to watch anything on pay TV, but getting to watch free wrestling matches is hardly the point of an epic hack like this.

The talk spans hardware reverse engineering of the set-top box itself, chip decapping, visual ROM recovery, software reverse analysis, chip glitching, creation of custom glitching hardware, several levels of crypto, and a lot of very educated guessing. Along the way, you’ll learn everything there is to know about how broadcast streams are encrypted and delivered. Watch this talk now.

Some of the coolest bits:

  • Reading out the mask ROM just by looking at it under a microscope never fails to amaze us.
  • A custom chip-glitcher rig was built, and is shown in a few iterations, finally ending up in a “fancy” project box. But it’s the kind of thing you could build at home: a microcontroller controlling a switch on a breadboard (see the sketch after this list).
  • The encoder chip stores its memory in RAM: [Chris] uses a beautiful home-brew method of connecting the power pins up to a battery before desoldering the chip from the board for further analysis, so its contents survive the trip.
  • The chip runs entirely in RAM, forcing [Chris] to re-glitch the chip and insert his payload code every time it resets. And it resets a lot, because the designers added reset vectors between the bytes of the desired keys. Very sneaky.
  • All of this was done by sacrificing only one truckload of set-top boxes.
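
For the curious, here is a minimal sketch of the control loop such a breadboard glitcher runs, written as MicroPython. The pin names and timings are invented, and a real rig needs far tighter, cycle-accurate timing than an interpreted loop can provide; the point is only the shape of the attack: reset the target, wait a tuned delay, briefly disturb its supply, and sweep those parameters until something useful breaks.

```python
# A MicroPython-flavored sketch of a voltage-glitcher control loop.
# Pin names and timings are invented; real glitching needs much tighter timing.
from machine import Pin
import time

glitch = Pin("GLITCH", Pin.OUT, value=0)  # drives the transistor that dips the target's VCC
reset = Pin("RESET", Pin.OUT, value=1)    # holds the target out of reset

def attempt(delay_us, width_us):
    """Reset the target, wait delay_us, then disturb its supply for width_us."""
    reset.value(0)
    time.sleep_ms(10)
    reset.value(1)              # target starts booting
    time.sleep_us(delay_us)     # wait until just before the instruction to corrupt
    glitch.value(1)             # momentarily dip the supply
    time.sleep_us(width_us)
    glitch.value(0)

# Sweep delay and pulse width until the target misbehaves in a useful way.
for delay in range(100, 2000, 10):
    for width in range(1, 20):
        attempt(delay, width)
```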

Our jaw dropped repeatedly during this presentation. Go watch it now.