33C3: If You Can’t Trust Your Computer, Who Can You Trust?

It’s a sign of the times: the first day of the 33rd Chaos Communication Congress (33C3) included two talks related to ensuring that your own computer isn’t being turned against you. The first talk is practical and realizable today; the second is idealistic and still at the idea stage.

In the first talk, [Trammell Hudson] presented his Heads open-source firmware bootloader and minimal Linux for laptops and servers. The name is a gag: the Tails Linux distribution lets you operate without leaving any trace, while Heads lets you run a system that you can be reasonably sure is secure.

It uses coreboot, kexec, and QubesOS, cutting off BIOS-based hacking tools at the root. If you’re worried about sketchy BIOS rootkits, this is a solution. (And if you think that this is paranoia, you haven’t been following the news in the last few years, and probably need to watch this talk.) [Trammell]’s Heads distribution is a collection of the best tools currently available, and it’s something you can do now, although it’s not going to be easy.

Carrying out the ideas fleshed out in the second talk is even harder — in fact, impossible at the moment. But that’s not to say that it’s not a neat idea. [Jaseg] starts out with the premise that the CPU itself is not to be trusted. Again, this is sadly not so far-fetched these days. Non-open blobs of firmware abound, and if you’re really concerned with the privacy of your communications, you don’t want the CPU (or Intel’s management engine) to get its hands on your plaintext.

[Jaseg]’s solution is to interpose a device, probably made with a reasonably powerful FPGA and running open-source, inspectable code, between the CPU and the screen and keyboard. For critical text, such as e-mail, the CPU will deal only in ciphertext. The FPGA, via graphics cues, will know which region of the screen is to be decrypted, and will send the plaintext out to the screen directly. Unless someone’s physically between the FPGA and your screen or keyboard, this should be unsniffable.
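
To make the split concrete, here’s a minimal sketch in Python — purely our own illustration, assuming a symmetric key shared between your correspondent and the interposer, with function names and the use of Fernet invented for the example rather than taken from the talk. The point is that the CPU-side code only ever handles ciphertext, and plaintext first appears on the trusted side of the boundary, right before the display.

```python
# Conceptual sketch only: the untrusted "CPU side" handles ciphertext,
# while the trusted interposer (the FPGA in [Jaseg]'s proposal) holds the
# key and produces plaintext just before the screen. Function names and
# the use of Fernet are our own assumptions, not the talk's protocol.
from cryptography.fernet import Fernet

# Shared out of band with your correspondent; never handed to the CPU.
key = Fernet.generate_key()

def untrusted_cpu_compose(ciphertext: bytes) -> dict:
    """The CPU lays out the screen but only ever sees ciphertext."""
    return {"region": (10, 10, 400, 60), "payload": ciphertext}

def trusted_interposer_render(frame: dict) -> str:
    """The interposer decrypts the flagged region on its way to the display."""
    return Fernet(key).decrypt(frame["payload"]).decode()

# Your correspondent encrypts with the shared key; the CPU just shuffles bytes.
ciphertext = Fernet(key).encrypt(b"meet at the usual place")
frame = untrusted_cpu_compose(ciphertext)
print(trusted_interposer_render(frame))  # plaintext exists only past the boundary
```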

As with all early-stage ideas, the devil will be in the details here. It’s not yet worked out, for instance, how the device will know when keyboard input needs to be encrypted before the keystrokes are passed on to the CPU. But the idea is very interesting, and it places the trust boundary about as close to the user as possible, at input and output.

33C3: Understanding Mobile Messaging And Its Security

If you had to explain to your grandmother why you use one mobile messaging service over another, would you be able to? Does she even care about forward secrecy, or know the difference between a private and a public key? Maybe she would if she understood the issues in relation to “normal” human experiences: holding secret discussions behind closed doors and sending letters wrapped in envelopes.

Or maybe your grandmother is the type who’d like to completely re-implement the messaging service herself, open source and verifiably secure. Whichever grandma you’ve got, she should watch [Roland Schilling] and [Frieder Steinmetz]’s talk, where they give a great introduction to what you might want out of a secure messaging system, and then review what they found while tearing apart Threema, a mobile messaging service that’s popular in Germany. Check out the slides (PDF). And if that’s not enough, they provided the code to back it up: an open workalike of the messaging service itself.

This talk makes a great introduction, by counterexample, to the way that other messaging applications work. The messaging service always sits in the middle of a conversation, and whether or not it’s collecting metadata about you and your conversations for its own marketing purposes (“Hiya, WhatsApp!”), it’s good to see how a counterexample could function.

The best quote from the talk? “Cryptography is rarely, if ever, the solution to a security problem. Cryptography is a translation mechanism, usually converting a communications security problem into a key management problem.” Any channel can be made secure if all parties have enough key material. The implementation details of getting those keys around, making sure that the right people have the right keys, and so on, are the details in which the devil lives. But these details matter, and as mobile messaging is a part of everyday life, it’s important that the workings are transparently presented to the users. This talk does a great job on the demystification front.
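
To see what that quote means in practice, here’s a toy sketch in Python, assuming nothing more than a shared one-time pad; the names are ours and this has nothing to do with Threema’s actual protocol. Encrypting the channel is one line — getting the pad to both parties without anyone else seeing it is the whole problem.

```python
# Toy illustration of the quote above, not Threema's actual protocol:
# with enough shared random key material (a one-time pad), securing the
# channel is trivial -- moving that pad around is the real problem.
import secrets

def xor(data: bytes, pad: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, pad))

message = b"grandma's secret recipe"
pad = secrets.token_bytes(len(message))  # somehow both parties must get this

ciphertext = xor(message, pad)           # encryption: the easy part
assert xor(ciphertext, pad) == message   # decryption works only with the pad
```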

33C3: Breaking IoT Locks

Fast-forward to the end of the talk, and you’ll hear someone in the audience ask [Ray] “Are there any Bluetooth locks that you can recommend?” and he gets to answer “nope, not really.” (If this counts as a spoiler for a talk about the security of three IoT locks at a hacker conference, you need to get out more.)

Unlocking a padlock with your cellphone isn’t as crazy as it sounds. The promise of Internet-enabled locks is that they can allow people one-time use or limited access to physical spaces, as easily as sending them an e-mail. Unfortunately, it also opens up additional attack surfaces. Lock making goes from being a skill of clever mechanical design and metallurgy to one of encryption and secure protocols.

In this fun talk, [Ray] looks at three “IoT” locks. One, he throws out on mechanical grounds once he’s gotten it open: it’s a $100 lock that’s as easily shimmable as that $4 padlock on your gym locker. The second, a Master lock, has a new version of a 2012 vulnerability that [Ray] pointed out to Master: if you move a magnet around the outside of the lock, it actuates the motor within, unlocking it. The third, made by Kickstarter company Noke, was at least physically secure, but fell prey to an insecure key exchange protocol.
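
As a purely hypothetical illustration of what “insecure key exchange” can look like — and not necessarily the specific flaw [Ray] found in the Noke — consider a lock that derives its session key from values it broadcasts anyway. Anyone sniffing the Bluetooth traffic can compute exactly the same key.

```python
# Hypothetical example of a broken key exchange, NOT the specific Noke
# flaw from the talk: if the session key is derived from values the lock
# broadcasts anyway, a passive sniffer can derive the very same key.
import hashlib

def derive_key(mac: str, counter: int) -> bytes:
    # Both lock and app compute this -- but every input travels in the clear.
    return hashlib.sha256(f"{mac}:{counter}".encode()).digest()

lock_key = derive_key("AA:BB:CC:DD:EE:FF", 42)
sniffer_key = derive_key("AA:BB:CC:DD:EE:FF", 42)  # read from the same packets
assert lock_key == sniffer_key
```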

Along the way, you’ll get some advice on how to quickly and easily audit your own IoT devices. That’s worth the price of admission even if you like your keys made out of metal instead of bits. And one of the more refreshing points, given the hype of some IoT security talks these days, was the nuanced approach that [Ray] took toward what counts as a security problem because it’s exploitable by someone else, rather than vectors that are only “exploitable” by the device’s owner. We like to think of those as customization options.

33C3 Starts Tomorrow: We Won’t Be Sleeping For Four Days

Possibly the greatest hacker show on Earth, the 33rd annual Chaos Communication Congress (33C3) begins Tuesday morning in Hamburg, Germany. And Hackaday will be there! Contributing Editor [Elliot Williams] is taking the night train up and will be trying to take it all in for you. The schedule looks tremendous.

If you can’t make it, don’t fret. There will be live streaming, and the talks are usually available in preliminary edit for viewing or download just a few minutes after they finish. It’s even cooler to watch the talks with friends, though. Every hackerspace with a video projector could be playing along, live or after the fact. Pick some cool talks and have a “movie night”.

If you’re going to be in Hamburg and you want to show us something cool, tell us that something is NOTAHACK!1!! in person, or even just say “Hi”. We’ll be wandering around from talk to talk and session to session just like you, only with a backpack full of Hackaday stickers.

If there’s anything you think we should see, post up in the comments. If there’s enough call for it, we’ll have a Hackaday meetup once we can figure out a good time and location. Bring us a cool hack, and we’ll document it on the spot! Our DECT phone number is 2475.

Hackaday Links: December 4, 2016

The Chaos Communication Congress is growing! Actually, it’s not, but there may be an ‘overflow venue’ for everyone who didn’t get a ticket. There’s a Slack set up for people who didn’t get a ticket to 33C3 but would still like to rent a venue, set up some tables, stream some videos, and generally have a good time.

Need to test a lot of batteries? Have one of those magnetic parts tray/dish things sitting around? This is freakin’ brilliant. Put your batteries vertically in a metal dish, clip one lead of a meter to the dish and probe each battery with the other lead of your meter.

Pravda reports the USS Zumwalt and HMS Duncan – the most technologically advanced ships in the US and Royal Navies – have turned into, ‘useless tin cans due to China’, with ‘Microchips made in China putting the vessels out of action’. Again, Pravda reports this, so don’t worry. In other news, someone found a few USB drives in a parking lot in Norfolk, Virginia.

Here’s how discourse goes on the Internet. Someone does something. Everyone says it’s stupid. We wait a few days or weeks. Someone posts something on Medium telling everyone it’s actually okay. Public opinion is muddled until the actual issue being discussed is rendered technologically irrelevant. For the newest MacBook Pro, we’re currently at stage 3 and ‘it’s kind of great for hackers’. Now if only we knew how to make USB-C ports work with microcontrollers…

If you have a Prusa i3, here’s a free gift: a Spitfire. The files to print a remote-control, 973 mm Spitfire Mk XVI are now free for Prusa i3 and i3 Mk 2 owners. Why? Because it’s cool, duh, and [Stephan Dokupil] and [Patrik Svida], the guys behind the Spitfire and other 3D printed RC planes, are also Czech. Now all we need are Czech roundel stickers.