Here’s How The Precursor Protects Your Privacy

At some point, you will find yourself asking – is my device actually running the code I expect it to? [bunnie] aka [Andrew Huang] is passionate about making devices you can fundamentally, deeply trust, and his latest passion project is the Precursor communicator.

At the heart of the Precursor is an FPGA, and the device’s CPU is implemented as a soft core in that FPGA’s gates. This and a myriad of other design decisions make the Precursor fundamentally hard to backdoor, and you don’t have to take [bunnie]’s word for it — he’s made an entire video going through the architecture, boot protections, and guarantees of the Precursor, teaching us what goes into a secure device that’s also practical to use.

[Screenshot from the video: a diagram of how Precursor’s software and hardware components relate to each other]

If you can’t understand how your device works, your trust in it might be misplaced. In the hour-long video, [bunnie] explains the entire stack, from the lower levels of hardware up to the root keys used to sign and verify the integrity of your OS, demonstrating along the way how you can verify that things haven’t gone wrong.
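
For a flavor of what that last step boils down to: checking an OS image against a root key is, at its core, a detached-signature check. Below is a minimal sketch in Rust, assuming an Ed25519 scheme and the ed25519-dalek crate; the function and parameter names are ours for illustration, not Precursor’s actual code.

```rust
// Illustrative only: verify a detached Ed25519 signature over an OS image.
// Assumes the ed25519-dalek (v2) crate; all names here are hypothetical.
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

/// Returns true only if `image` was signed by the holder of the root key.
fn os_image_is_authentic(
    root_pubkey: &[u8; 32], // user-provisioned root public key
    image: &[u8],           // the OS image as read from flash
    sig_bytes: &[u8; 64],   // detached signature stored alongside the image
) -> bool {
    let Ok(key) = VerifyingKey::from_bytes(root_pubkey) else {
        return false; // not a valid public key
    };
    key.verify(image, &Signature::from_bytes(sig_bytes)).is_ok()
}
```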

He makes sure to point out aspects you’d want to be cautious of, from physical security limitations to toolchain nuances. If you’re not up for a video, you can always check out the Precursor wiki, which has a treasure trove of information on the device’s security model.

As you might’ve already learned, it’s not enough for hardware to be open-source in order to be trustworthy. While open-source silicon designs are undoubtedly the future, their security guarantees only go so far.

Whether it’s esoteric hard drive firmware backdoors, weekend projects turning your WiFi card into a keylogger, or rootkits found on store-bought Lenovo laptops, there’s never a shortage of parties interested in collecting as much data as possible. Hell, even our latest This Week In Security installment has two fun malware examples.

33C3: If You Can’t Trust Your Computer, Who Can You Trust?

It’s a sign of the times: the first day of the 33rd Chaos Communication Congress (33C3) included two talks about making sure your own computer isn’t being turned against you. The two talks are, respectively, practical and idealistic: one is realizable today, the other is still at the idea stage.

In the first talk, [Trammell Hudson] presented his Heads open-source firmware bootloader and minimal Linux for laptops and servers. The name is a gag: the Tails Linux distribution lets you operate without leaving any trace, while Heads lets you run a system that you can be reasonably sure is secure.

It uses coreboot, kexec, and QubesOS, cutting off BIOS-based hacking tools at the root. If you’re worried about sketchy BIOS rootkits, this is a solution. (And if you think that this is paranoia, you haven’t been following the news in the last few years, and probably need to watch this talk.) [Trammell]’s Heads distribution is a collection of the best tools currently available, and it’s something you can do now, although it’s not going to be easy.

Carrying out the ideas fleshed out in the second talk is even harder — in fact, impossible at the moment. But that’s not to say that it’s not a neat idea. [Jaseg] starts out with the premise that the CPU itself is not to be trusted. Again, this is sadly not so far-fetched these days. Non-open blobs of firmware abound, and if you’re really concerned with the privacy of your communications, you don’t want the CPU (or Intel’s Management Engine) to get its hands on your plaintext.

[Jaseg]’s solution is to interpose a device, probably made with a reasonably powerful FPGA and running open-source, inspectable code, between the CPU and the screen and keyboard. For critical text, like e-mail for example, the CPU will deal only in ciphertext. The FPGA, via graphics cues, will know which region of the screen is to be decrypted, and will send the plaintext out to the screen directly. Unless someone’s physically between the FPGA and your screen or keyboard, this should be unsniffable.
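
Since the design exists only on paper, here’s one way to picture the interposer’s core job, sketched in Rust. Everything below is an assumption for illustration: the choice of AES-256-GCM, the aes-gcm crate, and the struct describing a marked screen region.

```rust
// Illustrative sketch only: [Jaseg]'s interposer is an idea, not working code.
// Assumes AES-256-GCM via the aes-gcm crate; struct and names are invented.
use aes_gcm::{
    aead::{Aead, KeyInit},
    Aes256Gcm, Key, Nonce,
};

/// A screen region the CPU has marked (via graphics cues) as ciphertext.
struct SecureRegion {
    x: u32, y: u32, w: u32, h: u32, // where the plaintext should be drawn
    nonce: [u8; 12],                // per-message nonce sent by the remote peer
    ciphertext: Vec<u8>,            // all the CPU ever handles
}

/// Runs on the interposer. The key lives here; the CPU never sees it.
fn render_region(key_bytes: &[u8; 32], region: &SecureRegion) -> Option<Vec<u8>> {
    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(key_bytes));
    let nonce = Nonce::from_slice(&region.nonce);
    // GCM authenticates as well as decrypts, so a tampered region fails here
    // rather than putting attacker-controlled plaintext on the panel.
    cipher.decrypt(nonce, region.ciphertext.as_ref()).ok()
    // ...the plaintext would then be rasterized straight to the display,
    // bypassing the CPU's framebuffer entirely.
}
```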

As with all early-stage ideas, the devil will be in the details here. It’s not yet worked out how the device will know when keystrokes need to be encrypted before they’re passed on to the CPU, for instance. But the idea is very interesting, and it places the trust boundary about as close to the user as possible: at input and output.

32C3: Towards Trustworthy x86 Laptops

Security assumes there is something we can trust: the computer doing the encrypting is assumed to be trustworthy, and so is the computer doing the decrypting. This is the only logical mindset for anyone concerned about security – with trusted endpoints, you don’t have to worry about all the routers handling your data on the Internet, eavesdroppers, or really anything else. Security breaks down when you can’t trust the computer doing the encryption. Such is the case today: we can’t trust our computers.

In a talk at this year’s Chaos Communication Congress, [Joanna Rutkowska] covered the last few decades of computer security – Tor, OpenVPN, SSH, and the like. All of these are meaningless if you cannot trust the operating system. Over the last few years, [Joanna] has been working on a solution to this with the Qubes OS project, but everything is built on silicon, and if you can’t trust the hardware, you can’t trust anything.

And so we come to an oft-forgotten aspect of computer security: the BIOS, UEFI, Intel’s Management Engine, VT-d, Boot Guard, and the mess of overly complex firmware found in a modern x86 system. This is what starts the chain of trust for the entire computer, and if a computer’s firmware is compromised, it is safe to assume the entire computer is compromised.

Firmware is also devilishly hard to secure: attacks that defeat the write protection on the tiny Flash chip holding it have been demonstrated. A Trusted Platform Module could measure the firmware’s contents and unlock the machine only if they check out, but this too has been shown to be vulnerable to attack. Another approach is the Core Root of Trust for Measurement, which compares firmware against an immutable, ROM-like memory. The specification for the CRTM doesn’t say where this memory lives, though, and until recently it has been implemented in a tiny Flash chip soldered to the motherboard. That puts us right back where we started, with an attacker simply swapping out the CRTM chip along with the chip containing the firmware.
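
The “measurement” at the heart of these schemes is conceptually simple hash-chaining. Here’s a toy model in Rust using the sha2 crate; a real TPM holds the register in hardware, where software can extend it but never set it directly.

```rust
// Toy model of a TPM PCR "extend": new_pcr = SHA-256(old_pcr || measurement).
// Uses the sha2 crate; this is a sketch of the concept, not TPM-exact.
use sha2::{Digest, Sha256};

fn pcr_extend(pcr: [u8; 32], stage: &[u8]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(pcr);                   // previous register value
    hasher.update(Sha256::digest(stage)); // hash of the code being measured
    hasher.finalize().into()
}

// Boot proceeds stage by stage; changing any stage changes the final value,
// which is what gets compared before the machine is declared trustworthy.
fn measure_boot(stages: &[&[u8]]) -> [u8; 32] {
    stages.iter().fold([0u8; 32], |pcr, stage| pcr_extend(pcr, stage))
}
```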

But Intel has an answer to everything, and to this house of cards of firmware security, Intel introduced the Management Engine. This is a small microcontroller running alongside every Intel CPU, all the time, with access to RAM, WiFi, and everything else in a computer. It is security through obscurity, though: the ME can elevate the privileges of components across the computer, yet nobody outside Intel knows how it works, and no one has the source code for the operating system it runs. That makes the ME an ideal target for a rootkit.

Is there hope for a truly secure laptop? According to [Joanna], there is, and it starts with simply not trusting the BIOS and other firmware. Trust instead comes from a ‘trusted stick’ – a small memory stick containing a Flash chip that verifies the firmware of a computer independently of the hardware in that computer.
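
The check the stick performs is, at its heart, nothing exotic: hash the firmware image it reads out and compare the result against a known-good digest it carries. A sketch, with invented names, assuming SHA-256 via the sha2 crate:

```rust
// Sketch of the trusted stick's core job: hash a firmware image dumped from
// the target's SPI flash and compare it to a known-good digest kept on the
// stick itself. Names are illustrative assumptions.
use sha2::{Digest, Sha256};

fn firmware_matches(dumped_image: &[u8], known_good: &[u8; 32]) -> bool {
    let digest = Sha256::digest(dumped_image);
    // Compare in constant time so the check itself leaks nothing.
    digest
        .iter()
        .zip(known_good.iter())
        .fold(0u8, |acc, (a, b)| acc | (a ^ b))
        == 0
}
```

The crucial property is that the stick reads the flash chip itself, rather than asking potentially compromised firmware to report on its own integrity.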

The trusted stick, together with open-source firmware like coreboot, is the beginning of a computer that can be trusted. While the technology for a device like this could exist today, it will be a while before anything like it is found in the wild. There’s still a lot of work to do, but at least one thing is certain: secure hardware doesn’t exist yet, but it can be built. Whether it comes to pass is another thing entirely.

You can watch [Joanna]’s talk on the 32C3 streaming site.