Physical Security For Desktop Computers

There’s a truism in security circles that says physical security is security. It doesn’t matter how many bits are in your encryption key, which elliptic curve your algorithm uses, or whether you use a fingerprint, retina scan, or face print as a second factor of authentication. If someone has physical access to a device, all these protections are just speed bumps on the way to your data. Physical access to a machine means all that data is out in the open, and until now there’s been nothing you could do to stop it.

This week at Black Hat Europe, Design-Shift introduced ORWL, a computer built to provide physical security for all the data sitting inside it.

The first line of protection for the data stuffed into the ORWL is a unique wireless key fob. This electronic key fob is a means of authentication for the ORWL – without it, the ORWL simply stays in sleep mode. If the user walks away from the computer, the USB ports shut down and the HDMI output is disabled. This isn’t a revolutionary feature – something similar could be bolted onto any computer – and it’s not the biggest trick the ORWL has up its sleeve.

The big draw of the ORWL is a ‘honeycomb mesh’ that completely covers every square inch of the circuit board. This honeycomb mesh is a bit of plastic that screws onto the ORWL PCB and connects dozens of electronic traces embedded in the board to a secure microcontroller. If any of these traces are broken – either by taking the honeycomb shell off or by cracking it open – the digital keys that unlock the computer are erased.
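
The logic is simple enough to sketch. Here’s an illustrative Python simulation of the idea – not ORWL’s actual firmware, which isn’t public, and the trace count here is invented – in which the secure microcontroller polls every trace and a single break zeroizes the keys:

```python
import time

# Illustrative simulation, not ORWL's actual (unpublished) firmware.
# One boolean per embedded trace; real hardware would check electrical
# continuity through the shell via the secure micro's GPIO pins.
NUM_TRACES = 24                              # hypothetical; "dozens" per the article
mesh = {n: True for n in range(NUM_TRACES)}
keys = bytearray(b"drive-encryption-keys")   # stand-in for the real secrets

def zeroize():
    """Wipe the keys in place; the encrypted SSD becomes unreadable."""
    for i in range(len(keys)):
        keys[i] = 0

def monitor_mesh():
    # A single break anywhere zeroizes the keys before an attacker
    # can reach the board underneath the shell.
    while all(mesh.values()):
        time.sleep(0.001)
    zeroize()

mesh[7] = False    # simulate cutting a trace open
monitor_mesh()
print(keys)        # bytearray of zeros - nothing left to recover
```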

The ORWL specs are what you would expect from a bare-bones desktop computer: Intel Skylake mobile processors, Intel graphics, a choice of 4 or 8GB of RAM, and a 64 to 512GB SSD. WiFi, two USB-C ports, and an HDMI port provide all the connections to the outside world.

While this isn’t a computer for everyone, and it may never see a very large deployment, it is an interesting challenge. Physical security rules over all, and it would be very interesting to see what sort of attack can be performed on the ORWL to extract the data hidden away behind the electronic mesh. Short of recovering the digital key stored in the fob, the best attack might just be desoldering the SSD’s flash chips and transplanting them into a platform more amenable to reading them.

In any event, ORWL is an interesting device if only for being one of the few desktop computers to tackle the problem of physical security. As with any computer, if you have physical access to a device, you have access to all the data on the device; we just don’t know yet how to get the data off one of these tiny computers.


Your Unhashable Fingerprints Secure Nothing

Passwords are crap. Nobody picks good ones; when they do, they re-use them across sites; and even if you use a trustworthy password manager, it can get hacked too. But you know what’s worse than a password? A fingerprint. Fingerprints have enough problems that they should never be used anywhere a password would be.

Passwords are supposed to be secret, like the name of your childhood pet. In contrast, you carry your fingers around with you, out in the open, nearly everywhere you go. Passwords also need to be revocable: if your password does get revealed, it’s great to be able to simply pick another one. You don’t want to have to revoke your fingers. Finally, and this is the kicker, you want your password to be hashable, in order to protect the password database itself from theft.
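
To see why that last one matters, here’s a minimal sketch using salted SHA-256 as a stand-in for a proper password hash (real systems should use bcrypt, scrypt, or Argon2). An exact-match secret can be stored as a hash; a fuzzy biometric can’t:

```python
import hashlib, os

def hash_password(secret: bytes, salt: bytes) -> str:
    # Production systems should use bcrypt/scrypt/argon2; a salted
    # sha256 keeps this sketch dependency-free.
    return hashlib.sha256(salt + secret).hexdigest()

salt = os.urandom(16)

# A password is entered identically every time, so only the hash
# needs to be stored and compared:
stored = hash_password(b"correct horse battery staple", salt)
print(stored == hash_password(b"correct horse battery staple", salt))  # True

# A fingerprint never scans exactly the same way twice. Change one
# bit and the hash is unrecognizable, so the matcher has to keep the
# raw template around instead of a hash:
scan_a = b"fingerprint-template-00"
scan_b = b"fingerprint-template-01"   # same finger, slightly different read
print(hash_password(scan_a, salt) == hash_password(scan_b, salt))       # False
```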

In the rest of the article, I’ll make each of these three cases, and hopefully convince you that using fingerprints in place of a password is even more broken than using a password in the first place. (You listening, Apple and Google? No, I didn’t think you were.)


Stegosploit: Owned By A JPG

We’re primarily hardware hackers, but every once in a while we see a software hack that really tickles our fancy. One such hack is Stegosploit, by [Saumil Shah]. Stegosploit isn’t really an exploit so much as a means of delivering exploits to browsers by hiding them in pictures. Why? Because nobody expects a picture to contain executable code.

[Saumil] starts off by packing the real exploit code into an image. He demonstrates that you can do this directly, by encoding the characters of the code in the color values of the pixels. But that would look strange, so instead the code is delivered steganographically, by spreading the bits of its characters across the least-significant bits of either a JPG or PNG image.
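
For the curious, here’s what naive LSB embedding looks like – a minimal Python sketch with Pillow, not [Saumil]’s actual tooling. It assumes a PNG cover image, since JPEG recompression would mangle raw LSBs (which is why the JPG variant takes more care):

```python
from PIL import Image

def embed(cover_png: str, stego_png: str, payload: bytes) -> None:
    """Hide payload in the red-channel LSBs of a lossless image."""
    img = Image.open(cover_png).convert("RGB")
    pixels = list(img.getdata())
    # Prefix a 4-byte length so the decoder knows where to stop.
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    assert len(bits) <= len(pixels), "cover image too small"
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | bits[i]   # overwrite the red LSB with one payload bit
        out.append((r, g, b))
    stego = Image.new("RGB", img.size)
    stego.putdata(out)
    stego.save(stego_png)

embed("cover.png", "stego.png", b"alert('owned');")  # hypothetical file names
```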

OK, so the exploit code is hidden in the picture. Reading it out is actually simple: the HTML canvas element has a built-in getImageData() method that reads the (numeric) values of the pixels. A little bit of JavaScript later, and you’ve reconstructed your code from the image. This is sneaky because there’s exploit code that’s now runnable in your browser, but your anti-virus software won’t see it because it was never written out to disk — it was in the image and reconstructed on the fly by innocuous-looking “normal” JavaScript.
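
The browser-side decode is a loop over getImageData() pixels in JavaScript; here’s the equivalent extraction in Python, mirroring the embed() sketch above:

```python
from PIL import Image

def extract(stego_png: str) -> bytes:
    """Recover the payload hidden by the embed() sketch above."""
    img = Image.open(stego_png).convert("RGB")
    bits = [r & 1 for (r, _g, _b) in img.getdata()]

    def read_bytes(bit_offset: int, count: int) -> bytes:
        out = bytearray()
        for i in range(count):
            byte = 0
            for j in range(8):
                byte = (byte << 1) | bits[bit_offset + i * 8 + j]
            out.append(byte)
        return bytes(out)

    length = int.from_bytes(read_bytes(0, 4), "big")  # 4-byte length prefix
    return read_bytes(32, length)                     # skip the 32 length bits

print(extract("stego.png"))  # b"alert('owned');"
```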

And here’s the coup de grâce. By packing HTML and JavaScript into the header data of the image file, you can end up with a valid image (JPG or PNG) file that will nonetheless be interpreted as HTML by a browser. The simplest way to do this is to send your file myPic.JPG from the webserver with a Content-Type: text/html HTTP header. Even though it’s a totally valid image file with an image file extension, the browser will treat it as HTML, render the page, and run the script it finds within.
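
Serving the mislabeled file takes exactly one header. Here’s a minimal sketch using Python’s standard-library HTTP server – the filename comes from the article, everything else is illustrative:

```python
import http.server

class PolyglotHandler(http.server.SimpleHTTPRequestHandler):
    def guess_type(self, path):
        # Lie about the content type for the polyglot file: the bytes
        # on the wire are a valid JPG, but the browser is told it is
        # HTML and will happily parse and execute what it finds.
        if path.endswith("myPic.JPG"):
            return "text/html"
        return super().guess_type(path)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), PolyglotHandler).serve_forever()
```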

The end result of this is a single image that the browser thinks is HTML with JavaScript inside it, which displays the image in question and at the same time unpacks the exploit code that’s hidden in the shadows of the image and runs that as well. You’re owned by a single image file! And everything looks normal.

We like this because it combines two sweet tricks in one hack: steganography to deliver the exploit code, and “polyglot” files that can be read two ways, depending on which application is doing the reading. A quick tag-search of Hackaday will dig up a lot on steganography, but polyglot files are a relatively new hack.

[Ange Albertini] is the undisputed master of packing one file type inside another, so if you want to get into the nitty-gritty of [Ange]’s style of “polyglot” file types, watch his talk on “Funky File Formats” (YouTube). You’ll never look at a ZIP file the same way again.

Sweet hack, right? Who says the hardware guys get to have all the fun?

iPhone Jailbreak Hackers Await $1M Bounty

According to Motherboard, some unspecified (software) hacker just won a $1 million bounty for an iPhone exploit. But this is no ordinary there’s-a-glitch-in-your-JavaScript bug bounty.

On September 21, “Premium” 0day startup Zerodium put out a call for a chain of exploits, starting with a browser, that enables the phone to be remotely jailbroken and arbitrary applications to be installed with root / administrator permissions. In short, a complete remote takeover of the phone. And they offered $1 million. A little over a month later, it looks like they’ve got their first claim, though the hack has yet to be verified and the payout actually made.

But we have little doubt that the hack, if it’s actually been done, is worth the money. The NSA alone has a $25 million annual budget for buying 0days and usually spends that money on much smaller bits and bobs. This hack, if it works, is huge. And the NSA isn’t the only agency that’s interested in spying on folks with iPhones.

Indeed, by bringing something like this out into the open, Zerodium is creating a bidding war among (presumably) adversarial parties. We’re not sure about the ethics of all this (OK, it’s downright shady), but it’s not currently illegal, and by pitting various spy agencies against each other, they’re almost sure to get their $1 million back with some cream on top.

We’ve seen a lot of bug bounty programs out there. Tossing “firmname bug bounty” into a search engine of your choice will probably turn up a hit for most firm names. A notable exception in Silicon Valley? Apple. They let you do their debugging work for free. How long this will last is anyone’s guess, but if this Zerodium deal ends up being for real, it looks like they’re severely underpaying.

And if you’re working on your own iPhone remote exploits, don’t be discouraged. Zerodium still claims to have money for two more $1 million payouts. (And with that your humble author shrugs his shoulders and turns the soldering iron back on.)

EFF Granted DMCA Exemption: Hacking Your Own Car Is Legal For Now

The Digital Millennium Copyright Act (DMCA) is a horrible piece of legislation that we’ve been living with for seventeen years now. In addition to establishing a de-facto copyright for the design of boat hulls (don’t get us started!), the DMCA includes Section 1201, which criminalizes defeating encryption in cases where doing so could be used to break copyright law.

Originally intended to stop the rampant copying of music in the Napster era, it’s been abused to prevent users from re-filling their inkjet cartridges and to cover up rootkits. In short, its scope has vastly exceeded its original aims. And we take it personally, because we like to take stuff apart and see how it works.

The only bright light in this otherwise dark, dark tunnel is the ability to petition for exemptions to Section 1201 for certain devices and purposes. Just a few days ago, the EFF won a slew of DMCA exemptions, including the contentious exemption for bypassing automobiles’ encryption to check out what’s going on in a car’s firmware. The obvious relevance of letting researchers inspect car firmware, in light of the VW scandal, may have helped overcome strong pushback from the car manufacturers and the EPA.

The other exemptions that caught our eye were the renewed protections for hacking old video games to keep them playable, jailbreaking phones so that you can run an operating system of your choosing on them, and even copying content from a DVD for remixes and excerpts.

This is all good stuff, but it’s a little bit sad that the EFF has to beg every three years to enable us all to do something that wasn’t illegal until the DMCA was written. But don’t take my word for it: have a listen to Cory Doctorow’s much more eloquent rant.

(Banner image courtesy [Kristoffer Smith], who we covered on car hacking way back when.)

A More Correct Horse Battery Staple

Passwords are terrible. The usual requirements of a number, capital letter, or punctuation mark force users to create unmemorable passwords, leading to post-it notes. The techniques that were supposed to make passwords more secure actually make us less secure, and yes, there is an xkcd for it.

[Randall Munroe] did offer us a solution: a Correct Horse Battery Staple. By memorizing a long phrase, a greater number of bits can be easily encoded in a user’s memory, making a password much harder to crack. ‘Correct Horse Battery Staple’ only provides a 44-bit password, though, and researchers at the University of Southern California have a better solution: prose and poetry. Just imagine what a man from Nantucket will do to a battery staple.

In their paper, the researchers set out to create random, memorable 60-bit passwords as English word sequences. First, they built an xkcd-style password generator with a 2048-word dictionary, producing passwords such as ‘photo bros nan plain’ and ‘embarrass debating gaskell jennie’. This produced the results you would expect from a webcomic. The best alternative came from generating poetry: passwords like “Sophisticated potentates / misrepresenting Emirates” and “The supervisor notified / the transportation nationwide” pack 60 bits while being at least as memorable as the xkcd method.
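
The xkcd-style baseline is only a few lines of code. Here’s a sketch of that generator and the entropy math behind the 44- and 60-bit figures (the researchers’ poetry generator additionally enforces meter and rhyme, which this doesn’t attempt; the wordlist file is hypothetical):

```python
import math
import secrets

def passphrase(wordlist: list[str], n_words: int) -> str:
    # secrets.choice draws uniformly, so each word contributes
    # log2(len(wordlist)) bits of entropy.
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

# Hypothetical 2048-word dictionary file, one word per line.
words = open("wordlist.txt").read().split()
assert len(words) == 2048

bits_per_word = math.log2(len(words))                         # 11.0
print(passphrase(words, 4), "->", 4 * bits_per_word, "bits")  # the xkcd 44
print(passphrase(words, 6), "->", 6 * bits_per_word, "bits")  # clears 60
```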

Image credit: xkcd

How To Control Siri Through Headphone Wires

Last week saw the revelation that you can control Siri and Google Now from a distance, using high-power transmitters and software defined radios. Is this a risk? No, it’s security theatre, the fine art of performing an impractical technical achievement while disclosing it to the media to pad a CV. Like most security vulnerabilities, it is very, very cool, and enough details have surfaced that the build can be replicated.

The original research paper, published by researchers [Chaouki Kasmi] and [Jose Lopes Esteves], attacks the latest and greatest thing to come to smartphones: voice commands. iPhones, Androids, and Windows Phones ship with Siri, Google Now, and Cortana, and all of these voice services can place phone calls, post to social media, or launch an application. The trick to this hack is sending audio to the microphone without being heard.

The ubiquitous Apple earbuds have a single wire for the microphone input, and this is the attack vector used by the researchers. With a 50 Watt VHF power amplifier (available for under $100, if you know where to look), a software defined radio with Tx capability ($300), and a highly directional antenna (free clothes hangers with your dry cleaning), a specially crafted radio signal can be coupled into the headphone wire, picked up by the phone’s audio input, and understood by Siri, Cortana, or Google Now.

There is, of course, a difference between a security vulnerability and a practical and safe security vulnerability. Yes, for under $400 and the right know-how, anyone could perform this technological feat on any cell phone. But the feat comes at the cost of discovery: because of the way the earbud cable is arranged, the most efficient frequency varies between 80 and 108 MHz, so a successful attack would have to sweep through the band – not exactly precision work. The field strength required is also intense – about 25-30 V/m, around the limit for human safety. But in the world of security theatre, someone carrying around a backpack and a long Yagi antenna, pointing it at people while nearby FM radios cut out, is apparently nothing out of the ordinary.
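
That 80-108 MHz figure passes a quick sanity check, assuming the usual quarter-wave coupling argument (a plausible back-of-the-envelope reading, not necessarily the paper’s exact model): the earbud cable is the receiving antenna, and pickup peaks when the cable is about a quarter wavelength long.

```python
# Quarter-wave lengths across the band the researchers swept.
C = 3.0e8  # speed of light, m/s

for f_mhz in (80, 94, 108):
    quarter_wave = C / (f_mhz * 1e6) / 4
    print(f"{f_mhz} MHz -> quarter wave = {quarter_wave:.2f} m")

# 80 MHz -> ~0.94 m, 108 MHz -> ~0.69 m: right around typical headphone
# cord lengths, which is why the best frequency varies cable to cable.
```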

The countermeasures to this attack are simple: don’t use Siri or Google Now. Leaving Siri enabled on the lock screen is a security risk, and most Androids disable Google Now on the lock screen by default. Of course, any decent set of headphones has shielding in the cable, making it even harder to induce a current in the microphone wire. The researchers are already at the limits of what is acceptable for human safety with the stock Apple earbuds; anything more would be seriously, seriously dumb.