Physical Security For Desktop Computers

There’s a truism in security circles: physical security is security. It doesn’t matter how many bits are in your encryption key, which elliptic curve your algorithm uses, or whether you use a fingerprint, retina scan, or face print as a second factor of authentication. If someone has physical access to a device, all of these protections are just speed bumps on the way to your data. Physical access to a machine means all that data is out in the open, and until now there’s been little you could do to stop it.

This week at Black Hat Europe, Design-Shift introduced ORWL, a computer designed to provide physical security for all the data sitting on it.

The first line of protection for the data stuffed into the ORWL is a wireless key fob. This electronic key fob is simply a means of authentication for the ORWL – without it, the ORWL stays in sleep mode. If the user walks away from the computer, the USB ports shut down and the HDMI output is disabled. This isn’t a revolutionary feature – something like it can be bolted onto any computer, and a rough sketch of the idea is below – but it isn’t the biggest trick the ORWL has up its sleeve.
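The presence lock is conceptually just a polling loop: check for the fob, and lock the machine down when it disappears. Here is a minimal sketch, assuming a hypothetical is_fob_in_range() check standing in for whatever authenticated radio exchange the real fob uses, and a Linux session lock standing in for cutting USB and HDMI:

import os
import subprocess
import time

POLL_INTERVAL = 2  # seconds between presence checks


def is_fob_in_range() -> bool:
    # Placeholder: presence is simulated by a flag file. A real implementation
    # would run a cryptographic challenge-response over BLE or NFC, not just
    # check that some radio happens to be nearby.
    return os.path.exists("/tmp/fob_present")


def lock_workstation() -> None:
    # Illustrative only: locks a Linux desktop session. The ORWL goes further
    # and also disables the USB ports and HDMI output in hardware.
    subprocess.run(["loginctl", "lock-session"], check=False)


def presence_watchdog() -> None:
    while True:
        if not is_fob_in_range():
            lock_workstation()
        time.sleep(POLL_INTERVAL)


if __name__ == "__main__":
    presence_watchdog()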

The big draw of the ORWL is a ‘honeycomb mesh’ that completely covers every square inch of the circuit board. This honeycomb mesh is simply a piece of plastic that screws onto the ORWL PCB and connects dozens of electronic traces embedded in the board to a secure microcontroller. If these traces are broken – either by taking the honeycomb shell off or by breaking it wide open – the digital keys that unlock the computer are erased.
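The firmware side of that scheme is conceptually tiny: watch the mesh traces, and zeroize the keys the moment continuity is lost. A hedged, MicroPython-flavored sketch of that loop – the pin assignments and key-erase routine are invented for illustration and are not ORWL’s actual firmware – looks something like this:

# Tamper-watch sketch in MicroPython-style Python; pins and the erase routine
# are illustrative, not ORWL's real firmware.
from machine import Pin
import time

# Each mesh trace is wired so that an intact trace reads low; a cut trace or a
# lifted shell breaks the circuit and the internal pull-up drags the pin high.
MESH_PINS = [Pin(n, Pin.IN, Pin.PULL_UP) for n in (2, 3, 4, 5)]


def zeroize_keys():
    # Stand-in for erasing the disk-encryption keys held by the secure MCU.
    print("tamper detected: keys erased")


while True:
    if any(pin.value() for pin in MESH_PINS):  # any broken trace reads high
        zeroize_keys()
        break
    time.sleep_ms(10)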

The ORWL specs are what you would expect from a bare-bones desktop computer: an Intel Skylake mobile processor, Intel graphics, a choice of 4 or 8 GB of RAM, and a 64 to 512 GB SSD. WiFi, two USB-C ports, and an HDMI port provide all the connections to the outside world.

While this isn’t a computer for everyone, and it may never see a very large deployment, it is an interesting challenge. Physical security rules over all, and it would be very interesting to see what sort of attack could be performed on the ORWL to extract the data hidden away behind its electronic mesh. Short of breaking the digital key hidden in the key fob, the best attack might be to desolder the SSD’s flash chips and transplant them into a platform more amenable to reading them.

In any event, the ORWL is an interesting device, if only for being one of the few desktop computers to tackle the problem of physical security. As with any computer, physical access to a device means access to all the data on it; we just don’t yet know how to get the data off one of these tiny machines.


Continue reading “Physical Security For Desktop Computers”

Your Unhashable Fingerprints Secure Nothing

Passwords are crap. Nobody picks good ones; when they do, they re-use them across sites; and even if you use a trustworthy password manager, it will get hacked too. But you know what’s worse than a password? A fingerprint. Fingerprints have enough problems that they should never be used anywhere a password would be.

Passwords are supposed to be secret, like the name of your childhood pet. In contrast, you carry your fingers around with you, out in the open, nearly everywhere you go. Passwords also need to be revocable: if your password does get revealed, it’s great to be able to simply pick another one. You don’t want to have to revoke your fingers. Finally, and this is the kicker, you want your password to be hashable, in order to protect the password database itself from theft.
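That last property is worth spelling out. A server never needs to store your password, only a salted hash of it; a password hashes to the exact same digest every time, while a fingerprint scan comes out slightly different on every read, so it can’t be protected this way. A minimal sketch using Python’s standard library (the iteration count is illustrative, not a current best-practice recommendation):

import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    # Store only (salt, digest); the password itself never touches the database.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest


def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)


salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("Tr0ub4dor&3", salt, stored)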

In the rest of the article, I’ll make each of these three cases, and hopefully convince you that using fingerprints in place of a password is even more broken than using a password in the first place. (You listening, Apple and Google? No, I didn’t think you were.)

Continue reading “Your Unhashable Fingerprints Secure Nothing”

A More Correct Horse Battery Staple

Passwords are terrible. The usual requirements of a number, a capital letter, or a punctuation mark force users to create unmemorable passwords, which end up on post-it notes; the techniques that were supposed to make passwords more secure actually make us less secure. And yes, there is an xkcd for it.

[Randall Munroe] did offer us a solution: a Correct Horse Battery Staple. By memorizing a long phrase, more bits of entropy can be encoded in a user’s memory, making the password much harder to crack. ‘Correct Horse Battery Staple’ only provides about 44 bits of entropy, though – four words at roughly 11 bits apiece – and researchers at the University of Southern California have a better solution: prose and poetry. Just imagine what a man from Nantucket will do to a battery staple.

In their paper, the researchers set out to create random, memorable 60-bit passwords as English word sequences. First, they built an xkcd-style password generator with a 2048-word dictionary, producing passwords such as ‘photo bros nan plain’ and ‘embarrass debating gaskell jennie’. This produced the results you would expect from a webcomic. The best alternative turned out to be poetry: passwords like “Sophisticated potentates / misrepresenting Emirates” and “The supervisor notified / the transportation nationwide” encode 60 bits and were at least as memorable as the xkcd method.

Image credit xkcd
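The entropy arithmetic behind both schemes is simple: each word drawn uniformly at random from a 2048-word list contributes log2(2048) = 11 bits, so four words give 44 bits and you need at least six to clear the 60-bit target. An xkcd-style generator is only a few lines; the sketch below assumes a word-list file (one word per line) whose filename is a placeholder:

import math
import secrets

WORDLIST_PATH = "wordlist_2048.txt"  # placeholder: any 2048-word list, one word per line


def load_words(path: str) -> list[str]:
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]


def passphrase(words: list[str], n_words: int = 4) -> str:
    # secrets.choice draws from a CSPRNG, which is what matters for a password
    return " ".join(secrets.choice(words) for _ in range(n_words))


def entropy_bits(dict_size: int, n_words: int) -> float:
    return n_words * math.log2(dict_size)


words = load_words(WORDLIST_PATH)
print(passphrase(words, 4), f"(~{entropy_bits(len(words), 4):.0f} bits)")  # ~44 bits
print(passphrase(words, 6), f"(~{entropy_bits(len(words), 6):.0f} bits)")  # ~66 bits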

How To Control Siri Through Headphone Wires

Last week saw the revelation that you can control Siri and Google Now from a distance, using high-power transmitters and software-defined radios. Is this a risk? No, it’s security theatre, the fine art of performing an impractical technical achievement and disclosing it to the media to pad a CV. Like most security vulnerabilities, it is very, very cool, and enough details have surfaced that the build can be replicated.

The original research paper, published by researchers [Chaouki Kasmi] and [Jose Lopes Esteves], attacks the latest and greatest thing to come to smartphones: voice commands. iPhones, Androids, and Windows Phones ship with Siri, Google Now, and Cortana, and all of these voice services can place phone calls, post to social media, or launch an application. The trick to this hack is getting audio to the microphone without it being heard.

The ubiquitous Apple earbuds have a single wire for the microphone input, and this is the attack vector the researchers used. With a 50-Watt VHF power amplifier (available for under $100, if you know where to look), a software-defined radio with Tx capability ($300), and a highly directional antenna (free clothes hangers with your dry cleaning), a specially crafted radio signal can be coupled onto the headphone wire, picked up through the phone’s audio input, and understood by Siri, Cortana, or Google Now.

There is, of course, a difference between a security vulnerability and a practical, safe-to-exploit security vulnerability. Yes, for under $400 and with the right know-how, anyone could pull off this technological feat against a cell phone. It comes at the risk of discovery, though: because of the way the earbud cable is arranged, the most efficient frequency varies between 80 and 108 MHz, so a successful attack would have to sweep through the band – not exactly precision work. The field strength required is also intense, about 25-30 V/m, which is around the limit for human safety. But in the world of security theatre, someone with a backpack, carrying a long Yagi antenna, pointing it at people while nearby FM radios cut out, is apparently to be expected.
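That 80-108 MHz sweet spot is roughly where a headphone cable becomes an efficient antenna. The back-of-the-envelope numbers, under the simplifying assumption that the cable behaves like a quarter-wave monopole:

# Quarter-wave lengths across the band the researchers swept.
C = 299_792_458  # speed of light in m/s

for f_mhz in (80, 94, 108):
    wavelength = C / (f_mhz * 1e6)
    print(f"{f_mhz} MHz: wavelength {wavelength:.2f} m, quarter-wave {wavelength / 4:.2f} m")

# 80-108 MHz gives quarter-wave lengths of roughly 0.69 to 0.94 m, which is in
# the same ballpark as the run of earbud cable between jack and microphone,
# hence the coincidence with the FM broadcast band.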

Of course, the countermeasures to this attack are simple: don’t use Siri or Google Now. Leaving Siri enabled on the lock screen is a security risk, and most Androids disable Google Now on the lock screen by default. Any decent set of headphones will also have a shielded cable, making it even harder to induce a current in the microphone wire. The researchers are already at the limits of what is acceptable for human safety with the stock Apple earbuds; pushing the power any higher would be seriously, seriously dumb.

How The NSA Can Read Your Emails

Since [Snowden]’s release of thousands of classified documents in 2013, one question has tugged at the minds of security researchers: how, exactly, did the NSA apparently intercept VPN traffic and decrypt SSH and HTTPS, allowing it to read millions of personal, private emails from people around the globe? Every guess is invariably speculation, but a paper presented at the ACM Conference on Computer and Communications Security may shed some light on how the NSA appears to have broken some of the most widespread encryption used on the Internet (PDF).

The relevant cryptography discussed in the paper is Diffie–Hellman key exchange (D-H), which underpins HTTPS, SSH, and VPNs. D-H relies on a very large, publicly shared prime number. By performing an enormous amount of computation up front, an attacker can pre-compute a ‘crack’ for an individual prime, then apply a relatively small amount of computation to break any individual connection that uses that prime. If every application used a different prime, this wouldn’t be a problem. This is the difference between cryptographic theory and practice: 92% of the top 1 million Alexa HTTPS domains use the same two primes for D-H. An attacker who pre-computed a crack for those two primes could consequently read nearly all of the traffic to those servers.
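To see why a shared prime matters, here is a toy, unauthenticated finite-field D-H exchange in Python. The prime is deliberately tiny for readability; real deployments use 1024-bit or larger primes, and the point is that p and g are public and widely reused, which is exactly what makes a one-time precomputation pay off:

import secrets

# Toy parameters; far too small for real use.
p = 2**127 - 1   # a Mersenne prime standing in for a 1024-bit group prime
g = 3

# Each side picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 2      # Alice's secret
b = secrets.randbelow(p - 2) + 2      # Bob's secret
A = pow(g, a, p)                      # sent Alice -> Bob
B = pow(g, b, p)                      # sent Bob -> Alice

# Both sides derive the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)

# An eavesdropper sees p, g, A, and B; recovering the secret means solving a
# discrete logarithm in the group defined by p. Most of that work depends only
# on p itself, so it can be precomputed once and reused against every
# connection that uses the same prime.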

This sort of attack was discussed last spring by the usual security researchers, and the authors of the paper have been hard at work since then. The earlier discussion focused on 512-bit D-H primes and the Logjam exploit; since then, the researchers have looked at the feasibility of cracking longer 768- and 1024-bit D-H primes. They conclude that an adversary with the resources to crack a single, widely used 1024-bit prime could decrypt 66% of IPsec VPNs and 26% of SSH servers.

There is a bright side to this revelation: pre-computing the ‘crack’ on these longer primes is a capability only a nation state can attain, on a scale that has been compared to cracking Enigma during WWII. The hardware alone would cost millions of dollars, and although the computation could be done faster with dedicated ASICs or other specialized hardware, that too would require an enormous outlay of cash. The downside, of course, is that the capability to decrypt the most prevalent encryption protocols may already be in the hands of governments: the NSA, China, and anyone else with hundreds of millions of dollars to throw at a black project.

Get Your Internet Out Of My Things

2014 was the year that the Internet of Things (IoT) reached the “Peak of Inflated Expectations” on the Gartner Hype Cycle. By 2015, it had only moved a tiny bit, towards the “Trough of Disillusionment”. We’re going to try to push it over the edge.

Depending on whom you ask, the IoT seems to mean that whatever the thing is, it’s got a tiny computer inside with an Internet connection and is sending or receiving data autonomously. Put a computer in your toaster and hook it up to the Internet! Your thermostat? Hook it up to the Internet! Yoga mat? Internet! Mattress pad? To the Intertubes!

Snark aside, to get you through the phase of inflated expectations and on down into disillusionment, we’re going to use just one word: “security”. (Are you disillusioned yet? We’re personally bummed out anytime anyone says “security”. It’s a lot like saying “taxes” or “dentist’s appointment”, in that it means that we’re going to have to do something unpleasant but necessary. It’s a reality-laden buzzkill.)

Continue reading “Get Your Internet Out Of My Things”

A White Hat Virus For The Internet Of Things

The Internet of Things is going gangbusters, despite no one knowing exactly what it will be used for. There’s more marketing money being thrown at IoT paraphernalia than at a new soda from Pepsi. It’s a new technology, and with that come a few problems: these devices are incredibly insecure, and you only need to look at a few of the CCTV camera streams available online for proof of that.

The obvious solution to vulnerable Internet of Things things would be to get people to change the login credentials on their devices, but that has proven too difficult for most of the population. A better solution, if a questionable one in its intentions, would be a virus that closes all those open ports on routers, kills Telnet, and reminds users to change their passwords. Symantec has found just such a virus. It’s called Wifatch, and it bends the concept of malware into a force for good.

Wifatch is a bit of code that slips in through the back door of routers and other IoT devices, closes off Telnet to prevent further infection, and leaves a message telling the owner to change the password and update the device firmware. Wifatch isn’t keeping any secrets, either: most of the code is written in unobfuscated Perl, and there are debug messages that make analysis easy. This is code that’s meant to be taken apart, code that even includes a comment directed at NSA and FBI agents:

To any NSA and FBI agents reading this: please consider whether defending
the US Constitution against all enemies, foreign or domestic, requires you
to follow Snowden's example.

Although the designer of Wifatch left all the code out in the open, and is arguably doing good, there is a possible dark side to this white hat virus. Wifatch connects to a peer-to-peer network that is used to distribute threat updates. With backdoors in the code, the author of Wifatch could conceivably turn the entire network of Wifatch-infected devices into a personal botnet.

While Wifatch is easily removed from a router with a simple restart, and re-infection can be prevented by changing the default passwords, this is an interesting case of virtual vigilantism. It may not be the best way to tell people they need to change the password on their router, but it’s hard to argue with results.
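You don’t need a vigilante virus to find out whether your own router is part of the problem. A few lines of Python will tell you whether Telnet is answering on your gateway; the address below is a placeholder for whatever your router actually uses:

import socket

ROUTER_IP = "192.168.1.1"  # placeholder: substitute your own gateway address


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if port_open(ROUTER_IP, 23):
    print("Telnet is listening; disable it, or at the very least change the default password.")
else:
    print("Telnet port is closed; Wifatch would have nothing to fix here.")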
