32C3: Shopshifting — Breaking Credit Card Payment Systems

Credit card payment systems touch all of our lives, and because of this there’s a lot riding on the security of that technology. The best security research looks into a widely deployed system and finds the problems before the bad guys do. The most entertaining security presentations uncover face-palmingly bad practices and have a good laugh along the way. The only way to top that is with live demos. [Karsten Nohl], [Fabian Bräunlein], and [dexter] gave a talk on the security of credit-card payment systems at the 32nd annual Chaos Communication Congress (32C3) that covers all the bases.

While credit card systems themselves have been quite well-scrutinized, the many vendor payment networks that connect the individual terminals haven’t. The end result of this research is that it is possible to steal credit card PINs and remotely refund credits to different cards — even for purchases that have never been made. Of course, the researchers demonstrate stealing money from themselves, but the proof of concept is solid. How they broke two separate payment systems is part hardware hacking, part looking-stuff-up-on-the-Internet, and part just being plain inquisitive.

Continue reading “32C3: Shopshifting — Breaking Credit Card Payment Systems”

32C3: Towards Trustworthy X86 Laptops

Security assumes there is something we can trust: the computer encrypting your data is assumed to be trustworthy, and the computer decrypting it is assumed to be trustworthy. This is the only workable mindset for anyone concerned about security – as long as the endpoints can be trusted, you don’t have to worry about all the routers handling your data on the Internet, eavesdroppers, or really anything else. Security breaks down when you can’t trust the computer doing the encryption. Such is the case today. We can’t trust our computers.

In a talk at this year’s Chaos Communication Congress, [Joanna Rutkowska] covered the last few decades of security on computers – Tor, OpenVPN, SSH, and the like. These are, by definition, meaningless if you cannot trust the operating system. Over the last few years, [Joanna] has been working on a solution to this in the Qubes OS project, but everything is built on silicon, and if you can’t trust the hardware, you can’t trust anything.

And so we come to an oft-forgotten aspect of computer security: the BIOS, UEFI, Intel’s Management Engine, VT-d, Boot Guard, and the mess of overly complex firmware found in a modern x86 system. This is what starts the chain of trust for the entire computer, and if a computer’s firmware is compromised it is safe to assume the entire computer is compromised. Firmware is also devilishly hard to secure: attacks that bypass the write protection on a tiny Flash chip have already been demonstrated. A Trusted Platform Module could measure the firmware and only unlock the machine if that firmware is found to be unmodified, but this too has been shown to be vulnerable to attack. Another approach is the Core Root of Trust for Measurement, which compares firmware against an immutable, ROM-like memory. The specification for the CRTM doesn’t say where this memory must live, though, and until recently it has usually been implemented in a tiny Flash chip soldered to the motherboard. We’re right back where we started, then, with an attacker simply swapping out the CRTM chip along with the chip containing the firmware.
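The measure-then-compare idea is easy to sketch: hash each firmware stage before it runs, and fold the hashes into a register that can only be extended, never rewritten – the way a TPM’s Platform Configuration Registers work. Here is a minimal Python sketch of that chaining, with made-up stage names standing in for real firmware regions:

```python
import hashlib

def measure(firmware_image: bytes) -> bytes:
    """Hash a firmware region exactly as read from the Flash chip."""
    return hashlib.sha256(firmware_image).digest()

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new PCR value chains the old value with the new measurement."""
    return hashlib.sha256(pcr + measurement).digest()

# Start from the all-zero PCR value a TPM uses at reset.
pcr = bytes(32)

# Hypothetical boot stages, measured in the order they would run.
for stage in (b"bootblock...", b"romstage...", b"payload..."):
    pcr = extend(pcr, measure(stage))

# A verifier compares the final PCR against a known-good value recorded earlier;
# any change to any stage produces a completely different result.
print(pcr.hex())
```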

But Intel has an answer to everything, and to this house of cards of firmware security Intel introduced the Management Engine. This is a small microcontroller, built into the chipset of every modern Intel system, that runs all the time and has access to RAM, WiFi, and everything else in the computer. It is security through obscurity, though: the ME can elevate the privileges of components in the computer, but nobody knows how it works, no one has the source code for the operating system it runs, and the ME is an ideal target for a rootkit.

Is there hope for a truly secure laptop? According to [Joanna], there is – it starts with simply not trusting the BIOS and other firmware. Trust instead comes from a ‘trusted stick’: a small memory stick containing a Flash chip that verifies the computer’s firmware independently of the computer itself.

This, along with open source firmware like coreboot, is the beginning of a computer that can be trusted. While the technology for a device like this could exist today, it will be a while until anything like it is found in the wild. There’s still a lot of work to do, but at least one thing is certain: secure hardware doesn’t exist yet, but it can be built. Whether it ever comes to pass is another thing entirely.

You can watch [Joanna]’s talk on the 32C3 streaming site.

Capture The Flag With Lightsabers

There’s a great game of capture-the-flag that takes place every year at HITCON. This isn’t your childhood neighborhood’s capture-the-flag in the woods with real flags, though. In this game the flags live on secured servers, and it’s the other team’s mission to break into those servers in whatever way they can to capture them. This year, the creators of the game devised a new scoreboard to keep track of it all: a lightsaber.

In this particular game, each team has a server it has to defend while simultaneously attempting to gain access to the other teams’ servers. For the 2015 Hacks In Taiwan Conference, this project built a lightsaber stand that turns the lightsabers into scoreboards, using a cheap OpenWRT Linux Wi-Fi/Ethernet development board, the LinkIt Smart 7688, which communicates with the game server. Whenever a point is scored, the lightsaber lights up and a sound effect plays. The lightsabers themselves are sourced from a Taiwanese lightsabersmith and are impressive pieces of technology in their own right. As a bonus, the teams get to take them home afterwards.
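We haven’t dug into the project’s code, but the basic loop is easy to imagine: the LinkIt Smart 7688 polls the game server for its team’s score and fires the saber whenever the score changes. A hypothetical Python sketch, with the server URL, response format, and GPIO path all invented for illustration:

```python
import json
import time
import urllib.request

SCORE_URL = "http://scoreboard.local/api/team/1/score"   # hypothetical endpoint
SABER_GPIO = "/sys/class/gpio/gpio44/value"               # hypothetical pin driving the saber

def read_score():
    with urllib.request.urlopen(SCORE_URL, timeout=5) as resp:
        return json.load(resp)["score"]                   # assumed response format

def flash_saber(duration=2.0):
    """Pulse the GPIO that triggers the saber's light and sound effect."""
    with open(SABER_GPIO, "w") as gpio:
        gpio.write("1")
    time.sleep(duration)
    with open(SABER_GPIO, "w") as gpio:
        gpio.write("0")

last = read_score()
while True:
    current = read_score()
    if current > last:        # a point was scored
        flash_saber()
        last = current
    time.sleep(1)
```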

While we doubt that this is more forced product-integration advertising from Disney, it certainly fits the theme of the game. Capture-the-flag contests like this are a great way to learn about computer security and how to defend your own equipment from real-world attacks. There are other games going on all around the world if you’re looking to get in on the action.

Continue reading “Capture The Flag With Lightsabers”

Biometric Bracelet Electrifies You To Unlock Your Tablet

Researchers [Christian Holz] and [Marius Knaust] have come up with a cool new way to authenticate you to virtually any touchscreen device. This clever idea couples a biometric sensor and a low-data-rate transmitter in a wearable wrist strap that talks to the touchscreen by electrifying you.

Specifically, the strap has electrodes that couple a 50 V, 150 kHz signal through your finger to the touchscreen. The touchscreen picks up both your finger’s location, through normal capacitive-sensing methods, and the background signal transmitted by the “watch”. This background signal is modulated on and off, transmitting your biometric data.
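Modulating a carrier on and off to carry bits is just on-off keying. Here’s a rough Python sketch of the idea; the 150 kHz carrier comes from the description above, while the bit rate and sample rate are arbitrary choices for illustration:

```python
import numpy as np

CARRIER_HZ = 150_000     # carrier frequency from the write-up
SAMPLE_HZ = 2_000_000    # simulation sample rate (arbitrary)
BIT_RATE = 1_000         # bits per second -- a guess, not from the write-up

def ook_modulate(bits: str) -> np.ndarray:
    """Gate a 150 kHz carrier on and off, one symbol per bit."""
    samples_per_bit = SAMPLE_HZ // BIT_RATE
    t = np.arange(len(bits) * samples_per_bit) / SAMPLE_HZ
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    gate = np.repeat([int(b) for b in bits], samples_per_bit)
    return carrier * gate

# The wristband would transmit the wearer's biometric template as a bit string.
signal = ook_modulate("101100111000")
```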

The biometric data itself is the impedance through your wrist from one electrode to another. With multiple electrodes encircling your wrist, they end up with something like a CAT scan of your wrist’s resistance. Apparently this is unique enough to be used as a biometric identifier. (We’re surprised.)
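As a rough mental model, picture the band measuring the impedance between every pair of electrodes and treating the resulting numbers as the wearer’s template, with matching reduced to a distance check against an enrolled vector. A hypothetical sketch, where the electrode count and the match threshold are invented:

```python
import itertools
import math

NUM_ELECTRODES = 8       # invented; the real band's electrode count may differ
MATCH_THRESHOLD = 25.0   # invented tolerance, in arbitrary impedance units

def template(pairwise_impedance):
    """Flatten a mapping of (electrode_a, electrode_b) -> impedance into one feature vector."""
    return [pairwise_impedance[a, b]
            for a, b in itertools.combinations(range(NUM_ELECTRODES), 2)]

def matches(enrolled, candidate):
    """Accept the wearer if the candidate vector is close enough to the enrolled one."""
    distance = math.sqrt(sum((e - c) ** 2 for e, c in zip(enrolled, candidate)))
    return distance < MATCH_THRESHOLD
```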

Continue reading “Biometric Bracelet Electrifies You To Unlock Your Tablet”

Hacker Uncovers Security Holes At CSL Dualcom

CSL Dualcom, a popular maker of security systems in England, is disputing claims from [Cybergibbons] that their CS2300-R model is riddled with holes. The particular device in question is a communications link that sits between an alarm system and the monitoring facility. Its job is to let the two systems talk to each other via the Internet, POTS lines, or cell towers. Needless to say, it has some heavy security features built in to prevent tampering. It appears, however, that the security is not very secure. [Cybergibbons] methodically poked and prodded the bits and bytes of the CS2300-R until it gave up its secrets. It turns out that the encryption it uses is just a few baby steps beyond a basic Caesar Cipher.

A Caesar Cipher just shifts data by a numeric value, and that value is the cipher key. For example, the code IBDLBEBZ is encrypted with a Caesar Cipher; it doesn’t take very much to see that a shift of 1 reveals HACKADAY. This… is not security, and is equivalent to a TSA lock, if that. The CS2300-R takes the Caesar Cipher and modifies it so that the cipher key changes as you move down the data string. [Cybergibbons] was able to figure out how the key changed, which revealed, as he put it, ‘the keys to the kingdom’.
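To get a feel for how weak that is, here’s a short Python sketch: a plain Caesar shift next to a ‘rolling’ variant whose shift grows at each position. The rolling key schedule here is only illustrative; the actual CS2300-R scheme is laid out in the report.

```python
def caesar_decrypt(ciphertext: str, shift: int) -> str:
    """Undo a fixed Caesar shift over the letters A-Z."""
    return "".join(chr((ord(c) - ord("A") - shift) % 26 + ord("A")) for c in ciphertext)

def rolling_decrypt(ciphertext: str, shift: int, step: int) -> str:
    """A position-dependent shift: the key changes as you move down the string.
    The step schedule here is illustrative, not the real CS2300-R key schedule."""
    return "".join(chr((ord(c) - ord("A") - (shift + i * step)) % 26 + ord("A"))
                   for i, c in enumerate(ciphertext))

print(caesar_decrypt("IBDLBEBZ", 1))   # -> HACKADAY
```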

There’s a lot more to the story. Be sure to read his detailed report (pdf) and let us know what you think in the comments below.

We mentioned that CSL Dualcom is disputing the findings. Their response can be read here.

Physical Security For Desktop Computers

There’s a truism in security circles that says physical security is security. It doesn’t matter how many bits are in your encryption key, which elliptic curve you’ve used in your algorithm, or whether you use a fingerprint, retina scan, or face print as a second factor of authentication. If someone has physical access to a device, all of these protections are just speed bumps on the way to your data. Physical access to a machine means all that data is out in the open, and until now there’s been nothing you could do to stop it.

This week at Black Hat Europe, Design-Shift introduced ORWL, a computer designed to provide physical security for all the data sitting inside it.

The first line of protection for the data stuffed into the ORWL is a unique radio key fob. This electronic key fob is simply a means of authentication for the ORWL – without it, the ORWL stays in its sleep mode. If the user walks away from the computer, the USB ports shut down and the HDMI output is disabled. This isn’t a revolutionary feature – something like it can be bolted onto any computer – and it’s not the biggest trick ORWL has up its sleeve.

The big draw of the ORWL is a ‘honeycomb mesh’ that completely covers every square inch of the circuit board. This honeycomb mesh is simply a bit of plastic that screws onto the ORWL PCB and connects dozens of electronic traces embedded in it to a secure microcontroller. If these traces are broken – either by taking the honeycomb shell off or by cracking it wide open – the digital keys that unlock the computer are erased.
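The monitoring logic is simple enough to sketch: the secure microcontroller treats the mesh traces as closed loops and wipes its key material the instant any loop opens. A hypothetical MicroPython-flavored sketch, with the pin numbers and key handling invented since ORWL’s actual firmware isn’t public:

```python
from machine import Pin          # MicroPython GPIO access

MESH_PINS = [Pin(n, Pin.IN, Pin.PULL_UP) for n in range(2, 14)]  # invented pin numbers
disk_key = bytearray(32)         # stand-in for the disk-encryption key the MCU guards

def mesh_intact():
    # Every mesh trace pulls its pin low; a lifted shell or cut trace reads high.
    return all(pin.value() == 0 for pin in MESH_PINS)

def wipe_keys():
    # Overwrite the key material the moment tampering is detected.
    for i in range(len(disk_key)):
        disk_key[i] = 0

while True:
    if not mesh_intact():
        wipe_keys()
        break
```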

The ORWL specs are what you would expect from a bare-bones desktop computer: an Intel Skylake mobile processor, Intel graphics, a choice of 4 or 8 GB of RAM, and a 64 to 512 GB SSD. WiFi, two USB-C ports, and an HDMI port provide all the connections to the outside world.

While this isn’t a computer for everyone, and it may never see a very large deployment, it is an interesting challenge. Physical security rules over all, and it would be very interesting to see what sort of attack could be performed on the ORWL to extract the data hidden away behind its electronic mesh. Short of stealing the digital key hidden in the key fob, the best attack might just be desoldering the SSD’s chips and transplanting them into a platform more amenable to reading them.

In any event, ORWL is an interesting device if only for being one of the few desktop computers to tackle the problem of physical security. As with any computer, physical access to the device means access to all the data on it; with one of these tiny computers, we just don’t yet know how to get that data off.

Video below.

Continue reading “Physical Security For Desktop Computers”

Your Unhashable Fingerprints Secure Nothing

Passwords are crap. Nobody picks good ones; when they do, they re-use them across sites; and even if you use a trustworthy password manager, it’ll get hacked too. But you know what’s worse than a password? A fingerprint. Fingerprints have enough problems that they should never be used anywhere a password would be.

Passwords are supposed to be secret, like the name of your childhood pet. In contrast, you carry your fingers around with you, out in the open, nearly everywhere you go. Passwords also need to be revocable: if your password does get revealed, it’s great to be able to simply pick another one. You don’t want to have to revoke your fingers. Finally, and this is the kicker, you want your password to be hashable, in order to protect the password database itself from theft.
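That last property deserves spelling out. A service never has to store your password, only a salt and a slow hash of it, because you can retype the password bit-for-bit every time; two scans of the same finger never match bit-for-bit, so they can’t be hashed and compared this way. A quick sketch of the password side using Python’s standard library:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only the salt and a slow, salted hash -- never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, stored):
    """An exact retyped password reproduces the exact same hash, bit for bit."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("correct horse battery stale", salt, stored))   # False
```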

In the rest of this article, I’ll make each of these three cases, and hopefully convince you that using fingerprints in place of a password is even more broken than using a password in the first place. (You listening, Apple and Google? No, I didn’t think you were.)

Continue reading “Your Unhashable Fingerprints Secure Nothing”