Hacker Uncovers Security Holes At CSL Dualcom

CSL Dualcom, a popular maker of security systems in England, is disputing claims from [Cybergibbons] that their CS2300-R model is riddled with holes. The device in question is a communications link that sits between an alarm system and its monitoring facility. Its job is to let the two systems talk to each other over the internet, POTS lines, or cell towers. Needless to say, it has some heavy security features built in to prevent tampering. It appears, however, that the security is not very secure. [Cybergibbons] methodically poked and prodded the bits and bytes of the CS2300-R until it gave up its secrets. It turns out that the encryption it uses is just a few baby steps beyond a basic Caesar Cipher.

A Caesar Cipher just shifts data by a numeric value, and that value is the cipher key. For example, the string IBDLBEBZ is encrypted with a Caesar Cipher. It doesn’t take much to see that a shift of “1” reveals HACKADAY. This… is not security; it’s the equivalent of a TSA lock, if that. The CS2300-R takes the Caesar Cipher and modifies it so that the cipher key changes as you move down the data string. [Cybergibbons] was able to figure out how the key changed, which revealed what he called ‘the keys to the kingdom’.
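To make the weakness concrete, here’s a quick Python sketch. The fixed-shift decode is exactly the HACKADAY example above; the second function is only a guess at what a position-dependent key might look like, since the real Dualcom scheme is documented in [Cybergibbons]’s report, not here.

```python
def caesar_decrypt(text: str, key: int) -> str:
    """Undo a fixed Caesar shift over A-Z."""
    return "".join(chr((ord(c) - ord("A") - key) % 26 + ord("A")) for c in text)

print(caesar_decrypt("IBDLBEBZ", 1))  # -> HACKADAY

def rolling_caesar_decrypt(text: str, start_key: int, step: int = 1) -> str:
    """Hypothetical 'rolling' variant: the shift changes at each position.
    Illustrates the general idea only, not Dualcom's actual key schedule."""
    return "".join(
        chr((ord(c) - ord("A") - (start_key + i * step)) % 26 + ord("A"))
        for i, c in enumerate(text)
    )
```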

There’s a lot more to the story. Be sure to read his detailed report (pdf) and let us know what you think in the comments below.

We mentioned that CSL Dualcom is disputing the findings. Their response can be read here.

Defeating Chip And PIN With Bits Of Wire

One of the many ways Americans are ridiculed by the rest of the world is that they don’t have chip and PIN on their credit cards yet; US credit card companies have been slow to bring this technology to the millions of POS terminals across the country. Making the switch isn’t easy, because until the transition is complete, the machines have to accept both magnetic stripes and chip and PIN.

This device can disable chip and PIN, wirelessly, by forcing a downgrade to magstripe. [Samy Kamkar] created the MagSpoof to explore the binary patterns on the magnetic stripe of his AmEx card, and in the process also created a device that works with driver’s licenses, hotel room keys, and parking meters.

The electronics for the MagSpoof are incredibly simple. A small microcontroller is necessary, of course; [Samy] used an ATtiny85 for the ‘larger’ version (still less than an inch square), while a smaller, credit-card-sized version uses an ATtiny10. The rest of the schematic is just an H-bridge and a coil of magnet wire, easy enough for anyone with a soldering iron to put together on some perfboard.
By pulsing the H-bridge and energizing the coil of wire, the MagSpoof emulates the swipe of a credit card; it’s all just magnetic fields reversing direction in a very particular pattern. Since the magnetic pattern on any credit card can be easily read, and [Samy] demonstrates that this is possible with some rust and the naked eye anyway, cloning a card is a simple matter of building some electronics.
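For the curious, here’s a rough sketch of the flux-reversal pattern (F2F, or Aiken biphase) that a magstripe reader expects, which is what the H-bridge and coil reproduce. This illustrates the encoding only; it is not [Samy]’s AVR firmware, and timings, track framing, and pin control are all left out.

```python
def f2f_polarities(bits: str) -> list[int]:
    """Coil polarity (+1/-1) for each half-bit period of an F2F stream.
    Every bit starts with a flux reversal; a '1' adds a second reversal
    mid-bit, while a '0' does not."""
    polarity, out = 1, []
    for b in bits:
        polarity = -polarity          # reversal at every bit boundary
        out.append(polarity)
        if b == "1":
            polarity = -polarity      # extra mid-bit reversal encodes a 1
        out.append(polarity)
    return out

# Driving an H-bridge with this sequence at the right rate looks, to a
# reader head, like a card being swiped.
print(f2f_polarities("1011"))
```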

[Samy] didn’t stop there, though. By turning off the bits that state that the card has a chip onboard, his device can bypass the chip and PIN protection. If you’re very careful with a magnetized needle, you could disable the chip and PIN protection on any credit card. [Samy]’s device doesn’t need that degree of dexterity – he can just flip a bit in the firmware for the MagSpoof. It’s all brilliant work, and although the code for the chip and PIN defeat isn’t included in the repo, the documents that show how that can be done exist.
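As a rough idea of what turning off the ‘chip’ bits means: on ISO 7813 track data, a three-digit service code advertises the card’s capabilities, and a leading 2 or 6 conventionally signals that a chip is present. The sketch below rewrites that digit in a made-up track-2 string; the exact bit [Samy] flips in his firmware isn’t published, so treat this purely as an illustration.

```python
def downgrade_service_code(track2: str) -> str:
    """Rewrite the first service-code digit of a track-2 string
    (layout: PAN '=' YYMM expiry + 3-digit service code + discretionary data)."""
    pan, rest = track2.split("=", 1)
    expiry, service, tail = rest[:4], rest[4:7], rest[7:]
    if service and service[0] in "26":   # 2xx / 6xx: IC card present
        service = "1" + service[1:]      # 1xx: no chip advertised
    return f"{pan}={expiry}{service}{tail}"

# Made-up card number and track data, for illustration only.
print(downgrade_service_code("4444333322221111=2605201123456789"))
```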

[Samy]’s implementation is very neat, but it stands on the shoulders of giants. In particular, we’ve covered similar devices before (here and here, for instance), and everything you’ll need for this hack, except for the chip-and-PIN downgrade attack, is covered in [Count Zero]’s classic 1992 “A Day in the Life of a Flux Reversal”.

Thanks [toru] for sending this one in. [Samy]’s video is available below.

Continue reading “Defeating Chip And PIN With Bits Of Wire”

Turning A Teensy Into A Better U2F Key

A few days ago, we saw a project that used a Teensy to build a Universal 2nd Factor (U2F) key. While that project was just an experiment in how to implement U2F on any ol’ microcontroller, and the creator admitted it wasn’t very secure, the comments on that post said otherwise: “making your own thing is the ONLY way to be secure,” they read.

In a stunning turn of events, writing comments on a blog post doesn’t mean you know what you’re talking about. It turns out, to perform a security analysis of a system, you need to look at the code. Shocking, yes, but [makomk] took a good, hard look at the code and found it was horribly broken.

The critical error in the Teensy U2F key is in how the key handle is protected. During authentication, the service hands the stored U2F key handle back to the device, which has to unwrap it to recover the private signing key. Because the key handle in the Teensy implementation is only ‘encrypted’ with XOR, it takes only 256 signing requests to recover the private key.
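Here’s a minimal sketch of why XOR-wrapping a key is so fragile. Every name below is illustrative rather than the actual Teensy code, but the property it demonstrates, namely that flipping a bit of the handle flips the same bit of the recovered key, is what lets an attacker who can submit arbitrary key handles pry the secret out piece by piece.

```python
import os

PAD = os.urandom(32)   # stands in for the device's fixed XOR pad

def wrap(private_key: bytes) -> bytes:
    """'Encrypt' a private key into a key handle by XOR with the fixed pad."""
    return bytes(k ^ p for k, p in zip(private_key, PAD))

def unwrap(handle: bytes) -> bytes:
    """What the token does with every key handle it is asked to sign with."""
    return bytes(h ^ p for h, p in zip(handle, PAD))

key = os.urandom(32)
handle = wrap(key)

# XOR is malleable: flipping a bit of the handle flips that same bit of the
# key the device ends up signing with, so an attacker can steer the signing
# key and learn the pad from the resulting signatures.
tampered = bytes([handle[0] ^ 0x01]) + handle[1:]
assert unwrap(tampered)[0] == key[0] ^ 0x01
```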

The original experimentation with using the Teensy as a U2F key was an educational endeavor, and it was never meant to be used by anyone. The attack on this small lesson in security is interesting, though, and [makomk] wrote a proof of concept that demonstrates his attack. This could be used to perform attacks from a remote server, but hopefully that won’t happen, because the original code should never be used in the wild.

Physical Security For Desktop Computers

There’s a truism in security circles that says physical security is security. It doesn’t matter how many bits are in your encryption key, which elliptic curve you’ve used in your algorithm, or whether you use a fingerprint, retina scan, or face print as a second factor of authentication. If someone has physical access to a device, all these protections are just speed bumps in the way of getting your data. Physical access to a machine means all that data is out in the open, and until now there was nothing you could do to stop it.

This week at Black Hat Europe, Design-Shift introduced ORWL, a computer designed to provide physical security for all the data sitting inside it.

The first line of protection for the data stuffed into the ORWL is a unique wireless key fob. This electronic key fob is simply a means of authentication for the ORWL; without it, the ORWL stays in its sleep mode. If the user walks away from the computer, the USB ports are shut down and the HDMI output is disabled. This isn’t a revolutionary feature on its own (something like it could be bolted onto any computer), and it’s not the biggest trick ORWL has up its sleeve.

The big draw of the ORWL is a ‘honeycomb mesh’ that completely covers every square inch of the circuit board. This honeycomb mesh is simply a piece of plastic that screws onto the ORWL PCB and connects dozens of electronic traces embedded in it to a secure microcontroller. If these traces are broken, either by taking the honeycomb shell off or by cracking it open, the digital keys that unlock the computer are erased.
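The general pattern here, polling the mesh and zeroizing secrets the instant a trace opens, is simple enough to sketch. The toy code below is not ORWL’s firmware and simulates the mesh readings in software; it only shows the detect-and-erase idea.

```python
def tamper_watchdog(mesh_readings, keys):
    """Wipe every key the moment any mesh trace reads open.
    `mesh_readings` yields one list of booleans per poll (True = intact)."""
    for sample in mesh_readings:
        if not all(sample):
            for name in keys:
                keys[name] = b"\x00" * len(keys[name])   # zeroize in place
            return "tamper detected: keys erased"
    return "ok"

# Simulated polls: mesh intact twice, then one of 24 traces goes open.
readings = iter([[True] * 24, [True] * 24, [True] * 23 + [False]])
keys = {"disk_encryption": b"\xaa" * 32}
print(tamper_watchdog(readings, keys), keys["disk_encryption"][:4])
```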

The ORWL specs are what you would expect from a bare-bones desktop computer: an Intel Skylake mobile processor, Intel graphics, a choice of 4 or 8GB of RAM, and a 64 to 512GB SSD. WiFi, two USB-C ports, and an HDMI port provide all the connections to the outside world.

While this isn’t a computer for everyone, and it may never see a very large deployment, it is an interesting challenge. Physical security rules over all, and it would be very interesting to see what sort of attack can be performed on the ORWL to extract all the data hidden away behind the electronic mesh. Short of breaking the digital key hidden on the key fob, the best attack might just be desoldering the SSD’s flash chips and transplanting them into a platform more amenable to reading them.

In any event, ORWL is an interesting device if only for being one of the few desktop computers to tackle the problem of physical security. As with any computer, if you have physical access to a device, you have access to all the data on the device; we just don’t know how to get the data off one of these tiny computers.

Video below.

Continue reading “Physical Security For Desktop Computers”

Your Unhashable Fingerprints Secure Nothing

Passwords are crap. Nobody picks good ones, and when they do, they re-use them across sites; even a trustworthy password manager will get hacked too. But you know what’s worse than a password? A fingerprint. Fingerprints have enough problems that they should never be used anywhere a password would be.

Passwords are supposed to be secret, like the name of your childhood pet. In contrast, you carry your fingers around with you out in the open nearly everywhere you go. Passwords also need to be revocable. In the case that your password does get revealed, it’s great to be able to simply pick another one. You don’t want to have to revoke your fingers. Finally, and this is the kicker, you want your password to be hashable, in order to protect the password database itself from theft.
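That last point is worth making concrete. A server never needs to store your password, only a salted, slow hash of it, and verification is an exact byte-for-byte comparison; the Python sketch below uses scrypt for this. A fingerprint can’t play the same game: no two scans of the same finger produce exactly the same bytes, so the template has to be stored in a matchable (and therefore stealable) form.

```python
import hashlib, hmac, os

def hash_password(password: str, salt=None):
    """Return (salt, digest) using scrypt, a deliberately slow hash."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True: exact match
print(verify("correct horse battery stable", salt, digest))  # False: one character off
```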

In the rest of the article, I’ll make the case for each of these three points, and hopefully convince you that using a fingerprint in place of a password is even more broken than using a password in the first place. (You listening, Apple and Google? No, I didn’t think you were.)

Continue reading “Your Unhashable Fingerprints Secure Nothing”

Stegosploit: Owned By A JPG

We’re primarily hardware hackers, but every once in a while we see a software hack that really tickles our fancy. One such hack is Stegosploit, by [Saumil Shah]. Stegosploit isn’t really an exploit, so much as it’s a means of delivering exploits to browsers by hiding them in pictures. Why? Because nobody expects a picture to contain executable code.

[Saumil] starts off by packing the real exploit code into an image. He demonstrates that you can do this directly, by encoding characters of the code in the color values of the pixels. But that would look strange, so instead the code is delivered steganographically, by spreading the bits of the characters that make up the code across the least-significant bits of the pixel values in a JPG or PNG image.
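Here’s a minimal sketch of the LSB idea using Pillow, writing one payload bit into the least-significant bit of each color channel. It mirrors the general technique rather than [Saumil]’s actual tool; his bit layout differs, and hiding data in a JPG that survives lossy compression takes more care than this.

```python
from PIL import Image

def embed_lsb(cover_path: str, payload: bytes, out_path: str) -> None:
    """Hide `payload` in the least-significant bit of each RGB channel."""
    img = Image.open(cover_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in payload)
    out, i = [], 0
    for pixel in img.getdata():
        channels = []
        for value in pixel:
            if i < len(bits):
                value = (value & ~1) | int(bits[i])   # overwrite the LSB
                i += 1
            channels.append(value)
        out.append(tuple(channels))
    if i < len(bits):
        raise ValueError("cover image too small for this payload")
    stego = Image.new("RGB", img.size)
    stego.putdata(out)
    stego.save(out_path, "PNG")   # lossless output, so the LSBs survive
```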

OK, so the exploit code is hidden in the picture. Reading it out is actually simple: the HTML canvas element has a built-in getImageData() method that reads the (numeric) value of a given pixel. A little bit of JavaScript later, and you’ve reconstructed your code from the image. This is sneaky because there’s exploit code that’s now runnable in your browser, but your anti-virus software won’t see it because it wasn’t ever written out — it was in the image and reconstructed on the fly by innocuous-looking “normal” JavaScript.
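[Saumil]’s loader does this read-back with canvas and getImageData() in the browser; the Python below does the equivalent for the LSB layout in the sketch above, just to show how little work reconstruction is.

```python
from PIL import Image

def extract_lsb(stego_path: str, n_bytes: int) -> bytes:
    """Read back `n_bytes` hidden with embed_lsb() above."""
    img = Image.open(stego_path).convert("RGB")
    bits = []
    for pixel in img.getdata():
        for value in pixel:
            bits.append(value & 1)
            if len(bits) == n_bytes * 8:
                return bytes(
                    int("".join(map(str, bits[i:i + 8])), 2)
                    for i in range(0, n_bytes * 8, 8)
                )
    raise ValueError("image ended before the payload was recovered")
```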

And here’s the coup de grâce. By packing HTML and JavaScript into the header data of the image file, you can end up with a valid image (JPG or PNG) that will nonetheless be interpreted as HTML by a browser. The simplest way to do this is to send your file myPic.JPG from the webserver with a Content-Type: text/html HTTP header. Even though it’s a totally valid image file with an image file extension, the browser will treat it as HTML, render the page, and run the script it finds within.
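A toy server that performs the mislabeling might look like the sketch below; the filename and port are made up, and a site that sends X-Content-Type-Options: nosniff, or a browser with stricter MIME handling, could still refuse to render it.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PolyglotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # myPic.JPG is assumed to be a valid JPG whose header segments also
        # carry HTML/JavaScript, as described above.
        with open("myPic.JPG", "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")   # the mislabeling trick
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), PolyglotHandler).serve_forever()
```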

The end result of this is a single image that the browser thinks is HTML with JavaScript inside it, which displays the image in question and at the same time unpacks the exploit code that’s hidden in the shadows of the image and runs that as well. You’re owned by a single image file! And everything looks normal.

We like this because it combines two sweet tricks in one hack: steganography to deliver the exploit code, and “polyglot” files that can be read two ways, depending on which application is doing the reading. A quick tag-search of Hackaday will dig up a lot on steganography here, but polyglot files are a relatively new hack.

[Ange Albertini] is the undisputed master of packing one file type inside another, so if you want to get into the nitty-gritty of [Ange]’s style of “polyglot” file types, watch his talk on “Funky File Formats” (YouTube). You’ll never look at a ZIP file the same way again.

Sweet hack, right? Who says the hardware guys get to have all the fun?

iPhone Jailbreak Hackers Await $1M Bounty

According to Motherboard, some unspecified (software) hacker just won a $1 million bounty for an iPhone exploit. But this is no ordinary there’s-a-glitch-in-your-JavaScript bug bounty.

On September 21, “Premium” 0day startup Zerodium put out a call for a chain of exploits, starting in the browser, that enables the phone to be remotely jailbroken and arbitrary applications to be installed with root / administrator permissions. In short, a complete remote takeover of the phone. And they offered $1 million for it. A little over a month later, it looks like they’ve got their first claim, though the hack has yet to be verified and the payout has yet to be made.

But we have little doubt that the hack, if it’s actually been done, is worth the money. The NSA alone has a $25 million annual budget for buying 0days and usually spends that money on much smaller bits and bobs. This hack, if it works, is huge. And the NSA isn’t the only agency that’s interested in spying on folks with iPhones.

Indeed, by bringing something like this out into the open, Zerodium is creating a bidding war among (presumably) adversarial parties. We’re not sure about the ethics of all this (OK, it’s downright shady) but it’s not currently illegal and by pitting various spy agencies (presumably) against each other, they’re almost sure to get their $1 million back with some cream on top.

We’ve seen a lot of bug bounty programs out there. Tossing “firmname bug bounty” into a search engine of your choice will probably come up with a hit for most firmnames. A notable exception in Silicon Valley? Apple. They let you do their debugging work for free. How long this will last is anyone’s guess, but if this Zerodium deal ends up being for real, it looks like they’re severely underpaying.

And if you’re working on your own iPhone remote exploits, don’t be discouraged. Zerodium still claims to have money for two more $1 million payouts. (And with that your humble author shrugs his shoulders and turns the soldering iron back on.)