Unlocking SIM Cards With A Logic Analyzer

[Jason Gin] wanted to reuse the SIM card that came with a ZTE WF721 wireless terminal he got from AT&T, but as he expected, it was locked to the device. Unfortunately, the terminal has no function to change the PIN and none of the defaults he tried seemed to work. The only thing left to do was crack it open and sniff the PIN with a logic analyzer.

This project is a fantastic example of the kind of reverse engineering you can pull off with even a cheap logic analyzer and a keen eye, but also perfectly illustrates the fact that having physical access to a device largely negates any security measures the manufacturer tries to implement. [Jason] already knew what the SIM unlock command would look like; he just needed to capture the exchange between the WF721 and SIM card, find the correct byte sequence, and look at the bytes directly after it.
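
The PIN check on a GSM SIM is the VERIFY CHV command defined in GSM 11.11, which is almost certainly what [Jason] was watching for. Its on-wire layout is simple enough to pick out by eye; here's a quick JavaScript sketch of it, where the five-byte header comes straight from the spec but the "1234" PIN is purely a made-up example, not anything from his capture.

// GSM 11.11 VERIFY CHV1: CLA, INS, P1, P2 (CHV number), Lc (always 8 bytes).
const VERIFY_CHV1_HEADER = [0xa0, 0x20, 0x00, 0x01, 0x08];

// The data field is the PIN as ASCII digits, padded out to 8 bytes with 0xFF.
// "1234" is an invented example, not the PIN from the capture.
const pin = '1234';
const chvData = Array.from({ length: 8 }, (_, i) =>
  i < pin.length ? pin.charCodeAt(i) : 0xff
);

console.log(chvData.map(b => b.toString(16).padStart(2, '0')).join(' '));
// 31 32 33 34 ff ff ff ff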

Finding the test pads on the rear of the SIM slot, he wired his DSLogic Plus logic analyzer up to the VCC, CLK, RST, and I/O pins, then found a convenient place to attach his ground wire. After a bit of fiddling, he determined the SIM card was being run at 4 MHz, so he needed to configure a baud rate of 250 kbit/s to read the UART messages passing between the devices.
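
If the jump from a 4 MHz clock to a 250 kbit/s serial stream seems odd, ISO 7816-3 derives the bit period from the card clock through two negotiated integers, F and D. The arithmetic below is the standard formula; the F = 512, D = 32 pair is just one combination that happens to land on 250 kbit/s, an assumption on our part rather than something read out of [Jason]'s capture.

// ISO 7816-3: one bit lasts F/D clock cycles, so bit rate = f * D / F.
const bitRate = (clockHz, F, D) => (clockHz * D) / F;

console.log(Math.round(bitRate(4_000_000, 372, 1))); // 10753 bit/s with the default F and D
console.log(bitRate(4_000_000, 512, 32));            // 250000 bit/s with negotiated parameters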

Once he found the bytes that signified successful unlocking, he was able to work his way backwards and determine the unlock command and its PIN code. It turns out the PIN was even being sent over the wire in plain text, though with the way security is often handled these days, we can’t say it surprises us. All [Jason] had to do then was put the SIM in his phone and punch in the sniffed PIN when prompted.
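
As a rough picture of what that search looks like, here's a toy version in JavaScript, run over a hypothetical byte array rather than the real capture. The header constant is the GSM 11.11 VERIFY CHV1 command from earlier, and 90 00 is the card's standard success status word; a real T=0 trace also has the card's procedure byte echoed between header and data, which this sketch glosses over.

// Toy search: find the VERIFY CHV1 header, read the eight CHV bytes that
// follow, and confirm the 90 00 success status word afterwards.
function findPin(capture) {
  const header = [0xa0, 0x20, 0x00, 0x01, 0x08];
  for (let i = 0; i + header.length + 10 <= capture.length; i++) {
    if (!header.every((b, j) => capture[i + j] === b)) continue;
    const chv = capture.slice(i + 5, i + 13);
    const [sw1, sw2] = capture.slice(i + 13, i + 15);
    if (sw1 === 0x90 && sw2 === 0x00) {
      return chv.filter(b => b !== 0xff).map(b => String.fromCharCode(b)).join('');
    }
  }
  return null;
}

// Hypothetical capture fragment: command header, "1234" padded with 0xFF, then 90 00.
console.log(findPin([0xa0, 0x20, 0x00, 0x01, 0x08,
                     0x31, 0x32, 0x33, 0x34, 0xff, 0xff, 0xff, 0xff,
                     0x90, 0x00])); // "1234"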

Could [Jason] have just run out to the store and picked up a prepaid SIM instead of cracking open this wireless terminal and sniffing its communications with a logic analyzer? Of course. But where’s the fun in that?

36C3: All Wireless Stacks Are Broken

Your cellphone is the least secure computer that you own, and worse than that, it’s got a radio. [Jiska Classen] and her lab have been hacking on cellphones’ wireless systems for a while now, and in this talk she gives an overview of the wireless vulnerabilities and attack surfaces that they bring along. While the talk provides some basic background on wireless (in)security, it also presents two new areas of research that she and her colleagues have been working on over the last year.

One of the new hacks is based on the fact that a phone that wants to support both Bluetooth and WiFi needs to figure out a way to share the radio, because both protocols use the same 2.4 GHz band. And so it turns out that the Bluetooth hardware has to talk to the WiFi hardware, and it shouldn’t entirely surprise you that when [Jiska] gets into the Bluetooth stack, she’s able to DoS the WiFi. What this does to the operating system depends on the phone, but many of them just fall over and reboot.

Lately [Jiska] has been doing a lot of fuzzing on the cellphone stack, enabled by work on emulation by one of her students, [Jan Ruge], codenamed “Frankenstein”. The coolest thing here is that the emulation runs in real time and can be threaded into the operating system, enabling full-stack fuzzing. More complexity means more bugs, so we expect to see a lot more coming out of this line of research in the next year.

[Jiska] gives the presentation in a tinfoil hat, but that’s just a metaphor. In the end, when asked about how to properly secure your phone, she gives out the best advice ever: toss it in the blender.

36C3: Open Source Is Insufficient To Solve Trust Problems In Hardware

With open source software, we’ve grown accustomed to a certain level of trust that whatever we are running on our computers is what we expect it to actually be. Thanks to hashing and public key signatures at various points in the development and deployment cycle, it’s hard for a third party to modify source code or executables without us being able to easily spot it, even if it travels through untrustworthy channels.
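
For software, that check boils down to a few lines: hash the artifact you received and compare it against the digest the project published. Here's a minimal Node.js sketch of the idea, with a placeholder file name and digest rather than any real release.

// Minimal integrity check: hash what you downloaded, compare with what was published.
// "release.tar.gz" and the expected digest are placeholders, not real values.
const { createHash } = require('crypto');
const { readFileSync } = require('fs');

const expected = 'put-the-published-sha256-here';
const actual = createHash('sha256').update(readFileSync('release.tar.gz')).digest('hex');

console.log(actual === expected ? 'checksum matches' : 'tampered or corrupted');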

Unfortunately, when it comes to open source hardware, the number of steps and parties involved that are out of our control until we have a final product (production, logistics, distribution, even the customer) makes it substantially more difficult to achieve the same peace of mind. To make things worse, to actually validate the hardware at the chip level, you’d ultimately have to destroy it.

In his talk this year at the 36C3, [bunnie] gave a detailed insight into several attack vectors we could face during manufacturing. Skipping the obvious ones like adding or substituting components, he focuses on highly ambitious and hard-to-detect modifications inside an IC’s package, with wirebonded or through-silicon via (TSV) implants, down to modifying the netlist or mask of the integrated circuit itself. And these aren’t theoretical or “what if” scenarios, but actual, possible options; of course, some of them come with a certain price tag, but in the end, with the right motivation, money is only a detail.

Continue reading “36C3: Open Source Is Insufficient To Solve Trust Problems In Hardware”

Amazon Ring: Neighbors Leaking Data On Neighbors

For a while now a series of stories have been circulating about Amazon’s Ring doorbell, an Internet-connected camera and entry system that lets users monitor and even interact with visitors and delivery people at their doors. The adverts feature improbable encounters with would-be crooks foiled by the IoT-equipped homeowner, but the stories reveal a much darker side. From reports of unhindered access by law enforcement to privately-held devices, through mass releases of compromised Ring account details, to attackers gaining access to children via compromised cameras, it’s fair to say that there’s much to be concerned about.

One cause for concern has been the location data exposed by the associated Amazon Neighbors crowd-sourced local crime paranoia app, and for those of us who don’t live and breathe information security there is an easy-to-understand Twitter breakdown of its vulnerabilities from [Elliot Alderson] that starts with the app itself and proceeds from there into compromising Ring accounts by finding their passwords. We find that supposedly anonymized information in the app sits atop an API response with full details, that there’s no defense against brute-forcing a Ring password, and that a tasty list of API and staging URLs is there for all to see embedded within the app. Given all that information, there’s little wonder that the system has proven to be so vulnerable.

As traditional appliance makers have struggled with bringing Internet connectivity into their products, there have been a few stories of woeful security baked into millions of homes. A defense could be made that a company with roots outside the Internet can be forgiven for such a gaffe, but in the case of Amazon, whose history has followed that of mass Web adoption and whose infrastructure lies behind so many of the services we trust, this level of lax security is unforgivable. Hackaday readers will be aware of the security issues behind so-called “smart” devices, but to the vast majority of customers they are simply technological wonders that are finally delivering a Jetsons-style future. If some good comes of these Ring stories it might be that those consumers finally begin to wake up to IoT security, and use their new-found knowledge to demand better.

Header image: Ring [CC BY-SA 4.0]

Kerry Scharfglass Secures Your IoT Things

We’ve all seen the IoT device security trainwrecks: those gadgets that fail so spectacularly that the comment section lights up with calls of “were they even thinking about the most basic security?” No, they probably weren’t. Are you?

Hackaday Contributor and all-around good guy Kerry Scharfglass thinks about basic security for a living, and his talk is pitched at the newcomer to device security. (Embedded below.) Of course “security” isn’t a one-size-fits-all proposition; you need to think about what threats you’re worried about, which you can ignore, and defend against what matters. But if you’ve never worked through such an exercise, you’re in for a treat here. You need to think like a maker, think like a breaker, and surprisingly, think like an accountant in defining what constitutes acceptable risks. Continue reading “Kerry Scharfglass Secures Your IoT Things”

This Week In Security: Unicode, Truecrypt, And NPM Vulnerabilities

Unicode, the wonderful extension to ASCII that gives us gems like “✈”, “⌨”, and “☕”, has had some unexpected security ramifications. The most common problems with Unicode are visual: confusion between characters that look alike. For example, the English “M” (U+004D) is indistinguishable from the Cyrillic “М” (U+041C). Can you tell the difference between IBM.com and IBМ.com?
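
Your eyes can't, but a string comparison can; the only reliable tell is the code points themselves. A quick check in JavaScript makes the swap obvious:

// The two strings render identically in most fonts, but the third character
// of the lookalike is the Cyrillic "М" (U+041C), not the Latin "M" (U+004D).
const real = 'IBM.com';
const lookalike = 'IBМ.com';

console.log(real === lookalike); // false
console.log([...lookalike].map(c => c.codePointAt(0).toString(16)));
// [ '49', '42', '41c', '2e', '63', '6f', '6d' ]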

This bug, discovered by [John Gracey], turns the common problem on its head. Properly referred to as a case mapping collision, it’s the story of different Unicode characters getting mapped to the same upper- or lowercase equivalent.

// Different strings, identical uppercase form: a case mapping collision
'ß'.toUpperCase() === 'SS'.toUpperCase() // true
// Note the Turkish dotless i in the first address
'John@Gıthub.com'.toUpperCase() === 'John@Github.com'.toUpperCase() // true

GitHub stores all email addresses in their lowercase form. When a user requests a password reset, GitHub’s logic worked like this: take the email address that requested the reset, convert it to lowercase, and look up the account that uses the converted address. That by itself wouldn’t be a problem, but the reset is then sent to the email address that was requested, not the one on file. In retrospect, this is an obvious flaw, but without the presence of Unicode and the possibility of a case mapping collision, it would be a perfectly safe practice.
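
Here's a simplified model of that flow, with a stand-in user store and mailer rather than anything from GitHub's codebase. The lookalike address uses the Kelvin sign "K" (U+212A), another character that case-collides, since it lowercases cleanly to "k":

// Simplified model of the flawed flow (not GitHub's actual code).
// The bug: the reset goes to the attacker-supplied string, not the stored address.
const users = new Map([['kate@example.com', { login: 'kate' }]]);
const sendResetEmail = (to, account) =>
  console.log(`reset link for ${account.login} sent to: ${to}`);

function requestPasswordReset(requestedEmail) {
  const account = users.get(requestedEmail.toLowerCase()); // collision happens here
  if (account) sendResetEmail(requestedEmail, account);    // should be the address on file
}

requestPasswordReset('\u212Aate@example.com'); // "Kate@example.com" with a Kelvin sign K

The fix is as simple as the bug: always send the reset to the address stored on the account, never to the one typed into the form.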

This flaw seems to have been fixed quite some time ago, but was only recently disclosed. It’s also a novel Unicode problem that we haven’t covered before. Interestingly, my research turned up an almost identical problem at Spotify, back in 2013.
Continue reading “This Week In Security: Unicode, Truecrypt, And NPM Vulnerabilities”

This Week In Security: VPNs, Patch Tuesday, And Plundervolt

An issue in Unix virtual private networks was disclosed recently, where an attacker could potentially hijack a TCP stream, even though that stream is inside the VPN. This attack affects OpenVPN, WireGuard, and even IPsec VPNs. How was this possible? Unix systems support all manner of different network scenarios, and oftentimes a misconfiguration can lead to problems. Here, packets sent to the VPN’s IP address are processed and responded to, even though they are coming in over a different interface.
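
To make the misconfiguration concrete, here's a toy model of the decision the victim's network stack is making; the interface names and addresses are invented, and this is nothing like actual kernel code. A strict host only answers for addresses bound to the interface a packet arrived on, while the weak-host behaviour exploited here answers for any local address, which is what lets someone on the LAN probe for the tunnel IP.

// Toy model, not kernel code: should a packet for dstAddr, arriving on iface, be answered?
const addresses = { wlan0: '192.168.1.17', tun0: '10.8.0.5' }; // invented example addresses

function accepts(dstAddr, iface, strictHost) {
  if (strictHost) return addresses[iface] === dstAddr;  // only the arriving interface's address
  return Object.values(addresses).includes(dstAddr);    // weak host: any local address will do
}

console.log(accepts('10.8.0.5', 'wlan0', false)); // true: the reply leaks the VPN's virtual IP
console.log(accepts('10.8.0.5', 'wlan0', true));  // false: strict filtering gives nothing away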

The attack initially sounds implausible, as an attacker has to know the virtual IP address of the VPN client, the remote IP address of an active TCP connection, and the sequence and ACK numbers of that connection. That’s a lot of information, but an attacker can figure it out one piece at a time, making it a plausible attack. Continue reading “This Week In Security: VPNs, Patch Tuesday, And Plundervolt”