France Proposes Software Security Liability For Manufacturers, Open Source As Support Ends

It sometimes seems as though barely a week can go by without yet another major software-related hardware vulnerability story. As manufacturers grapple with the demands of no longer building simple appliances but instead supplying devices containing software that may be exposed to the world over the Internet, we see products shipped with insecure firmware and little care for its support or updating after the sale.

The French government have a proposal to address this problem that may be of interest to our community: make manufacturers liable for the security of a product while it is on the market, with the possibility of requiring its software to be made open-source at end-of-life. In the first instance it can only be a good thing for device security to be put at the top of a manufacturer’s agenda, and in the second the ready availability of source code would present reverse engineers with a bonanza.

It’s worth making the point that this is a strategy document: what it contains are proposals, not laws. As a 166-page French-language PDF it’s a long read for any Francophones among you, and it contains many other aspects of the French take on cybersecurity. But it’s important, because it shows the likely direction that France intends to take on this issue within the EU. At an EU level this could then represent a globally significant move that would affect products sold far and wide.

What do we expect to happen in reality, though? It would be nice to think that security holes in consumer devices would be neutralised overnight and that we’d then have source code for a load of devices, but we’d reluctantly have to say we’ll believe it when we see it. It is more likely that manufacturers will fight it tooth and nail, and given some recent stories about devices being bricked by software updates at the end of support, we could even see many of them willingly consigning their products to the e-waste bins rather than complying. We’d love to be proven wrong, but perhaps we’re too used to such stories. Either way, this will be an interesting story to watch, and we’ll keep you posted.

Merci beaucoup [Sebastien] for the invaluable French-language help.

French flag: Wox-globe-trotter [Public domain].

Cell Phone Surveillance Car

There are many viable options for home security systems, but where is the fun in watching a static camera feed from inside your place? The freedom to really look around might have been what compelled [Varun Kumar] to build a security car robot to drive around his place and make sure all is in order.

Aiming for cost-effectiveness and WiFi or internet accessibility, [Kumar] built the robot around an Android smartphone — skipping the need for a separate Bluetooth or WiFi module — backed up by an Arduino Uno, an L298 motor controller, and two geared DC motors powering the wheels.

Further taking advantage of the phone’s functionality, the robot is controlled by DTMF tones: the DTMF Tone Generator app outputs commands through the phone’s 3.5 mm jack, and an MT8870DE DTMF decoder module interprets them for the Arduino. While this control method carries some risks — as with many IoT-like devices — [Kumar] has mitigated one of DTMF’s vulnerabilities by requiring a PIN before the security car will accept any commands.
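For a concrete picture of how the decoder-plus-PIN gate might fit together, here is a minimal Arduino sketch. To be clear, this is our illustration rather than [Kumar]’s firmware: the pin assignments, the ‘1234’ unlock sequence, and the key-to-motion mapping are all placeholder assumptions.

    // Illustrative sketch only -- pin numbers, PIN code, and key mapping are
    // assumptions, not taken from [Kumar]'s build.
    const uint8_t DTMF_Q[4]  = {2, 3, 4, 5};   // MT8870 data outputs Q1..Q4
    const uint8_t DTMF_STD   = 6;              // StD: high while a valid tone is latched
    const uint8_t L298_IN[4] = {7, 8, 9, 10};  // L298 IN1..IN4, one H-bridge per motor

    const char PIN_CODE[] = "1234";            // placeholder unlock sequence
    uint8_t pinPos = 0;
    bool unlocked = false;

    void setup() {
      for (uint8_t i = 0; i < 4; i++) {
        pinMode(DTMF_Q[i], INPUT);
        pinMode(L298_IN[i], OUTPUT);
      }
      pinMode(DTMF_STD, INPUT);
    }

    // Decode the 4-bit value latched by the MT8870 into a keypad character.
    char readKey() {
      uint8_t v = 0;
      for (uint8_t i = 0; i < 4; i++) v |= digitalRead(DTMF_Q[i]) << i;
      if (v >= 1 && v <= 9) return '0' + v;
      if (v == 10) return '0';                 // the MT8870 encodes '0' as 1010
      return '?';                              // *, #, A-D unused here
    }

    void drive(bool a, bool b, bool c, bool d) {
      digitalWrite(L298_IN[0], a); digitalWrite(L298_IN[1], b);
      digitalWrite(L298_IN[2], c); digitalWrite(L298_IN[3], d);
    }

    void loop() {
      if (!digitalRead(DTMF_STD)) return;      // no tone currently latched
      char key = readKey();
      while (digitalRead(DTMF_STD)) {}         // wait for the tone to end

      if (!unlocked) {                         // ignore motion keys until the PIN is in
        pinPos = (key == PIN_CODE[pinPos]) ? pinPos + 1 : 0;
        unlocked = (pinPos == sizeof(PIN_CODE) - 1);
        return;
      }
      switch (key) {
        case '2': drive(1, 0, 1, 0); break;    // forward
        case '8': drive(0, 1, 0, 1); break;    // reverse
        case '4': drive(0, 1, 1, 0); break;    // pivot left
        case '6': drive(1, 0, 0, 1); break;    // pivot right
        default:  drive(0, 0, 0, 0); break;    // anything else stops the car
      }
    }

The important design point is the gate: motion keys aren’t even interpreted until the whole PIN sequence has been heard, which is what keeps a stranger with a tone dialer from joyriding the thing.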

He obtains a live video feed from the phone using AirDroid in concert with a VNC server, and a servo motor lets the phone sweep left and right for a better look. A VNC client on [Kumar]’s laptop accesses the video feed and issues commands. Check it out in action after the break!


Seek And Exploit Security Vulnerabilities In An Infusion Pump

Infusion pumps and other medical devices are not your typical everyday, off-the-shelf embedded systems. Best case scenario, you will rarely, if ever, come across one in your life. So when it comes to widespread exploitation, chances are they simply seem too exotic for anyone to bother exploring their weaknesses. Yet their impact on a person’s well-being makes potential security holes tremendously more severe, should someone decide to bother one day after all.

[Scott Gayou] is one of those someones, and he didn’t shy away from spending hundreds of hours of his free time inspecting the Smiths Medical Medfusion 4000 infusion pump for possible security vulnerabilities. Considering different angles for his threat model, he started with the physical handling of the device’s user interface. This allowed him to enable the external communication protocol settings, which in turn opened up the device’s FTP and Telnet ports. Not to give too much away, but he manages to gain access to both the file system content and — as a result — to the system’s login credentials. This alone can clearly be considered a success, but for [Scott], it merely opened a door that eventually led to desoldering the memory chips to reverse engineer the bootloader and firmware, and ultimately to executing his own code on the device.

Understanding the implications of his discoveries, [Scott] held off on publishing his research long enough for the manufacturer to address these security issues. So kudos to him for fighting the good fight. And just in case the thought of someone gaining control over a machine that is crucial to your vitality doesn’t scare you enough yet, go ahead and imagine that device was actually implanted in your body.

Joykill: Previously Undisclosed Vulnerability Endangers User Data

Researchers have recently announced a vulnerability in PC hardware enabling attackers to wipe the disk of a victim’s computer. This vulnerability, going by the name Joykill, stems from the lack of proper validation when enabling manufacturing system tests.

Joykill affects the IBM PCjr and allows local and remote attackers to destroy the contents of the floppy diskette using minimal interaction. The attack is performed by plugging two joysticks into the PCjr, booting the computer, entering the PCjr’s diagnostic mode, and immediately pressing button ‘B’ on joystick one, and buttons ‘A’ and ‘B’ on joystick two. This will enable the manufacturing system test mode, where all internal tests are performed without user interaction. The first of these tests is the diskette test, which destroys all user data on any inserted diskette. There is no visual indication of what is happening, and the data is destroyed when the test is run.

A local exploit destroying user data is scary enough, but after much work, the researchers behind Joykill have also managed to craft a remote exploit based on Joykill. To accomplish this, the researchers built two IBM PCjr joysticks with 50-meter long cables.

Researchers believe this exploit is due to undocumented code in the PCjr’s ROM. This region contains diagnostics for manufacturing burn-in, system tests, and service tests. It is not meant to be run by the end user, but is still exploitable by an attacker. Researchers have disassembled this code and made their work available to anyone.

At the time of this writing, we were not able to contact anyone at the IBM PCjr Information Center for comment. We did, however, receive an exciting offer for a Caribbean cruise.

Apple Passwords: They All ‘Just Work’

From the Macintosh’s release some thirty-odd years ago to Steve Jobs’ triumphant return in the late 90s, there was one phrase to describe the simplicity of using a Mac: ‘It Just Works’. Whether this was a reference to the complete lack of games on the Mac (Marathon shoutout, tho) or a statement about the user-friendliness of the Mac, one thing is now apparent. Apple has improved macOS to such a degree that all passwords just work. That is to say, security on the latest versions of macOS is abysmal, and every few weeks a new bug is reported.

The first such security vulnerability in macOS High Sierra was reported by [Lemi Ergin] on Twitter. Simply put, anyone could log in as root with an empty password after clicking the login button several times. The steps to reproduce were as simple as opening System Preferences, clicking the lock to make changes, typing ‘root’ in the username field, and clicking the Unlock button. It should go without saying that this is incredibly insecure, and although it is only a local exploit, it’s a mind-numbingly idiotic one. The issue was quickly fixed by Apple in Security Update 2017-001.

The most recent password flaw comes in the form of App Store preferences that can be unlocked with any password. The steps to reproduce on macOS High Sierra are simply:

  • Click on System Preferences
  • Click on App Store
  • Click the padlock icon
  • Enter your username and any password
  • Click unlock

This issue has been fixed in the beta of macOS 10.13.3, which should be released within a month. The bug does not exist in macOS Sierra version 10.12.6 or earlier.

This is the second bug in macOS in as many months where passwords just work. Or don’t work, depending on how cheeky you want to be. While these bugs have been overshadowed with recent exploits of Intel’s ME and a million blog posts on Meltdown, these are very, very serious bugs that shouldn’t have happened in the first place. And, where there are two, there’s probably more.

We don’t know what’s up with the latest version of the macOS and the password problems, but we are eagerly awaiting the Medium post from a member of the macOS team going over these issues. We hope to see that in a decade or two.

34C3: Fitbit Sniffing And Firmware Hacking

If you walked into a gym and asked to sniff exercise equipment you would get some mighty strange looks. If you tell hackers you’ve sniffed a Fitbit, you might be asked to give a presentation. [Jiska] and [DanielAW] were not only able to sniff Bluetooth data from a run-of-the-mill Fitbit fitness tracker, they were also able to connect to the hardware with data lines using test points etched right on the board. Their Fitbit sniffing talk at 34C3 can be seen after the break. We appreciate their warning that opening a Fitbit will undoubtedly void your warranty since Fitbits don’t fare so well after the sealed case is cracked. It’s all in the name of science.

There’s some interesting background on how Fitbits generally work. For instance, the Fitbit pairs with your phone, which needs to be validated with the cloud server. But once the cloud server sends back authentication credentials they will never change, because they’re bound to the device ID of the Fitbit. This process is vulnerable to replay attacks.

Data being sent between the Fitbit and the phone can be encrypted, but there is a live mode that sends the data as plain text. The implementation seems to be security through obscurity, as a different Bluetooth handle is used for this mode. The technique avoids the need to send every encrypted packet to the server for decryption (which would otherwise happen for every heartbeat packet). So far the fix for this has been the ability to disable live mode. If you have your own Fitbit to play with, sniffing live mode would be a fun place to start.

The hardware side of this hack begins by completely removing the PCB from the rubber case. The board is running an STM32, and the team wanted to get deep access by enabling GDB. Unfortunately, the debug pins were only enabled during reset, and the stock firmware disables them at startup (as it should). The workaround was to rewrite the firmware so that the necessary GPIO remain active, and there’s an interesting approach here. You may remember [Daniel Wegemer] from the Nexmon project that reverse engineered the Nexus 5 WiFi. He leveraged the binary patching techniques he developed for Nexmon to patch the Fitbit firmware and enable debugging support. Sneaky!
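The talk doesn’t detail the exact STM32 part or the offending instructions, but a sketch shows how small the target of such a patch can be. On an STM32F1-series device, for instance, locking out the debugger is a single register write, so the binary patch amounts to finding and neutering something like the following (our reconstruction, assuming an F1 part and stock CMSIS headers, not the Fitbit’s actual code):

    #include "stm32f1xx.h"  // CMSIS device header; assumes an F1-series part

    // What security-minded stock firmware typically does early at startup:
    // reassign the SWD/JTAG pins as ordinary GPIO so a debugger can no longer attach.
    void lock_out_debugger(void) {
      RCC->APB2ENR |= RCC_APB2ENR_AFIOEN;         // clock the AFIO block
      AFIO->MAPR   |= AFIO_MAPR_SWJ_CFG_DISABLE;  // SWJ_CFG = 100: JTAG-DP and SW-DP both off
    }

    // A patch in the spirit of the one described here would NOP out that MAPR
    // write (or downgrade it to AFIO_MAPR_SWJ_CFG_JTAGDISABLE, which kills JTAG
    // but leaves SW-DP alive), so the debug port stays up and GDB can attach
    // over SWD after boot.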

For more about 34C3 we have a cheatsheet of the first day and for more about Fitbit security, check out this WAV file.


34C3: Hacking Into A CPU’s Microcode

Inside every modern CPU since the Intel Pentium fdiv bug, assembly instructions aren’t a one-to-one mapping to what the CPU actually does. Inside the CPU, there is a decoder that turns assembly into even more primitive instructions that are fed into the CPU’s internal scheduler and pipeline. The code that drives the decoder is the CPU’s microcode, and it lives in ROM that’s normally inaccessible. But microcode patches have been deployed in the past to fix up CPU hardware bugs, so it’s certainly writeable. That’s practically an invitation, right? At least a group from the Ruhr University Bochum took it as such, and started hacking on the microcode in the AMD K8 and K10 processors.

The hurdles to playing around in the microcode are daunting. The decoder turns assembly language into something, but the instruction set that the inner CPU, ALU, et al. use was completely unknown. [Philip] walked us through their first line of attack, which was essentially guessing in the dark. First they mapped out where each x86 assembly instruction went in the microcode ROM. Using this information, and the ability to update the microcode, they could load and execute arbitrary microcode. They still didn’t know anything about the microcode itself, but they knew how to run it.
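The update mechanism itself is no secret: Linux’s stock microcode driver uses it. On AMD parts, a patch image is handed to the CPU by writing its address into a model-specific register, so the kernel-side load boils down to a few lines of C. This sketch uses the same MSR constants as the Linux driver; building the patch image itself (header plus hand-assembled microcode triads) is the hard part, and is omitted here.

    #include <linux/module.h>
    #include <asm/msr.h>

    /* Hand the CPU a microcode patch image and confirm it took. Kernel-module
     * context assumed; the image must follow the K8/K10 patch format. */
    static void load_patch(void *patch_image)
    {
        u64 level;

        /* The CPU fetches the patch from this address. */
        wrmsrl(MSR_AMD64_PATCH_LOADER, (u64)(unsigned long)patch_image);

        /* Read back the running patch level to verify it was accepted. */
        rdmsrl(MSR_AMD64_PATCH_LEVEL, level);
        pr_info("microcode patch level now %#llx\n", level);
    }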

So they started uploading random microcode to see what it did. This random microcode crashed almost every time. The rest of the time, there was no difference between the input and output states. But then, after a week of running, a breakthrough: the microcode XOR’ed. From this, they found out the syntax of the command and began to discover more commands through trial and error. Quite late in the game, they went on to take the chip apart and read out the ROM contents with a microscope and OCR software, at least well enough to verify that some of the microcode operations were burned in ROM.

The result was 29 microcode operations including logic, arithmetic, load, and store commands — enough to start writing microcode programs. The first microcode programs written helped with further discovery, naturally. But before long, they wrote microcode backdoors that trigger when a given calculation is performed, and stealthy trojans that exfiltrate data either encrypted or “undetectably” by programmatically introducing faults into calculations. This means nearly undetectable malware that’s resident inside the CPU. (And you think the Intel Management Engine hacks made you paranoid!)

[Benjamin] then bravely stepped us through the browser-based attack live, first in a debugger where we could verify that their custom microcode was being triggered, and then outside of the debugger where suddenly xcalc popped up. What launched the program? Calculating a particular number on a website from inside an unmodified browser.

He also demonstrated the introduction of a simple mathematical error into the microcode that made an encryption routine fail when another particular multiplication was done. While this may not sound like much, if you paid attention in the talk on revealing keys based on a single infrequent bit error, you’d see that this is essentially a few million times more powerful because the error occurs every time.
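To see why, consider a toy version of the classic Bellcore fault attack on RSA-CRT signatures (our illustration; the talk doesn’t name the exact routine attacked). A CRT implementation computes the signature in two halves, mod p and mod q, and recombines them; if a trojan corrupts just one half, a single gcd of the faulty output spills the private key’s factorization:

    #include <cstdint>
    #include <cstdio>
    #include <numeric>   // std::gcd (C++17)

    // Toy Bellcore-style fault attack on RSA-CRT signing. Parameters are tiny
    // on purpose; this illustrates the principle, not the routine from the talk.

    uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
      uint64_t r = 1; b %= m;
      for (; e; e >>= 1) { if (e & 1) r = r * b % m; b = b * b % m; }
      return r;
    }

    // Modular inverse via extended Euclid; assumes gcd(a, m) == 1.
    uint64_t invmod(int64_t a, int64_t m) {
      int64_t t = 0, nt = 1, r = m, nr = a;
      while (nr) {
        int64_t q = r / nr, tmp = t - q * nt;
        t = nt; nt = tmp;
        tmp = r - q * nr; r = nr; nr = tmp;
      }
      return (uint64_t)(t < 0 ? t + m : t);
    }

    int main() {
      const uint64_t p = 1009, q = 1013, N = p * q, e = 17;
      const uint64_t d = invmod(e, (p - 1) * (q - 1));  // private exponent
      const uint64_t m = 42;                            // toy message

      // CRT signing: two half-size exponentiations, then recombination.
      uint64_t sp = powmod(m, d % (p - 1), p);
      uint64_t sq = powmod(m, d % (q - 1), q);
      sq ^= 1;  // <-- the trojan fires: a single-bit fault in one half only

      uint64_t h = invmod(q % p, p) * ((sp + p - sq % p) % p) % p;
      uint64_t s = (sq + q * h) % N;                    // faulty signature

      // s is still correct mod p but wrong mod q, so one gcd factors N:
      uint64_t diff = (powmod(s, e, N) + N - m) % N;
      printf("recovered factor: %llu (p = %llu)\n",
             (unsigned long long)std::gcd(diff, N), (unsigned long long)p);
    }

Because the microcode trojan fires deterministically on its trigger value, the attacker needs exactly one faulty signature rather than waiting around for a once-in-a-blue-moon glitch.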

The team isn’t done with their microcode explorations, and there’s still a lot more of the command set left to discover. So take this as a proof of concept that nearly undetectable trojans could exist in the microcode that runs between the compiled code and the CPU on your machine. But, more playfully, it’s also an invitation to start exploring yourself. It’s not every day that an entirely new frontier in computer hacking is busted open.