Neutralizing Intel’s Management Engine

Five or so years ago, Intel rolled out something horrible. Intel’s Management Engine (ME) is a completely separate computing environment running on Intel chipsets that has access to everything. The ME has its own network access and can reach the host operating system, system memory, and the cryptography engine. The ME can be used remotely even if the PC is powered off. If that sounds scary, it gets even worse: no one knows what the ME is doing, and we can’t even look at the code. When — not ‘if’ — the ME is finally cracked open, every computer running on a recent Intel chip will have a huge security and privacy issue. Intel’s Management Engine is the single most dangerous piece of computer hardware ever created.

Researchers are continuing work on deciphering the inner workings of the ME, and we sincerely hope this Pandora’s Box remains closed. In the meantime, there’s a new way to disable Intel’s Management Engine.

Previously, the first iteration of the ME, found in GM45 chipsets, could be removed entirely. That was only possible because the ME lived on a chip separate from the northbridge. For Core i3/i5/i7 processors, the ME is integrated into the northbridge. Until now, efforts to disable an ME this closely coupled to the CPU have failed. Completely removing the ME from these systems is impossible, but disabling parts of it is not. There is one caveat: if the ME’s boot ROM does not find a valid Intel signature in the firmware stored in the SPI flash, the PC will shut down after 30 minutes.

A few months ago, [Trammell Hudson] discovered that erasing the first page of the ME region did not shut down his Thinkpad after 30 minutes. This led [Nicola Corna] and [Federico Amedeo Izzo] to write a script that uses this exploit. Effectively, the ME still thinks it’s running, but it doesn’t actually do anything.
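
As a rough sketch of the core idea (this is not the actual me_cleaner script, and the region offsets below are hypothetical; in practice they are read out of the flash descriptor), the operation boils down to: take a dump of the SPI flash, blank the first page of the ME region, and write the modified image back for reflashing.

```
# Minimal sketch of the exploit's core step: blank the first 4 KiB page of
# the ME region in a dumped SPI flash image. NOT the actual me_cleaner code;
# the ME region offset/size are assumed to have been read from the flash
# descriptor beforehand (the values below are hypothetical placeholders).

PAGE_SIZE = 0x1000          # one SPI flash page
ME_REGION_OFFSET = 0x3000   # assumption: taken from the flash descriptor
ME_REGION_SIZE = 0x5FD000   # assumption: taken from the flash descriptor

def blank_me_first_page(dump_in: str, dump_out: str) -> None:
    """Erase (0xFF-fill) the first page of the ME region in a flash dump."""
    with open(dump_in, "rb") as f:
        image = bytearray(f.read())

    if ME_REGION_OFFSET + ME_REGION_SIZE > len(image):
        raise ValueError("ME region lies outside the image: check the offsets")

    # An erased SPI flash page reads back as all 0xFF bytes.
    image[ME_REGION_OFFSET:ME_REGION_OFFSET + PAGE_SIZE] = b"\xff" * PAGE_SIZE

    with open(dump_out, "wb") as f:
        f.write(image)

if __name__ == "__main__":
    blank_me_first_page("flash_dump.bin", "flash_dump_no_me.bin")
```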

With a BeagleBone, an SOIC-8 chip clip, and a few breakout wires, this script will run and effectively disable the ME. The exploit has only been confirmed to work on Sandy Bridge and Ivy Bridge processors; it should also work on Skylake, while Haswell and Broadwell remain untested.

Separating or disabling the ME from the CPU has been a major focus of the libreboot and coreboot communities. The inability to do so has, until now, made the future prospects of truly free computing platforms grim. The ME is in everything, and CPUs without an ME are getting old. Even though we don’t have the ability to remove the ME, disabling it is the next best thing.

PoisonTap Makes Raspberry Pi Zero Exploit Locked Computers

[Samy Kamkar], leet haxor extraordinaire, has taken a treasure trove of exploits and backdoors and turned it into a simple hardware device that hijacks all network traffic, enables remote access, and does it all while a machine is locked. It’s PoisonTap, and it’s based on the Raspberry Pi Zero for all that awesome tech blog cred we crave so much.

PoisonTap takes a Raspberry Pi Zero and configures it as a USB gadget, emulating a network device. When this Pi-turned-USB-to-Ethernet adapter is plugged into a computer (even a locked one), the computer sends out a DHCP request, and PoisonTap responds by telling the machine the entire IPv4 space is part of the Pi’s local network. All Internet traffic on the locked computer is then sent over PoisonTap, and if a browser is running on the locked computer, all requests are sent to this tiny exploit device.
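
Why does claiming the whole IPv4 space as “local” beat the machine’s real connection? Routing tables prefer the most specific matching route, so anything that covers the Internet in pieces narrower than the /0 default gateway wins. The sketch below illustrates that principle using the classic “two /1 halves” trick that VPN clients use for the same purpose; the interface names and the exact DHCP options PoisonTap actually sends are in [Samy]’s write-up, so treat these values as illustrative only.

```
# Illustration of route preference, not PoisonTap's actual code: the OS picks
# the most specific matching route, so two /1 routes pointing at the fake USB
# NIC beat the real default route (/0) for every Internet destination.
# Interface names and addresses below are made up for the example.

import ipaddress

routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "wlan0 (real default gateway)"),
    (ipaddress.ip_network("0.0.0.0/1"), "usb0 (PoisonTap)"),
    (ipaddress.ip_network("128.0.0.0/1"), "usb0 (PoisonTap)"),
]

def pick_route(dst: str) -> str:
    """Longest-prefix match, the way a routing table chooses an interface."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in routes if dst_ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_route("93.184.216.34"))   # -> usb0 (PoisonTap)
print(pick_route("172.217.4.78"))    # -> usb0 (PoisonTap)
```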

With all network access going through PoisonTap, cookies are siphoned off, and the browser cache is poisoned with an exploit providing a WebSocket to the outside world. Even after PoisonTap is unplugged, an attacker can remotely send commands to the target computer and force the browser to execute JavaScript. From there, it’s all pretty much over.
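
As a rough illustration of what “poisoning the cache” means (this is not PoisonTap’s actual server, and the header values are just plausible choices): while traffic is flowing through the device, a request for a script can be answered with attacker-controlled JavaScript plus caching headers that keep the browser serving that copy long after the hardware is unplugged.

```
# Toy illustration of HTTP cache poisoning (not PoisonTap's actual server):
# answer any request for a .js file with attacker-controlled JavaScript and
# caching headers that tell the browser to keep serving that copy for a year.

from http.server import BaseHTTPRequestHandler, HTTPServer

BACKDOOR_JS = b"/* attacker-controlled script would open a control channel here */"

class PoisoningHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.endswith(".js"):
            self.send_response(200)
            self.send_header("Content-Type", "application/javascript")
            # A one-year lifetime keeps the poisoned copy in the browser cache.
            self.send_header("Cache-Control", "public, max-age=31536000")
            self.send_header("Content-Length", str(len(BACKDOOR_JS)))
            self.end_headers()
            self.wfile.write(BACKDOOR_JS)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), PoisoningHandler).serve_forever()
```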

Of course, any device designed to plug into a USB port and run a few exploits has a few limitations. PoisonTap only works if a browser is running. PoisonTap does not work on HTTPS cookies with the Secure cookie flag set. PoisonTap does not work if you have filled your USB ports with epoxy. There are a thousand limitations to PoisonTap, all of which probably don’t apply if you take PoisonTap into any office, plug it into a computer, and walk away. That is, after all, the point of this exploit.

As with all ub3r-1337 pen testing tools, we expect to see a version of PoisonTap for sale next August in the vendor area of DEF CON. Don’t buy it. A Raspberry Pi Zero costs $5, a USB OTG cable less than that, and all the code is available on GitHub. If you buy a device like PoisonTap, you are too technically illiterate to use it.

[Samy] has a demonstration of PoisonTap in the video below.


A Linux Exploit That Uses 6502 Code

With ubiquitous desktop computing now several decades old, anyone creating an operating system distribution faces a backwards compatibility problem. Each upgrade brings its own set of new features, but it must maintain compatibility with the features of the previous versions or risk alienating users. If you are a critic of Microsoft products for their bloat, this is one of the factors behind that particular issue.

As well as a problem of compatibility, this extra software overhead creates one of security. A piece of code descended from a DOS word processor of the 1980s, for example, was not originally created with any idea that it might one day be hiding in a library on a machine visible to the entire world over the Internet. Our subject today is a good example: just such a vulnerability hiding in an old piece of code whose purpose is to maintain an obscure piece of backward compatibility. [Chris Evans] has demonstrated a vulnerability in an Ubuntu version by playing an NES music file that contains exploit code emulated by the player on a virtual 6502 processor.

The NES Sound Format is a music file standard that packages Nintendo game music for playback. It contains the tune’s original 6502 code (its init and play routines), and it is this code that triggers the vulnerability. When you open an NSF file on the affected Ubuntu system it finds its way via your music player and the gstreamer multimedia framework to libgstnsf.so, a gstreamer plugin for playing NSF files.
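
For the curious, here’s roughly what the player sees when it opens one of these files: a magic string, a song count, and three little-endian 16-bit addresses telling the emulated 6502 where the tune’s code is loaded and which routines to call to initialize and play it. The sketch below follows the commonly documented NSF 1.x header layout, not anything pulled from libgstnsf itself.

```
# Minimal NSF header parser, field offsets per the public NSF spec.
# The load/init/play addresses point at the tune's own 6502 routines,
# which the player then executes on its emulated CPU.

import struct

def parse_nsf_header(data: bytes) -> dict:
    if data[:5] != b"NESM\x1a":
        raise ValueError("not an NSF file")
    version, total_songs, start_song = data[5], data[6], data[7]
    load_addr, init_addr, play_addr = struct.unpack_from("<HHH", data, 8)
    name = data[0x0E:0x2E].split(b"\0")[0].decode("ascii", "replace")
    return {
        "version": version,
        "songs": total_songs,
        "start_song": start_song,
        "load_addr": hex(load_addr),   # where the 6502 code/data is loaded
        "init_addr": hex(init_addr),   # called once per song
        "play_addr": hex(play_addr),   # called repeatedly at the playback rate
        "name": name,
    }

with open("tune.nsf", "rb") as f:      # "tune.nsf" is a placeholder filename
    print(parse_nsf_header(f.read(0x80)))
```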

Rather unbelievably, this plugin works by emulating a real 6502 as found in an NES to derive the musical output, and it is somewhere in that emulation that the vulnerability exists. So not only do we have layer upon layer of backward compatibility to play an obscure music file format, there is also a software emulation of some 8-bit silicon from the 1970s. [Chris] comments “Is that cool or what?“, and while we agree that a 6502 emulator buried in a modern distro is cool, we can’t help thinking something’s been lost along the way.
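
To get a feel for what that means, here’s a toy, heavily cut-down 6502 fetch/decode/execute loop. It only handles three opcodes, but it makes the point: the music file’s bytes run as code on an emulated CPU, and every emulated memory access is a spot where a missing bounds or mapping check in the real plugin can turn into a host-side memory bug.

```
# A (very) cut-down 6502 interpreter. The opcodes are real 6502, but only
# LDA #imm, STA abs and BRK are implemented; this is a sketch, not libgstnsf.

def run_6502(program: bytes, start: int = 0x8000, max_steps: int = 1000) -> bytearray:
    mem = bytearray(0x10000)           # 64 KiB of emulated address space
    mem[start:start + len(program)] = program
    pc, a = start, 0

    for _ in range(max_steps):
        opcode = mem[pc]
        if opcode == 0xA9:             # LDA #imm: load the accumulator
            a = mem[pc + 1]
            pc += 2
        elif opcode == 0x8D:           # STA abs: store accumulator to a 16-bit address
            addr = mem[pc + 1] | (mem[pc + 2] << 8)
            mem[addr & 0xFFFF] = a     # this masking is exactly the kind of check that matters
            pc += 3
        elif opcode == 0x00:           # BRK: stop the toy interpreter
            break
        else:
            raise NotImplementedError(f"opcode {opcode:#04x}")
    return mem

# LDA #$2A; STA $0200; BRK
mem = run_6502(bytes([0xA9, 0x2A, 0x8D, 0x00, 0x02, 0x00]))
print(hex(mem[0x0200]))                # -> 0x2a
```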

A proof-of-concept is provided for Ubuntu 12.04. It’s an older version, but he points out that while he thinks the most recent releases should not contain exactly the same vulnerability, it certainly exists in more than one still-supported version. There’s also a worrying twist: thanks to the vagaries of Ubuntu’s file manager, the malicious file auto-opens as soon as the folder containing it is viewed in the GUI. The year 2000 called; it wants its auto-opening Windows ME worms back.

Sadly we suspect the 6502 lurking in this music player can’t be put to more general-purpose use. If you manage it, please do share it with us! But if emulated 6502s are your thing, take a look at this 150MHz 6502 co-processor for an Acorn BBC Micro that someone made using a Raspberry Pi.

[via r/hacking]

6502 image: Dirk Oppelt (CC BY-SA 3.0), via Wikimedia Commons.

ArduWorm: A Malware for Your Arduino Yun

We’ve been waiting for this one. A worm was written for the Internet-connected Arduino Yun that gets in through a memory corruption exploit in the ATmega32u4 that’s used as the serial bridge. The paper (as PDF) is a bit technical, but if you’re interested, it’s a great read. (Edit: The link went dead. Here is our local copy.)

The crux of the hack is getting the AVR to run out of RAM, which more than a few of us have done accidentally from time to time. Here, the hackers write more and more data into memory until they end up writing into the heap, where data that’s used to control the program lives. Writing a worm for the AVR isn’t as easy as it was in the 1990s on PCs, because a lot of the code that you’d like to run is in flash, and thus immutable. However, if you know where enough functions are located in flash, you can just use what’s there. These kinds of return-oriented programming (ROP) tricks were enough for the researchers to write a worm.
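
Purely as an illustration of what a ROP payload is (the addresses and offsets below are made up, and details like the AVR’s word addressing and stack byte order are glossed over here; the paper has the real values), the “code” the worm injects can be thought of as nothing more than filler followed by a list of addresses of routines already sitting in flash:

```
# Illustrative ROP-style payload builder. All gadget addresses and the
# overflow distance are hypothetical; the byte order of each address on the
# AVR stack is assumed here and handled properly in the actual paper.

import struct

FILL_BYTE = b"A"
OVERFLOW_LEN = 64            # hypothetical distance to the saved control data

# Hypothetical addresses of useful routines already present in flash.
GADGETS = [
    0x1F32,                  # e.g. a routine that reads bytes from the serial bridge
    0x2A10,                  # e.g. a routine that copies a buffer somewhere useful
    0x0C44,                  # e.g. a final jump back into normal execution
]

def build_payload() -> bytes:
    payload = FILL_BYTE * OVERFLOW_LEN           # junk up to the control data
    for gadget in GADGETS:
        payload += struct.pack(">H", gadget)     # one 16-bit address per hop
    return payload

print(build_payload().hex())
```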

In the end, the worm is persistent, can spread from Yun to Yun, and can do most everything that you’d love/hate a worm to do. In security, we all know that a chain is only as strong as its weakest link, and here the attack isn’t against the OpenWRT Linux system running on the big chip, but rather against the small AVR chip playing a support role. Because the AVR is completely trusted by the Linux system, once you’ve got that, you’ve won.

Will this amount to anything in practice? Probably not. There are tons of systems out there with much more easily accessed vulnerabilities: hard-coded passwords and poor encryption protocols. Attacking all the Yuns in the world wouldn’t be worth one’s time. It’s a very cool proof of concept, and in our opinion, that’s even better.

Thanks [Dave] for the great tip!

Stealth Cell Tower Inside This Office Printer Calls to Say I Love You

If you look around the street furniture of your city, you may notice some ingenious attempts to disguise cell towers. There are fake trees, lamp posts with bulges, and plenty you won’t even be aware of concealed within commercial signage. It seems the same people who are often the first to complain when they have no signal do not want to be reminded how that signal reaches them. On a more sinister note, government agencies have been known to make use of fake cell towers of a different kind, those which impersonate legitimate towers in order to track and intercept communications.

In investigating the phenomenon of fake cells, [Julian Oliver] has brought together both strands by creating a fake cell tower hidden within an innocuous office printer. It catches the phones it finds within its range, and sends them a series of text messages that appear to be from someone the phone’s owner might know. It then prints out a transcript of the resulting text conversation along with all the identifying information it can harvest from the phone. As a prank it also periodically calls phones connected to it and plays them the Stevie Wonder classic I Just Called To Say I Love You.

In hardware terms the printer has been fitted with a Raspberry Pi 3, a BladeRF software-defined transceiver, and a pair of omnidirectional antennas which are concealed behind the toner cartridge hatch. Software comes via YateBTS, and [Julian] provides a significant amount of information about its configuration as well as a set of compiled binaries.

In one sense this project is a fun prank, yet on the other hand it demonstrates how accessible the technology now is to impersonate a cell tower and hijack passing phones. We’re afraid to speculate, though, about the length of the custodial sentence you might receive were you to be caught using one as a private individual.

We’ve considered the Stingray cell phone trackers before here at Hackaday, as well as a couple of possible counter-measures: an app that uses a database of known towers to spot fakes, and a solution that relies on an SDR receiver to gather cell tower data from a neighbourhood.

[via Hacker News]

Duckhunting – Stopping Rubber Ducky Attacks

One morning, a balaclava-wearing hacker walks into your office. You assume it’s a coworker, because he’s wearing a balaclava. The hacker sticks a USB drive into a computer in the cube next door. Strange command line tools show up on the screen. Minutes later, your entire company is compromised. The rogue makes a quick retreat, thumb drive in hand.

This is the scenario imagined by purveyors of balaclavas and USB Rubber Duckys, tiny USB devices able to inject code, run programs, and extract data from any system. The best way — and the most common — to prevent this sort of attack is by filling the USB ports with epoxy. [pmsosa] thought there should be a software method of defense against these Rubber Duckys, so he’s created Duckhunter, a small, efficient daemon that can catch and prevent these exploits.

The Rubber Ducky attack is simply opening up a command line and spewing an attack from an emulated USB HID keyboard. If the attacker can’t open up cmd or PowerShell, the attack breaks. That’s simple enough to code, but [pmsosa] has a few more tricks up his sleeve. Duckhunter has a ‘sneaky’ countermeasure feature, where one out of every 5-7 keystrokes is blocked. To the attacker, the ‘sneaky’ countermeasure makes it look like the attack worked, when in fact it failed spectacularly.
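
Here’s a minimal sketch of that ‘sneaky’ idea, not [pmsosa]’s actual code: once an attack is suspected, silently swallow one out of every 5-7 keystrokes so the injected script comes out subtly broken. Flagging an attack by impossibly fast typing is our assumption here; in practice the filter would hang off whatever keyboard hook the operating system provides.

```
# Sketch of a 'sneaky' keystroke filter: detect machine-speed typing (an
# assumption about how the attack is spotted), then drop one out of every
# 5-7 keystrokes so the injected payload is quietly corrupted.

import random
import time

FAST_TYPING_THRESHOLD_S = 0.015   # humans don't sustain sub-15 ms keystrokes

class SneakyFilter:
    def __init__(self):
        self.last_key_time = 0.0
        self.under_attack = False
        self.until_next_drop = random.randint(5, 7)

    def allow(self, key: str) -> bool:
        """Return False if this keystroke should be silently dropped."""
        now = time.monotonic()
        if now - self.last_key_time < FAST_TYPING_THRESHOLD_S:
            self.under_attack = True          # keystrokes arriving at machine speed
        self.last_key_time = now

        if not self.under_attack:
            return True

        self.until_next_drop -= 1
        if self.until_next_drop <= 0:
            self.until_next_drop = random.randint(5, 7)
            return False                      # swallow this keystroke
        return True
```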

There are a number of different attacks similar to what the Rubber Ducky can accomplish. MouseJack performs the same keystroke injection wirelessly, through the vulnerable dongles of non-Bluetooth wireless mice and keyboards. BadUSB is a little more technical, reprogramming a device’s firmware so that an innocuous USB gadget quietly becomes a keyboard working against you. Because of the nature of the attack, Duckhunter shuts them all down.

Right now the build is only for Windows, but according to [pmsosa]’s GitHub there will be Linux and OS X versions coming.

Hajime, Yet Another IoT Botnet

Following on the heels of Mirai, a family of malware exploiting Internet of Things devices, [Sam Edwards] and [Ioannis Profetis] of Rapidity Networks have discovered a malicious Internet worm dubbed Hajime, which likewise targets Internet of Things devices.

Around the beginning of October, news broke of an IoT botnet turning IP webcams around the world into a DDoS machine. Rapidity Networks took an interest in this worm, and set out a few honeypots in the hopes of discovering what makes it tick.

Looking closely at the data, they found evidence of a second botnet that was significantly more sophisticated. Right now, they’re calling this worm Hajime.
