The ESP32, Laid Bare

Most readers will be familiar with the ESP32, Espressif’s dual-core processor with integrated WiFi and Bluetooth. Few of us, though, will have explored all of its features, including its built-in encryption facilities and secure boot capability. With these, a developer can protect their code and keep their devices locked down.

That sense of security may now be illusory though, thanks to [LimitedResults], who has developed a series of attacks on the chip that compromise its crypto core, secure boot, and flash encryption. Together they make both arbitrary code execution and firmware extraction possible on locked-down ESP32 devices.

To achieve all this he used voltage glitching on the device’s power supply, inserting a carefully timed glitch on the rail to coincide with a particular instruction being executed. For those of us who are not experts in the technique, he provides a basic primer along with a description of his home-made glitcher, built around a CMOS switch chip.

It appears that there is no fix for this attack short of new silicon. It should be borne in mind, though, that it requires a specialist attacker with a well-equipped bench, and is thus only likely to be a significant headache to manufacturers. But it undermines a key feature of a major line of microcontrollers, and as such it remains a significant piece of work.

This Week In Security: Fuzzing Fixes, Foul Fonts, TPM Timing Attacks, And More!

An issue was discovered in libarchive through Google’s ClusterFuzz project. Libarchive is a multi-format archive and compression library, widely used in command-line utilities and other software. The issue here is how the library recovers from a malformed archive. Hitting an invalid header causes the working memory in use to be freed, but it’s possible for file processing to continue even after that memory has been freed: a classic use-after-free. So far an actual exploit hasn’t been published, but it’s likely that one is possible. The problem was fixed back in May, and the issue was only announced now to give that update time to percolate down to users.

Of note is the fact that this issue was found through Google’s fuzzing efforts. Google runs the oss-fuzz project, which automatically ingests nightly builds from around 200 open source projects and runs ClusterFuzz against them. This process of throwing random data at programs and functions has revealed over 14,000 bugs.
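
To get a feel for the idea, here’s a toy fuzzer in Python pointed at the standard library’s tarfile parser. It’s a minimal sketch of the concept only: real tools like ClusterFuzz and the libFuzzer harnesses behind oss-fuzz are coverage-guided and far more sophisticated, and the target here is Python’s tar parser rather than libarchive itself.

```python
import io
import random
import tarfile

def make_seed() -> bytes:
    """Build one small, valid tar archive to use as the seed input."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tf:
        info = tarfile.TarInfo(name="hello.txt")
        payload = b"hello fuzzing"
        info.size = len(payload)
        tf.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

def mutate(data: bytes) -> bytes:
    """Flip a handful of randomly chosen bytes in the seed."""
    buf = bytearray(data)
    for _ in range(8):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = make_seed()
for i in range(10_000):
    sample = mutate(seed)
    try:
        # The parser under test; a real harness would call into libarchive itself.
        tarfile.open(fileobj=io.BytesIO(sample)).getmembers()
    except tarfile.TarError:
        pass  # cleanly rejecting a malformed archive is the expected outcome
    except Exception as exc:
        # Anything else (or a hard crash) is the kind of finding fuzzers exist for.
        print(f"iteration {i}: unexpected {type(exc).__name__}: {exc}")
```
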
Continue reading “This Week In Security: Fuzzing Fixes, Foul Fonts, TPM Timing Attacks, And More!”

This Week In Security: BGP Bogons, Chrome Zero Day, And Save Game Attacks

Our own [Pat Whetman] wrote about a clever technique published by the University of Michigan, where lasers can be used to trigger a home assistant device. It’s an interesting hack, and you should go read it.

Borrowing IP Addresses

We’ve lived through several IPv4 exhaustion milestones, and the lack of available addresses is really beginning to show, even for trolls and scammers. A new approach takes advantage of the weak security of the Border Gateway Protocol, and allows bad actors to temporarily take over reserved address blocks. These particular providers operate out of Russia, offering network services they advertise as “bulletproof”, or immune to takedown requests. What better way to sidestep takedowns than to use IP addresses that aren’t really yours to begin with?

BGP spoofing has been at the center of other types of attacks and incidents, like in 2018 when a misconfiguration in a Nigerian ISP’s BGP tables routed traffic intended for Google’s servers through Chinese and Russian infrastructure. In that case it appeared to be a genuine mistake, but little prevents malicious BGP table poisoning.

Chrome Zero-day

Google released an update to Chrome on the 31st that addresses two CVEs, one of which is being actively exploited. That vulnerability, CVE-2019-13720, is a race condition resulting in a potential use-after-free. Kaspersky Labs found it being actively exploited via a Korean news site. The attack runs entirely from JavaScript, and simply visiting a malicious site is enough for compromise, so update Chrome if it’s installed.

Anti-anti-doping

What do you do when you feel you’ve been unfairly targeted by an anti-doping investigation? Apparently hacking the investigating agency and releasing stolen information is an option. It seems like this approach is more effective when there are shenanigans revealed in the data dump. In this case, the data being released seems rather mundane.

Firefox Blocking Sideload Extensions

Mozilla made a controversial announcement on the 31st. They intend to block “sideload” browser extensions. Until this change, it was possible to install browser extensions by copying them to a particular folder on the computer. Some legitimate extensions used this installation method, but so did malware, adware, and other unwanted software. While this change will block some malicious add-ons, it does present a bit of a challenge to a user installing an extension that isn’t on the official Mozilla store or signed by Mozilla.

As you might imagine, the response has been… less than positive. While making malware harder to install is certainly welcome, this makes some use cases very difficult. An example that comes to mind is a Linux package that includes a browser extension. It remains to be seen exactly how this change will shake out.

Save Games as Attack Vector

An oddball vulnerability caught my eye, published by [Denis Andzakovic] over at Pulse Security. He discovered that a recent indie game, Untitled Goose Game, can be made to run arbitrary code by loading a maliciously modified save file. The vulnerability is rooted in a naive deserialization routine.

If you’re interested in a deeper dive into .NET deserialization bugs, a great paper on the topic was presented at Black Hat 2012. The short version is that if a programmer isn’t careful, the deserialization routine can overwrite variables in unexpected ways, potentially leading to code execution.
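
The linked research is about .NET’s serializers, but the underlying hazard is easy to demonstrate with Python’s pickle module, used here purely as an analogy (nothing below is the game’s actual code): a “save file” that is really an attacker-crafted pickle can execute code the moment it is loaded.

```python
import os
import pickle

# Analogy only: the paper covers .NET, but pickle shows the same class of bug.
# A pickle stream can name any callable plus its arguments, and pickle.loads()
# will happily call it while "rebuilding" the object.

class EvilSaveGame:
    def __reduce__(self):
        # Tells the unpickler: "to reconstruct this object, call os.system(...)".
        return (os.system, ("echo arbitrary code ran while loading the save",))

malicious_save = pickle.dumps(EvilSaveGame())  # what an attacker would ship

# The naive loader a game might use if it trusts save data:
pickle.loads(malicious_save)                   # runs the shell command above
```

The fix is the same in any language: never hand untrusted data to a general-purpose deserializer, and prefer a constrained format that you validate yourself.
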

At first glance, a vulnerability triggered by a malicious save file seems relatively harmless. The level of access needed to modify a save file on a hard drive is enough to compromise that computer in a multitude of better ways. Enter cloud save synchronization. Steam, for instance, will automatically sync save games across a user’s install locations, which is handy for those of us who play the same game on a laptop and a desktop. But if an attacker compromised your Steam account, your save games could be manipulated, raising the very real possibility that a save game vulnerability could turn a Steam account compromise into an attack on every machine where you have Steam installed.

Google Creates Debuggable IPhone

Apple is known for a lot of things, but opening up their platforms to the world isn’t one of them. According to a recent Google post by [Brandon Azad], there do exist special iPhones made for development, with JTAG ports and other magic capabilities. The port is present in all iPhones (though unpopulated), but is locked down by default. We don’t know what it takes to get one of these magic iPhones, but we’re guessing Google can’t just send in the box tops from three MacBook Pros to get on the waiting list. But what is locked can be unlocked, and [Brandon] set out to build a debuggable iPhone of his own.

By exploiting some debug registers, it is possible to debug the A11 CPU at any point in its execution. [Brandon’s] tool single-steps the system through reset and makes some modifications to the CPU after key instructions to prevent the lockdown of kernel memory. After that, the world’s your oyster. KTRW is a tool built using this technique that can debug an iPhone with a standard cable.

Continue reading “Google Creates Debuggable IPhone”

This Week In Security: Project Zero’s IPhone, BBC The Onion, Rooting Androids, And More

The always interesting Project Zero has a pair of stories revolving around security research itself. The first, from this week, is all about one man’s quest to build a debug iPhone for research. [Brandon Azad] wanted iOS debugging features like single-stepping, turning off certain mitigations, and using the LLDB debugger. While Apple makes debug iPhones, those are rare devices and apparently difficult to get access to.

[Brandon] started looking at the iBoot bootloader, but quickly turned his attention to the debugging facilities baked into the Arm chipset. Between the available XNU source and public Arm documentation, he managed to find and access the CoreSight debug registers, giving him single-step control over one core at a time. By triggering a core halt and then interrupting that core during reset, he was able to disable the code execution protections, giving him essentially everything he was looking for. Accessing this debug interface still requires a kernel-level vulnerability, so don’t worry about this research being used maliciously.

The second Project Zero story that caught my eye was published earlier in the month, and is all about finding useful information in unexpected places: namely, debugging symbols in old versions of Adobe Reader. Trying to understand what’s happening under the hood of a running application is challenging when all you have is decompiler output. Adobe doesn’t ship debug builds of Reader today, and has never shipped debug information on Windows. But Reader has been around for a long time and has supported quite a few architectures over the years, and surprisingly, quite a few debug builds have shipped as a result.

How useful could ancient debugging data be? Keep in mind that Adobe changes as little as possible between releases. Some code paradigms, like enums, tend to be rather static as well. Additional elements might be added to the end of the enum, but the existing values are unlikely to change. [Mateusz Jurczyk], the article’s author, then walks us through an example of how to take that data and apply it to figuring out what’s going on with a crash. Continue reading “This Week In Security: Project Zero’s IPhone, BBC The Onion, Rooting Androids, And More”

This Week In Security: The Robots Are Watching, Insecure VPNs, Graboids, And Biometric Fails

A Japanese hotel chain uses robots for nearly everything: check-in, room access, and most importantly, bedside service. What could possibly go wrong with putting embedded Android devices, complete with mics and cameras, right in every hotel room? While I could imagine bedside robots ending badly in many ways, today we’re looking at the possibility that a previous guest installed an app that can spy on the room. The kiosk mode used on these devices left much to be desired. Each bot has an NFC reader, and all it takes is a URL read by that reader to break out of the kiosk jail. From there, a user has full access to the Android system underneath, and can install whatever software they wish.

[Lance Vick] discovered this potential problem way back in July, and after 90 days of inaction he has publicly disclosed the vulnerability. More of these hotels are being rolled out for the 2020 Olympics, and this sort of vulnerability is sure to be present in other similar kiosk devices.

VPN Compromise

In March 2018, a server in a Finnish data center was compromised through a remote management system, probably a Baseboard Management Controller (BMC), which is as dangerous as it is useful. Most BMCs have their own Ethernet adapter, not controlled by the host computer, and allow a remote user to access the machine just as if they had a monitor and keyboard connected to it. This particular server was one rented by NordVPN, who was apparently not notified of the data center breach.

So what was captured from this server? Apparently the OpenVPN credentials stored on it, as well as a valid TLS key. (Document mirror via TechCrunch.) It’s been noted that the key has since expired, which means it can no longer be abused. There were, however, about seven months between the server break-in and the certificate’s expiration, during which time it could have been used for man-in-the-middle attacks.

NordVPN has confirmed the breach and tried to downplay the potential impact, but their statement doesn’t seem to entirely square with the leaked credentials. An attacker with this data and root access to the server would likely have been able to decrypt VPN traffic on the fly.

Graboid

Named in honor of a certain sci-fi worm, Graboid is an unusual piece of malware aimed at Docker instances. It is a true worm, in that compromised hosts are used to launch attacks against other vulnerable machines. Graboid isn’t targeting a Docker vulnerability, but simply looking for an unsecured Docker daemon exposed to the internet. The malware downloads malicious Docker images, one of which is used for cryptocurrency mining, while another attempts to compromise other servers.
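
To give a sense of how little “exploitation” is involved, here’s a rough sketch of the kind of check a worm (or a defender auditing their own hosts) performs; the address below is a placeholder from the TEST-NET range, and you should only probe machines you’re responsible for.

```python
import json
import urllib.request

# An unauthenticated Docker daemon answers its REST API over plain HTTP,
# conventionally on TCP port 2375. If /version responds, the whole API is
# open: pulling images and starting containers included, which is all a
# worm like Graboid needs.
HOST = "192.0.2.10"  # placeholder address, not a real target

try:
    with urllib.request.urlopen(f"http://{HOST}:2375/version", timeout=3) as resp:
        info = json.load(resp)
    print(f"Exposed Docker daemon, engine version {info.get('Version')}")
except OSError:
    print("No unauthenticated Docker API reachable on :2375")
```
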

Graboid has an unusual quirk, the one that earned it the name: it doesn’t constantly mine or attempt to spread, but waits over a minute between bursts of activity, likely an attempt to mask the presence of the mining malware. It’s notable that, until they were discovered, the malicious Docker images were hosted on Docker Hub itself. Be careful what images you trust, and look for the “Docker Official Image” tag.

Iran and Misdirection

Remember a couple weeks ago, when we discussed the difficulty of attack attribution? It seems a healthy dose of such paranoia might be warranted. The American NSA and British NCSC revealed that they now suspect Russian actors compromised Iranian infrastructure and deployed malware developed by Iranian coders. The purpose of this seems to have been misdirection: compromise targets and put the blame on Iran. To date it’s not certain that this particular gambit fooled any onlookers, but this is likely not the only such effort.

Android Biometrics

New Android handsets have had a rough week. First, the Samsung Galaxy S10 had an issue with screen protectors interfering with the under-the-screen fingerprint reader. This particular problem seems to only affect fingerprints that are enrolled after a screen protector has been applied. With the protector still in place, anyone’s fingerprint is able to unlock the device. What’s happening here seems obvious. The ultrasonic fingerprint scanner isn’t able to penetrate the screen protector, so it’s recording an essentially blank fingerprint. A patch to recognize these blank prints has been rolled out to devices in Samsung’s home country of South Korea, with the rest of the world soon to follow.

The second new handset is the Google Pixel 4, which includes a new Face Unlock feature. While many have praised the feature, there is trouble in paradise: the Pixel’s Face Unlock works even when the user is asleep or otherwise unmoving. By contrast, Apple’s Face ID checks for user alertness, trying to avoid unlocking unless the user is intentionally doing so.

The humorous scenario is a child or spouse unlocking your phone while you’re asleep, but a more sobering possibility is your face being used to unlock the device against your will, or even while you’re unconscious or dead. Based on leaks, it’s likely that there was an “eyes open” mode planned but cut before launch. Hopefully the bugs can be worked out of that feature, and it can be re-added in a future update. Until then, it’s probably best not to use Google’s Face Unlock on Pixel 4 devices.

DNS-over-HTTPS Is The Wrong Partial Solution

Openness has been one of the defining characteristics of the Internet for as long as it has existed, with much of the traffic today still passed without any form of encryption. Most requests for HTML pages and associated content are in plain text, and the responses are returned in the same way, even though HTTPS has been around since 1994.

But sometimes there’s a need for security and/or privacy. While the encryption of internet traffic has become more widespread for online banking and shopping, the privacy-preserving aspects of many internet protocols haven’t kept pace. In particular, when you look up a website’s IP address by hostname, the DNS request is almost always transmitted in plain text, allowing every computer and ISP along the way to determine what website you were browsing, even if you use HTTPS once the connection is made.
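
For reference, this is what the classic, unencrypted path looks like from Python’s standard library; anyone running a packet capture between you and your resolver sees the hostname in the clear on UDP port 53, regardless of whether the page itself is served over HTTPS. (A trivial sketch, nothing specific to the article.)

```python
import socket

# Classic stub-resolver lookup: the operating system sends the hostname in
# cleartext, typically as a UDP packet to port 53 of the configured resolver.
print(socket.gethostbyname("hackaday.com"))

# Watch it go by on the wire with, e.g.:  sudo tcpdump -n udp port 53
```
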

The idea of also encrypting DNS requests isn’t exactly new, with earlier attempts including DNSCrypt, DNS over TLS (DoT), and others. Now Mozilla, Google, and a few other large internet companies are pushing a newer method of encrypting DNS requests: DNS over HTTPS (DoH).

DoH not only encrypts the DNS request, it also sends it to a “normal” web server rather than a DNS server, making the DNS traffic essentially indistinguishable from ordinary HTTPS. This is a double-edged sword. While it protects the DNS request itself, just as DNSCrypt or DoT do, it also makes it effectively impossible for the folks in charge of security at large firms to monitor DNS traffic for spoofing and abuse, and it moves responsibility for a critical networking function from the operating system into an application. It also doesn’t do anything to hide the IP address of the website that you just looked up: you still go and visit it, after all.
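
For the curious, here’s roughly what a DoH lookup looks like using Cloudflare’s JSON endpoint (Google’s dns.google offers a similar /resolve API). To an observer on the network it’s just another HTTPS connection; the hostname being resolved travels inside the encrypted payload.

```python
import json
import urllib.request

# A DNS lookup carried over HTTPS, using Cloudflare's JSON flavour of DoH.
# The standardized wire format is binary DNS messages over HTTPS; the JSON
# endpoint used here is easier to read and makes the same point.
req = urllib.request.Request(
    "https://cloudflare-dns.com/dns-query?name=hackaday.com&type=A",
    headers={"Accept": "application/dns-json"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    reply = json.load(resp)

for record in reply.get("Answer", []):
    print(record["name"], record["type"], record["data"])
```
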

And in comparison to DoT, DoH centralizes information about your browsing in a few companies: at the moment Cloudflare, who says they will throw your data away within 24 hours, and Google, who seems intent on retaining and monetizing every detail about everything you’ve ever thought about doing.

DNS and privacy are important topics, so we’re going to dig into the details here. Continue reading “DNS-over-HTTPS Is The Wrong Partial Solution”