The always interesting Project Zero has a pair of stories revolving around security research itself. The first, from this week, is all about one man’s quest to build a debug iPhone for research. [Brandon Azad] wanted iOS debugging features like single-stepping, turning off certain mitigations, and using the LLDB debugger. While Apple makes debug iPhones, those are rare devices and apparently difficult to get access to.
[Brandon] started looking at the iBoot bootloader, but quickly turned his attention to the debugging facilities baked into the Arm chipset. Between the available XNU source and public Arm documentation, he managed to find and access the CoreSight debug registers, giving him single-step control over one core at a time. By triggering a core halt and then interrupting that core during reset, he was able to disable the code execution protections, giving him essentially everything he was looking for. Accessing this debug interface still requires a kernel-level vulnerability, so don't worry about this research being used maliciously.
The second Project Zero story that caught my eye was published earlier in the month, and is all about finding useful information in unexpected places: namely, debugging symbols in old versions of Adobe Reader. Trying to understand what's happening under the hood of a running application is challenging when all you have is decompiler output. Adobe doesn't ship debug builds of Reader, and has never shipped debug information on Windows. But Reader has been around for a long time and has supported quite a few architectures over the years, and a surprising number of debug builds have shipped for those other platforms as a result.
How useful could ancient debugging data be? Keep in mind that Adobe changes as little as possible between releases. Some code paradigms, like enums, tend to be rather static as well. Additional elements might be added to the end of the enum, but the existing values are unlikely to change. [Mateusz Jurczyk], the article’s author, then walks us through an example of how to take that data and apply it to figuring out what’s going on with a crash.
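The enum point is easy to see in miniature. Here's a hypothetical sketch (the names are invented, not from the article): a later release that only appends values leaves every old symbol-to-value mapping intact, so symbols recovered from an ancient debug build still decode values seen in a current crash.

```python
from enum import IntEnum

# Hypothetical status enum as an old debug build documented it.
class StatusV1(IntEnum):
    OK = 0
    IO_ERROR = 1
    OUT_OF_MEMORY = 2

# The same enum several releases later: existing values untouched,
# new members only appended at the end.
class StatusV2(IntEnum):
    OK = 0
    IO_ERROR = 1
    OUT_OF_MEMORY = 2
    PARSE_ERROR = 3  # appended later

# The stale symbol table still names current values correctly.
assert StatusV1.IO_ERROR == StatusV2.IO_ERROR == 1
```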
I fought with a strange problem this week for a client. Chrome suddenly began displaying the "Aw, Snap!" crash page for every website visited, including the Chrome settings pages. It turns out I was not alone: many users of Chrome 78 on Windows 10 were seeing similar problems. Chrome 78 was the first release to include support for Renderer Code Integrity (RCI), a Windows 10 feature designed to bring additional security to web browsers. The way an antivirus like Symantec Endpoint Protection hooks the browser process is an integrity violation, making this yet another example of antivirus behavior that is uncomfortably similar to malware. While Symantec had already released an update correcting the problem, they weren't the only vendor causing it, so Google has temporarily rolled back RCI support.
BBC on Tor
Tor, originally an acronym for The Onion Router, is a network of relays that provides a way to access the internet with true anonymity. One of the most interesting elements of Tor is the hidden service: usually a website with a name ending in ".onion". One of the newest .onion services is the BBC; if you're connected to Tor, you can read that story at https://www.bbcnewsv2vjtpsuy.onion/news/technology-50150981. I wanted to snark about the BBC and The Onion, that great bastion of news satire, but it's genuinely fascinating to see the BBC embracing Tor.
What’s the purpose? Isn’t Tor just a glorified VPN service, with all the same potential problems? Well no, Tor has its own unique problems. The central concept of Tor is nested public key encryption. Each packet is built with 3 layers of encryption, leading to the onion comparison. That encrypted packet is sent to a Tor entry node, which performs the first layer of decryption. This results in a double-encrypted packet and a pointer to the next node. The entry node sends the packet on to the indicated node, which decrypts the next layer and forwards the packet to an exit node. The exit node decrypts the final layer, resulting in an unencrypted packet (unless it’s HTTPS, for example) and the IP address of the external service the user actually wanted to access. The entry node only knows the user’s IP address and the intermediary node. The middle node only sees which nodes served as entry and exit nodes. The exit node knows the IP of the target service, but has no knowledge of the user’s IP or location.
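The nesting described above can be sketched with a toy script. Real Tor negotiates per-hop keys with public-key cryptography; here a repeating-key XOR stands in for each layer's cipher (purely illustrative, not remotely secure), which is enough to show the peeling order:

```python
# Toy sketch of Tor's nested "onion" encryption. A repeating-key XOR
# stands in for the real per-hop ciphers -- illustrative only, NOT secure.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(payload: bytes, keys: list) -> bytes:
    # The client wraps the payload exit-layer first, so the entry
    # node's layer ends up outermost.
    packet = payload
    for key in reversed(keys):
        packet = xor(packet, key)
    return packet

entry_key, middle_key, exit_key = b"entry", b"middle", b"exit"
packet = build_onion(b"GET /news HTTP/1.1", [entry_key, middle_key, exit_key])

# Each relay peels exactly one layer; only the exit sees the payload.
at_middle = xor(packet, entry_key)    # entry node removes its layer
at_exit = xor(at_middle, middle_key)  # middle node removes its layer
plaintext = xor(at_exit, exit_key)    # exit node recovers the request

assert plaintext == b"GET /news HTTP/1.1"
```

Note that no single hop sees both who sent the packet and what it says, which is the whole point of the construction.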
While this does provide anonymity, the downside is that the exit node can inspect, and even attempt to modify, all the traffic flowing through it. On top of that, many exit nodes are blacklisted by various services. There are also some practical attacks against Tor that can reveal users: for instance, when an attacker can observe both the entry node's and the exit node's traffic, a timing attack can unmask users.
A hidden service avoids at least some of those problems by cutting out the untrusted exit node. Instead, the service generates a public key, from which the .onion domain name is derived, and uploads that data to the Distributed Hash Table (DHT), which is stored by multiple Tor nodes. When a user tries to connect to a hidden service, they retrieve a rendezvous node from the DHT, and the connection can be made entirely inside Tor without revealing the identity of either party. The BBC isn't trying to hide its identity, so it was able to make the whole process a bit speedier by advertising its node directly. Users can still connect anonymously, but only 3 hops are needed instead of 6.
It’s worth noting that some elements of the BBC site aren’t hosted on the BBC.com domain, and as a result aren’t part of the .onion service. Elements like ads and some scripts will still be loaded through an exit node. This isn’t necessarily a problem, but it’s worth being aware of. The BBC have gone the extra mile and built at least one other hidden service to mirror their bbci.co.uk domain, mitigating at least some of this issue.
Nginx Reveals PHP-FPM RCE
A slightly odd nginx configuration revealed a bug in PHP-FPM. Pointer arithmetic is done based on an unchecked assumption, and a data structure can be manipulated as a result. The key to exploiting this assumption is a regex that fails to properly process a newline as part of the URL. The newline results in a variable being empty that is assumed never to be empty, corrupting the pointers into that data structure. Careful manipulation of the rest of the URL means an attacker can use the corruption to execute part of the URL directly as PHP code.
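The regex failure itself is easy to reproduce. The path-splitting pattern typically used with `fastcgi_split_path_info` in these configurations is `^(.+?\.php)(/.*)$`, and since `.` doesn't match a newline, a `%0a` decoded into the path makes the match fail entirely, leaving the PATH_INFO variable empty. A minimal reproduction using Python's `re` module, which behaves like nginx's PCRE here:

```python
import re

# The path-splitting regex typically passed to fastcgi_split_path_info:
# group 1 is the script name, group 2 becomes PATH_INFO.
SPLIT = re.compile(r"^(.+?\.php)(/.*)$")

# A normal request splits cleanly; PATH_INFO is populated.
m = SPLIT.match("/index.php/some/path")
assert m and m.group(2) == "/some/path"

# A %0a in the URL decodes to a newline. '.' does not match newlines,
# so the regex fails to match at all, and PATH_INFO ends up empty.
assert SPLIT.match("/index.php/some\npath") is None
```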
A vulnerable configuration was included in the Nextcloud configuration documentation, so if you have a Nextcloud instance hosted using nginx, be sure to go check for this problem.
One-Click Android Rooting
A recent Android vulnerability, a use-after-free bug, has been packaged into an easy-to-use root application. Because the vulnerability exists in the Android codebase itself, it applies to quite a few devices. A use-after-free means that memory is freed, but some piece of code tries to access that memory as if it were still valid. Since it’s been freed, another process can write to that memory location before it’s accessed.
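The mechanics can be mimicked with a toy allocator. This is a hypothetical sketch, not the actual Android code, but it shows why a stale reference becomes dangerous the moment a freed slot is reused:

```python
# Toy allocator demonstrating a use-after-free. Freed slots go on a
# free list and are handed to the next caller -- much like a real
# heap -- so a stale "pointer" suddenly refers to someone else's data.

class ToyHeap:
    def __init__(self):
        self.memory = []     # backing store, indexed by "address"
        self.free_list = []  # addresses available for reuse

    def alloc(self, value):
        if self.free_list:
            addr = self.free_list.pop()  # reuse a freed slot
            self.memory[addr] = value
        else:
            addr = len(self.memory)
            self.memory.append(value)
        return addr

    def free(self, addr):
        self.free_list.append(addr)

    def read(self, addr):
        return self.memory[addr]

heap = ToyHeap()
victim = heap.alloc({"is_admin": False})
heap.free(victim)                          # victim object is freed...
attacker = heap.alloc({"is_admin": True})  # ...and the slot is reused

# The stale reference now reads attacker-controlled data.
assert attacker == victim
assert heap.read(victim) == {"is_admin": True}
```

In the kernel, the attacker-controlled object that lands in the reused slot is what turns a dangling pointer into privilege escalation.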
While the vulnerability is present in many devices, [Grant Hernandez] warns against blindly running his code on your device, and as a result has opted not to release a compiled version of the exploit. While compiling the code into an APK is relatively simple, tailoring the exploit to work as expected on your device requires a bit more skill and knowledge. [Grant] wrote up the process of turning the vulnerability into a full root of his device, and it’s worth the read if you’re interested in the Android security details.
Rental Cars and Smartphones
A selling point of some late-model automobiles is the ability to connect them to a smartphone app. It’s useful to be able to unlock your car, start the engine, and even track its location from afar. Squarely in the realm of unintended consequences is what happens when a smartphone-compatible vehicle is used as a rental car.
[Masamba Sinclair] connected his rental car to his phone and enjoyed the connectivity features during the rental period. He found it strange, then, when a few days later he discovered he could still access the vehicle through the app, even though someone else had rented the car. He emailed and tweeted at Ford about the issue, but to no avail. Finally, after the Ars Technica article ran, he was contacted by Enterprise. Access to the car he’d rented months earlier was finally revoked, but how many other vehicles are in the same state?
It’s good security practice to wipe any such settings from a rental car both before *and* after your rental period. How much information have you given away simply by pairing your phone’s Bluetooth to the infotainment system? Probably more than you realize.