CVE-2019-5700 is a vulnerability in the Nvidia Tegra bootloader, discovered by [Ryan Grachek], and breaking first here at Hackaday. To understand the vulnerability, one first has to understand a bit about the Tegra boot process. When the device is powered on, an irom firmware loads the next stage of the boot process from the device’s flash memory, and validates the signature on that binary. As an aside, we’ve covered a similar vulnerability in that irom code, called Selfblow.
On Tegra T4 devices, irom loads a single bootloader.bin, which in turn boots the system image. The K1 boot stack uses an additional bootloader stage, nvtboot, which loads the secure OS kernel before handing control to bootloader.bin. Later devices add yet more stages, but that isn’t important for understanding the vulnerability. The attack uses an Android boot image, and the magic happens in the header. Part of this boot image is an optional second-stage bootloader, which is very rarely used in practice. The header of the boot image specifies the size in bytes of each element, as well as the memory location to load that element to. What [Ryan] realized is that while it’s usually ignored, the information about the second-stage bootloader is honored by the official Nvidia bootloader.bin, but neither the size nor the memory location is sanity checked. The images are copied to their final positions before the cryptographic verification happens. As a result, an Android image can overwrite the running bootloader code.
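For the curious, those header fields sit at the very front of the image. Here’s a minimal sketch in Python of pulling them out, following the legacy boot_img_hdr layout from AOSP’s bootimg.h (the file name is just a placeholder):

```python
import struct

# Legacy (v0) Android boot image header, per AOSP's bootimg.h:
# an 8-byte magic ("ANDROID!") followed by size/load-address pairs
# for the kernel, the ramdisk, and the optional second-stage bootloader.
BOOT_HDR = struct.Struct("<8s8I")

with open("boot.img", "rb") as f:  # placeholder path
    (magic, kernel_size, kernel_addr,
     ramdisk_size, ramdisk_addr,
     second_size, second_addr,
     tags_addr, page_size) = BOOT_HDR.unpack(f.read(BOOT_HDR.size))

assert magic == b"ANDROID!"
# These two fields are the crux of CVE-2019-5700: bootloader.bin copies
# second_size bytes to second_addr before the signature check, without
# bounds checking, so they can point right on top of the running bootloader.
print(f"second stage: {second_size:#x} bytes, loaded at {second_addr:#010x}")
```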
The simplest way to use this vulnerability is to replace the verification routine with no-op (NOP) instructions. The older T4 devices copy the Android image before the trusted OS is loaded, so it’s also possible to load unsigned code as the Secure OS image. If you want to dig just a bit further into the technical details, [Ryan] has published notes on the CVE.
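To make the patch idea concrete, here’s a rough sketch, emphatically not a working exploit. Both offsets are hypothetical stand-ins, since the real values depend on the exact bootloader.bin build being overwritten:

```python
# mov r0, r0 is the classic 32-bit ARM no-op.
ARM_NOP = (0xE1A00000).to_bytes(4, "little")

VERIFY_OFFSET = 0x1234  # hypothetical: where the verification routine lands
VERIFY_LEN = 0x40       # hypothetical: how much of it to pave over

with open("second-stage.bin", "rb") as f:
    payload = bytearray(f.read())

# Overwrite the verification code with no-ops.
payload[VERIFY_OFFSET:VERIFY_OFFSET + VERIFY_LEN] = ARM_NOP * (VERIFY_LEN // 4)

with open("second-stage-patched.bin", "wb") as f:
    f.write(payload)
```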
So what does this mean for the hobbyist? It allows for things like running U-Boot at the equivalent of ring 0. It allows running more recent Android releases on Tegra devices once they’ve been end-of-lifed. It might even be possible to load Nintendo Switch homebrew software on the Nvidia Shield TV, as those are nearly identical pieces of hardware. Hacks like this are a huge boon to the homebrew and modding community.
We’ve seen this before, and I suspect this style of vulnerability will show up in the future, especially as ARM devices continue to grow in popularity. I suggest this class of vulnerability be called Bootjacking, as it is a hijack of the boot process, as well as jacking instructions into the existing bootloader.
Leaky SSH Certificates
SSH keys are a serious upgrade over simple passwords, so much so that services like GitHub and GitLab have begun mandating them. One of the quirks of those services: anyone can download a user’s public SSH keys from GitHub. When a client connects to an SSH server, it lists the keys it has access to by sending the corresponding public keys. If any of those keys are trusted by the server, it sends back a notification so the client can authenticate with the matching private key.
[Artem Golubin] noticed the potential data leak, and wrote it up in detail. You could pick a developer on GitHub, grab their public SSH keys, and start checking public-facing SSH servers to find where those keys are recognized. This seems to be baked into the SSH protocol itself, rather than just an implementation quirk. It isn’t the sort of flaw that can be turned into a worm, or that will directly get a server compromised, but it is an interesting information-gathering tool.
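Step one of that reconnaissance is trivial, because GitHub intentionally publishes every account’s keys at a well-known URL. A quick sketch (the username is a placeholder):

```python
import urllib.request

user = "example-dev"  # placeholder target
url = f"https://github.com/{user}.keys"

# GitHub serves each account's public SSH keys as plain text.
with urllib.request.urlopen(url) as resp:
    for line in resp.read().decode().splitlines():
        print(line)  # e.g. "ssh-ed25519 AAAA..."
```

Step two is the protocol trick: offer that public key to a target server in an authentication request without a signature, and watch whether the server answers with SSH_MSG_USERAUTH_PK_OK, the “that key would work” response defined in RFC 4252.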
HackerOne Exposed
HackerOne is a bug-bounty-as-a-service platform that represents a number of tech companies. Just recently, they announced that a vulnerability had been found in the HackerOne infrastructure itself. A security researcher using the platform, [Haxta4ok00], was accidentally given an employee’s session key during a back-and-forth about an unrelated bug report, and discovered that the session key allowed him to access the HackerOne infrastructure with the same permissions as the employee.
Session key hijacking isn’t a new problem; it is one of the attacks that led to the HTTPS-everywhere approach we see today. Once a user has authenticated to a website, how does that authentication “stick” to the user? Sending a username and password with each page load isn’t a great idea, so the solution is the session key. Once a user authenticates, the server generates a long random string and passes it back to the browser. This string is the agreed-upon token that authenticates the user for all further communication, until a time limit is reached or the token is invalidated for some other reason.
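A minimal sketch of the issuing side, just to make the idea concrete (the storage scheme here is illustrative, not any particular site’s implementation):

```python
import secrets

session_store = {}  # token -> user id, kept server-side

def start_session(user_id):
    # A long, unguessable random string: whoever presents it *is* the user.
    token = secrets.token_urlsafe(32)  # 256 bits of randomness
    session_store[token] = user_id
    return token

# The token goes back to the browser in a cookie, ideally flagged so it
# is only ever sent over HTTPS and is invisible to JavaScript:
#   Set-Cookie: session=<token>; Secure; HttpOnly; SameSite=Lax
```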
Not so long ago, most web services only used HTTPS connections for the initial user log-on, and dropped back to unencrypted connections for the bulk of data transfer. The session key was part of that unencrypted payload, and if it could be captured, an attacker could hijack the legitimate session and act as the user. The Firesheep browser extension made it clear just how easy this attack was to pull off, and pushed many services to finally fix the problem through full-time HTTPS connections.
HTTPS everywhere is a huge step forward for preventing session hijacking, but as seen at HackerOne, it doesn’t cover every case. The HackerOne employee was using a valid session key as part of a curl command line, and accidentally included it in a response. [Haxta4ok00] noticed the key, quickly confirmed what it was, and found that it gave him access to HackerOne’s internal infrastructure.
The leak was reported and the key quickly revoked. Because the key was leaked in a private report, only [Haxta4ok00] had access to it. That said, he did use it to access several other private vulnerability reports. It’s worth mentioning that HackerOne handled this as well as they could have, awarding $20,000 for the report. They updated their researcher guidelines, and now restrict session keys to the IP address that generated them.
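The IP binding is straightforward to picture; something along these lines, with the caveat that HackerOne’s actual implementation is their own:

```python
import secrets

sessions = {}  # token -> {user_id, ip}, kept server-side

def start_session(user_id, client_ip):
    token = secrets.token_urlsafe(32)
    sessions[token] = {"user_id": user_id, "ip": client_ip}
    return token

def validate_session(token, client_ip):
    entry = sessions.get(token)
    if entry is None or entry["ip"] != client_ip:
        return None  # unknown token, or a token replayed from elsewhere
    return entry["user_id"]
```

A leaked token is then useless to anyone who can’t also source traffic from the original address.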
StrandHogg
One of the more notable stories in the past week was all about Android, and malicious apps masquerading as legitimate ones. StrandHogg has been exploited in one form or another since 2017, and was first theorized in a USENIX paper from 2015. In some ways it’s an extremely simple attack, but it does some very clever things.
So how does it work? A malicious app, once installed, runs in the background waiting for a target app to be launched. Once the target app is detected, the malicious app jumps to the forefront, disguised as the target. From here, a phishing attack is trivial. More interesting, though, is the permissions attack: what appears to be a benign application requests file system permissions, camera permissions, and so on, and it’s not immediately apparent that the malicious app is the one actually making the request.
The only actual vulnerability here seems to be the ability of a malicious app to rename and “reparent” itself, abusing Android’s taskAffinity and allowTaskReparenting manifest attributes in order to spoof being part of the target app. Do note that, at least on permission popups, the name of the requesting application is blank during a StrandHogg attack.
Contactless Payment
Contactless payments look like magic the first time you see them: just wave a compatible card or mobile device over the payment terminal, and payment happens over NFC. Since you’re reading this column, it’s safe to assume that once that first moment of awe wore off, you started wondering how this is all done securely. That is what [Leigh-Anne Galloway] and [Tim Yunusov] wanted to know as well. They just released their research, and managed to find several nasty tricks. A tin-foil hat might be overkill, but maybe it’s time to invest in an NFC-blocking wallet.
They manipulated data in transit, allowing for much larger payments without a PIN entry, made purchases via an NFC proxy, and even illustrated a practical pre-play attack, where a card is read, a fake transaction is generated, and that transaction is later played back to a real payment terminal.
Superfish returns?
Twitter is a fascinating place. Sometimes simple observations turn out to be CVEs. An interesting interaction took place when [SwiftOnSecurity] pointed out an odd DNS name, “atlassian-domain-for-localhost-connections-only.com”, with the description that it allowed a secure HTTPS connection to a service running on localhost. Our friend from Google’s Project Zero, [Tavis Ormandy], pointed out that a valid HTTPS cert for a domain that resolves to localhost means Atlassian must be shipping the certificate’s private key as part of their software. Follow the link, and you too can host this oddball domain with a valid HTTPS certificate.
This is a bad idea for several reasons, but it’s not the worst thing that could happen. The worst-case scenario for this style of mistake probably belongs to Superfish, an aptly named adware program pre-installed on many Lenovo machines in 2014, with the “helpful” feature of showing you more personalized ads. In order to do this, the software simply added its own certificate authority information to the system’s trusted CA bundle… and shipped the private certificate and key along with the software. Yes, you read that right: any HTTPS certificate could be perfectly spoofed for a Lenovo user.
Looking at the Atlassian domain, another user noted that IBM’s Aspera software had a similar localhost domain and certificate. According to [Tavis], that software also includes a full CA cert and key. If an iteration of IBM software actually added that CA to a system’s root trust, then it’s another Superfish: any HTTPS certificate could be successfully spoofed.
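Auditing for this class of problem is simple enough to do yourself: walk the system’s trusted root bundle and flag anything unexpected. Here’s a sketch using the third-party cryptography package, with the Debian/Ubuntu bundle path (other systems keep it elsewhere) and a purely illustrative watch-list:

```python
from cryptography import x509  # pip install cryptography

BUNDLE = "/etc/ssl/certs/ca-certificates.crt"  # Debian/Ubuntu path
WATCHLIST = ("superfish",)  # names worth flagging; purely illustrative

pem = open(BUNDLE, "rb").read()
begin = b"-----BEGIN CERTIFICATE-----"

# The bundle is a series of concatenated PEM blocks; split and parse each.
for blob in pem.split(begin)[1:]:
    cert = x509.load_pem_x509_certificate(begin + blob)
    subject = cert.subject.rfc4514_string()
    if any(name in subject.lower() for name in WATCHLIST):
        print("suspicious trusted root:", subject)
```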
You missed the Atlassian zero-day with their awesome “secure” localhost domain ‘https://atlassian-domain-for-localhost-connections-only.com’: https://www.theregister.co.uk/2019/12/05/atlassian_zero_day_bug/
It is what I would call a deliberate design flaw in the CA infrastructure that all root certificate authorities, 100+ on Android, 180+ in Firefox, 200+ on OSX, can issue a 100% valid certificate for any domain, even one that has already been issued a certificate by a different CA. Accessing your email, bank, or shopping online implies that you implicitly trust every CA your browser has installed, and that none of them have any current or previous security issues. Don’t get me wrong, the current CA system is still better than none. But it was designed to allow legal, or state, intercept.
“Better than none”, are we sure about that? False security is worse than none at all when thousands of politically minded people raised on “HTTPS-Everywhere for great good!” realize the government has read every single private thought they posted, which they were told was “encrypted” by HTTPS.
It’s no coincidence that self-signed certs, the one bastion of truly secure HTTPS*, are flagrantly lied about in scary warnings by browsers: “We can’t verify the NSA/CIA has the private key to MITM you! You shouldn’t trust it!”
*provided you can verify the certificate fingerprint via external, non-CA means
What I meant is that using CAs is better than transferring data as plain text.
How do you check a system to see the quality of the certificates that are installed on it? What tools make it easy for an end user to audit and understand who-all they are trusting on a given system (and help limit the trusted authorities)? Do certificate-using programs check certificates and flag those where the CA has changed, or where multiple CAs provide certificates for the same resource? What security tools check your system for this sort of thing (malware scanners, etc.)?
NoScript lets you see the various domains which contribute to a web page (and control which ones you trust). Some firewalls let you see what programs are communicating with what machines on the network, and control that. What similar tools are there to explore, understand, and control the certificates and “trust” networks in your system, browser, etc.?
Thank you
Most browsers have a place to go inspect your CAs. In Chrome, it’s under Settings, Advanced, Privacy and security, Manage certificates.
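For those who’d rather audit the trust store with a script than click through menus, here’s a rough sketch that tallies the system’s trusted roots by country code, using the third-party cryptography package and the Debian/Ubuntu bundle path (adjust for your OS):

```python
from collections import Counter
from cryptography import x509  # pip install cryptography
from cryptography.x509.oid import NameOID

BUNDLE = "/etc/ssl/certs/ca-certificates.crt"  # Debian/Ubuntu path

pem = open(BUNDLE, "rb").read()
begin = b"-----BEGIN CERTIFICATE-----"
by_country = Counter()

# The bundle is concatenated PEM blocks; count trusted roots per country.
for blob in pem.split(begin)[1:]:
    cert = x509.load_pem_x509_certificate(begin + blob)
    country = cert.subject.get_attributes_for_oid(NameOID.COUNTRY_NAME)
    by_country[country[0].value if country else "??"] += 1

for code, count in by_country.most_common():
    print(f"{code}: {count} trusted roots")
```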
Thanks.
At least in Firefox this is very primitive: it shows you a bunch of technical jargon and minutiae. It does not make it easy for end users (or even superusers) to make intelligent security choices quickly and easily. It doesn’t give easy access to background information about each CA (who they are, where they are, what their policies are, what their reputation is). It doesn’t group the CAs intelligently (e.g. all CAs relating to Spain, or Europe, or …). And it doesn’t let you turn CAs off in groups.
This article seems to provide a little information on the topic of who your device trusts (a bit out of date though): “Questioning the chain of trust: investigations into the root certificates on mobile devices” http://en.hackdig.com/?7005.htm
Still haven’t found much in the way of sites providing information about default CAs (with professional as well as user reviews, from the point of view of the user rather than those issuing certificates). Also not much in the way of tools to make it easy to turn CAs off in batches, or to see which CAs a system has actually been using.
Mozilla has a clear policy for inclusion.
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/
Basically, anyone, no matter how small or big, can be included if, and only if, they meet ALL of their requirements.