Let’s Encrypt recently celebrated their one billionth certificate. That’s over 190 million websites currently secured, handled by just thirteen full-time staff. The annual budget for Let’s Encrypt is an eye-watering $3.3+ million, covered by sponsors like Mozilla, Google, Facebook, and the EFF.
A cynic might ask if we need to rewind the counter by the three million certificates Let’s Encrypt recently announced they are revoking as a result of a temporary security bug. That bug was in the handling of the Certificate Authority Authorization (CAA) security mechanism, a relatively recent addition to the rules governing certificate issuance. A domain owner opts in by publishing a CAA record in their DNS, naming the particular CA that is authorized to issue certificates for that domain. When a CA issues a new certificate, it is required to check for a CAA record, and it must refuse to issue the certificate if a different authority is listed there.
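To make that concrete, here’s a rough sketch of what a CAA lookup looks like from code, written in Go with the third-party github.com/miekg/dns library. The library, the domain, and the resolver address are illustrative choices, not anything Let’s Encrypt actually uses:

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	// A domain owner publishes something like:
	//   example.com. IN CAA 0 issue "letsencrypt.org"
	// and a CA is expected to refuse issuance if it isn't the CA listed.
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("example.com"), dns.TypeCAA)

	c := new(dns.Client)
	resp, _, err := c.Exchange(m, "8.8.8.8:53") // resolver choice is arbitrary
	if err != nil {
		log.Fatal(err)
	}

	for _, rr := range resp.Answer {
		if caa, ok := rr.(*dns.CAA); ok {
			// Tag is typically "issue" or "issuewild"; Value names the CA.
			fmt.Printf("CAA %d %s %q\n", caa.Flag, caa.Tag, caa.Value)
		}
	}
}
```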
The CAA specification sets eight hours as the maximum time a CAA check may be cached. Let’s Encrypt uses a similar automated process to determine domain ownership, and considers those results to be valid for 30 days. There is a corner case where the Let’s Encrypt domain validation is still valid, but the CAA check needs to be re-performed. For certificates that cover multiple domains, that check needs to be performed for each domain before the certificate can be issued. Rather than validating each domain’s CAA record, the Let’s Encrypt validation system was checking one of those domain names multiple times. The problem was caught and fixed on the 28th.
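The bug itself boils down to a broken loop. As a purely hypothetical simplification (the function names here are invented, not Boulder’s), the rechecking logic behaved roughly like this:

```go
package main

import "fmt"

// checkCAA stands in for the real DNS lookup sketched above.
func checkCAA(domain string) error {
	fmt.Println("rechecking CAA for", domain)
	return nil
}

// For a certificate covering n names, the buggy loop rechecked one
// domain n times instead of rechecking each domain once.
func recheckCAABuggy(domains []string) error {
	for range domains {
		if err := checkCAA(domains[0]); err != nil { // always domains[0]
			return err
		}
	}
	return nil
}

func recheckCAAFixed(domains []string) error {
	for _, d := range domains {
		if err := checkCAA(d); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	names := []string{"example.com", "www.example.com", "mail.example.com"}
	_ = recheckCAABuggy(names) // prints example.com three times
	_ = recheckCAAFixed(names) // prints each name once
}
```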
The original announcement gave administrators 36 hours to manually renew their affected certificates. While just over half of the three million targeted certificates have been revoked, an additional grace period has been extended to the more than one million certs that are still in use. Just to be clear, there aren’t over a million bad certificates in the wild; in fact, only 445 certificates were issued that a proper CAA check would have blocked.
Ghostcat
Apache Tomcat, the open source Java-based HTTP server, has had a vulnerability for something like 13 years. AJP, the Apache JServ Protocol, is a binary protocol designed for server-to-server communication. An example use case would be an Apache HTTP server running on the same host as Tomcat. Apache would serve static files, and use AJP to proxy dynamic requests to the Tomcat server.
Ghostcat, CVE-2020-1938, is essentially a default configuration issue. AJP was never designed to be exposed to untrusted clients, but the default Tomcat configuration enables the AJP connector and binds it to all interfaces. An attacker can craft an AJP request that lets them read the raw contents of files inside the webapp directory: database credentials, configuration files, and more. If the application also allows file uploads, and the upload location lands inside that same webapp directory, the bug escalates into a full remote code execution exploit chain.
The official recommendation is to disable AJP if you’re not using it, or bind it to localhost if you must use it. At this point, it’s negligence to leave ports exposed to the internet that aren’t being used.
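If you want a quick way to confirm an AJP connector isn’t reachable from the outside, a plain TCP connection attempt to the default AJP port from another machine is enough. A minimal sketch, with a placeholder hostname:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// 8009 is the default AJP connector port; the host is a placeholder.
	target := "your-server.example.com:8009"
	conn, err := net.DialTimeout("tcp", target, 3*time.Second)
	if err != nil {
		fmt.Println("AJP port not reachable from here:", err)
		os.Exit(0)
	}
	conn.Close()
	fmt.Println("AJP port is reachable -- disable the connector or bind it to localhost")
}
```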
Have I Been P0wned
You may remember our coverage of [Troy Hunt] over at haveibeenpwned.com. He had made the decision to sell HIBP, as a result of the strain of running the project solo for years. In a recent blog post, [Troy] reveals the one thing more exhausting than running HIBP: trying to sell it. After a potential buyer was chosen and the deal was nearly sealed, that buyer went through a restructuring. At the end of the day, the purchase no longer made sense for either party, and they both walked away, leaving HIBP independent. It sounds like the process was stressful enough that HIBP will remain an independent entity for the foreseeable future.
You Were Warned
Remember the Microsoft Exchange vulnerability from last week? Attack tools have been written, and the internet-wide scans have begun.
Ridl Me This, Chrome
We’ve seen an abundance of speculative execution vulnerabilities over the last couple of years. While these problems are technically interesting, there has been a bit of a shortage of real-world attacks that actually leverage them. Well, thanks to a post over at Google’s Project Zero, that dearth has come to an end. The attack is a sandbox escape: it assumes the attacker already controls a sandboxed renderer process, which in practice means pairing it with a separate vulnerability in Chrome’s JS engine.
To understand how RIDL plays into this picture, we have to talk about how the Chrome sandbox works. Each renderer process runs with essentially zero system privileges, and sends requests through Mojo, an inter-process communication system. Mojo identifies and secures those IPC endpoints with randomly generated 128-bit port names.
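As a rough illustration of that trust model (this isn’t Chrome’s code, just the idea): a port name is effectively 128 bits of randomness, so guessing one is hopeless, and the only practical way to learn one is to leak it.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	// A Mojo-style port name: 128 random bits. With a 2^128 search space,
	// brute-forcing a name is out of the question -- which is why the attack
	// leaks the value out of the privileged process's memory instead.
	name := make([]byte, 16)
	if _, err := rand.Read(name); err != nil {
		panic(err)
	}
	fmt.Printf("port name: %x\n", name)
}
```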
Once an attacker has taken over the unprivileged renderer process, the next step is to figure out the port name of an un-sandboxed Mojo endpoint. The trick is to get that privileged process to access its Mojo port name repeatedly, and then capture one of those accesses using RIDL. Once the port name is known, the attacker has essentially escaped the sandbox.
The whole write-up is an interesting read, and serves as a great example of the sort of attack enabled by speculative execution leaks.