To start our week of vulnerabilities in everything, there’s a potentially big vulnerability in Android handsets, but it’s Apple’s fault. OK, maybe that’s a little harsh — Apple released the code to their Apple Lossless Audio Codec (ALAC) back in 2011 under the Apache License. This code was picked up and shipped as part of the driver stack for multiple devices by various vendors, including Qualcomm and MediaTek. The problem is that the Apple code was terrible, with one researcher calling it a “walking colander” of security problems.
Apple has fixed their code internally over the years, but never pushed those updates to the public codebase. It’s a fire-and-forget source release, and that can cause problems like this one. The fact that ALAC was released under a permissive license may contribute to the problem. Someone (in addition to Apple) likely found and fixed the security problems, but the permissive license doesn’t require sharing those fixes with the broader community. It’s worth pondering whether a copyleft license like the GPL would have gotten a fix distributed years ago.
Regardless, CVE-2021-0674 and CVE-2021-0675 were fixed in both Qualcomm and MediaTek’s December 2021 security updates. These vulnerabilities are triggered by malicious audio files and can result in RCE. An app could also use this trick to escape the sandbox and escalate privileges. This sort of flaw has been used by actors like the NSO Group to compromise devices via messaging apps.
Researchers at Microsoft have been looking at D-Bus and the various daemons that listen on it. It’s interesting because many of those daemons run as root, while a non-root program can still make calls to D-Bus. It seems likely that some unintended interaction could lead to security problems. Right on cue, a pair of problems in networkd-dispatcher can be chained to elevate privileges from user to root. The problems were fixed in networkd-dispatcher version 2.2, so look for at least that release on your Linux distro. Edit: It looks like several distros have backported this fix, calling it CVE-2022-29799 and CVE-2022-29800.

The first of those flaws is a directory traversal: a message can be sent setting a state field to a directory name like ../../maliciousScripts/. The second is a time-of-check to time-of-use (TOCTOU) flaw: a script is verified to be controlled by root, but execution isn’t initiated right away. Since symlinks can be used in these directories, the trick is to point a symlink at what appears to be a properly secured script and, once the check has been performed, switch the link to an attacker-controlled script.
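Both bug classes are simple to sketch. The following Python is a hypothetical illustration, not networkd-dispatcher’s actual code — the base path, file names, and state names are all made up. The first half shows how an unsanitized state name escapes the intended script directory; the second half shows how a symlink swap between the ownership check and execution changes which file actually runs.

```python
import os
import tempfile

# Hypothetical base directory; the real daemon's layout differs.
BASE = "/etc/networkd-dispatcher"

# 1. Directory traversal: building a path from an unsanitized state name.
def script_dir(state):
    return os.path.normpath(os.path.join(BASE, state + ".d"))

assert script_dir("routable") == "/etc/networkd-dispatcher/routable.d"
# A malicious state name escapes BASE entirely:
assert script_dir("../../tmp/maliciousScripts") == "/tmp/maliciousScripts.d"

# 2. TOCTOU: the file that gets checked is not the file that gets used.
d = tempfile.mkdtemp()
safe = os.path.join(d, "safe.sh")
evil = os.path.join(d, "evil.sh")
for path, body in ((safe, "echo ok\n"), (evil, "echo pwned\n")):
    with open(path, "w") as f:
        f.write(body)

link = os.path.join(d, "run.sh")
os.symlink(safe, link)            # link points at the vetted script

checked = os.path.realpath(link)  # time of check: this file is verified

os.remove(link)                   # attacker wins the race...
os.symlink(evil, link)            # ...and retargets the link

used = os.path.realpath(link)     # time of use: this file actually runs

assert checked == safe and used == evil
```

The fix for the second half is the usual one: open the file once, then check and execute through that same file descriptor, so there is no window in which the path can be re-pointed.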
VirusTotal Got Totalled
Dealing with live malware is tricky, and running a public site dedicated to security research tends to attract both good and bad attention. In this case, it was fellow security researchers who discovered that VirusTotal was vulnerable to attack. The flaw was CVE-2021-22204, a vulnerability in exiftool. VirusTotal uses it as part of its file analysis feature, and hadn’t integrated the patches yet. It was straightforward to embed the malicious command and submit the file for scanning. As individual hosts went to work on the malware sample, they hit the exploit and launched reverse shells back to the researchers. A total win. After confirming that they had indeed hit pay dirt, the researchers from Cysrc turned their findings over to Google, who runs VirusTotal, and the vulnerable binary has since been updated.
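The bug class behind CVE-2021-22204 is worth a quick look. This isn’t exiftool’s actual Perl, but a hypothetical Python miniature of the same mistake: a metadata parser that hands an untrusted token from a file to eval, next to a safe literal-only parser for contrast.

```python
import ast

# Hypothetical miniature of the CVE-2021-22204 bug class: a metadata
# parser that evaluates tokens pulled from an untrusted file.
def parse_token_unsafe(token):
    # Meant to decode string literals like '"\\x41"', but eval() will
    # happily run any expression a crafted file embeds.
    return eval(token)

def parse_token_safe(token):
    # literal_eval accepts only literals and raises on anything else.
    return ast.literal_eval(token)

assert parse_token_unsafe('"\\x41"') == "A"
# A crafted token executes code the moment the file is "scanned":
assert parse_token_unsafe('__import__("os").getpid()') > 0

assert parse_token_safe('"\\x41"') == "A"
try:
    parse_token_safe('__import__("os").getpid()')
except ValueError:
    pass  # rejected, as it should be
```

On a scanning farm, the “scan” step runs on many hosts at once, which is why a single crafted sample produced a whole fleet of reverse shells.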
Yes I Agree, What Could Go Wrong?
Do you read the End User License Agreements on the apps you install? Have you ever found the EULA so onerous that you refused to agree? We might all want to get out of the habit of mindlessly agreeing to the Terms of Service. Many of those apps use GPS location data, and many of those EULAs specify that your location data can be sold to advertisers. The data is “anonymized”, which just means that instead of names or email addresses, the location data is tied to pseudo-random numeric IDs. Surely no one would go to the trouble of getting your data and unmasking your identity, right? Right?
According to The Intercept, a pair of intelligence companies have ingested location data en masse and automated the de-anonymizing process. How many people have their data caught up in this real-world version of The Machine? Something like three billion devices. Yikes.
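The de-anonymization step is less magic than it sounds. Here’s a toy sketch with entirely made-up data — the device ID, coordinates, and lookup table are fictional, and real brokers work at vastly larger scale — but the idea is just this: take a pseudonymous device’s night-time pings, call the most common location “home”, and join it against any dataset that maps addresses to names.

```python
from collections import Counter

# Made-up "anonymized" pings: (device_id, lat, lon, hour_of_day).
pings = [
    ("id-7f3a", 40.7001, -73.9500, 2),   # night-time, at home
    ("id-7f3a", 40.7001, -73.9500, 23),
    ("id-7f3a", 40.7128, -74.0060, 14),  # daytime, elsewhere
]

# Hypothetical licensed lookup of residences to identities.
residents = {(40.7001, -73.9500): "A. Example, 123 Main St"}

def likely_home(device_id, pings):
    """Most frequent location seen between 22:00 and 05:00."""
    night = [(lat, lon) for d, lat, lon, hour in pings
             if d == device_id and (hour >= 22 or hour <= 5)]
    return Counter(night).most_common(1)[0][0]

home = likely_home("id-7f3a", pings)
print(residents.get(home))  # the "anonymous" ID now has a name
```

Run the same handful of lines over billions of pings and a property-records dataset, and the pseudo-random ID stops being anonymous in any meaningful sense.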
So About that Pentest…
Red team exercises are the source of some of the most impressive security stories. How a scrappy team overcame adversity to pull off the ultimate hack is the stuff of legends. (Seriously, go watch Sneakers again.) But what happens when you go to all that work, try multiple approaches, and still don’t score a successful breach?
This was the question [DiabloHorn] pondered, with some good guidelines to help any of us in that awkward situation. The first task is to ask what led to the null result. Was the test scoped too narrowly? Too many restrictions on techniques? Not enough time given? That’s all good information to report, so the next test can be more productive. Additionally, what worked? If the code in use was bulletproof because of a really good test suite with fuzzing already being done, that’s good info too. The whole write-up is a thought-provoking exercise, even for the rest of us who are just trying to stay secure.
Psychic Signatures Continued
Last week we brought you the Java Psychic Signatures story, and less than a week later there’s a particularly fun proof of concept to take a look at: breaking TLS. Since the flawed implementation can be used to secure HTTPS traffic via TLS, a malicious server can authenticate as any host desired. It seems like this would defeat HSTS and certificate stapling as well, and the attack extends to man-in-the-middle scenarios too. Remember, this vulnerability only applies to Java clients that haven’t been updated. See last week’s coverage for more information.
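For the curious, the underlying check is easy to model. Below is a toy ECDSA verifier over a small textbook curve (y² = x³ + 2x + 2 mod 17, a classroom example — not a real-world curve, and not Java’s actual implementation). It shows why a blank (0, 0) signature verifies for any message and any key once the standard 0 < r, s < n range check is skipped, which is the heart of CVE-2022-21449:

```python
# Toy curve: y^2 = x^3 + 2x + 2 mod 17, generator (5, 1), group order 19.
P, A = 17, 2
G, N = (5, 1), 19

def inv(x, m):
    # Fermat inverse, as some implementations compute it. Note the quiet
    # failure mode: inv(0) returns 0 instead of raising an error.
    return pow(x, m - 2, m)

def add(p, q):
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # point at infinity
    if p == q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1, P) % P
    else:
        lam = (y2 - y1) * inv(x2 - x1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, p):
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

def verify(e, sig, pub, check_range=True):
    r, s = sig
    if check_range and not (0 < r < N and 0 < s < N):
        return False                   # the check the flawed code omitted
    w = inv(s, N)                      # s = 0 silently yields w = 0
    R = add(mul(e * w % N, G), mul(r * w % N, pub))
    x = 0 if R is None else R[0] % N   # infinity's x treated as 0
    return x == r % N                  # 0 == 0: "psychic" signature passes

pub = mul(7, G)                        # public key for private key 7
assert verify(11, (10, 8), pub)        # a genuine signature verifies
assert not verify(11, (0, 0), pub)     # blank signature rejected...
assert verify(11, (0, 0), pub, check_range=False)  # ...unless check skipped
```

With r = s = 0, both scalar multiplications collapse to the point at infinity, whose x-coordinate compares equal to r, so the verifier says yes to a signature nobody ever computed.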