This Week In Security: Zoom (Really This Time), Fingerprints, And Bloatware

You were promised Zoom news last week, but due to a late night of writing, that story was delayed to this week. So what’s the deal with Zoom? Google, SpaceX, the government of Taiwan, and even the US Senate have banned it. You may remember our coverage of Zoom from nearly a year ago, when Apple forcibly removed the Zoom web server from countless machines. The realities of COVID-19 have brought about an explosion of popularity for Zoom, but also a renewed critical eye on the platform’s security.

“Zoombombing”, joining a Zoom meeting uninvited, made national headlines as a result of a few high-profile incidents. The US DOJ even released a statement about it. Those incidents seem to have been a result of Zoom’s default settings: no meeting password, no “waiting room”, and meeting IDs that persist indefinitely. A troll could simply search Google for Zoom links and try connecting to them until finding an active meeting. Ars ran a great article on how to avoid getting Zoombombed (thanks to Sheldon for pointing this out last week).

There is another wrinkle to the Zoom story. Zoom is technically an American company, but its Chinese development roots put it in a precarious situation. It was recently reported that encryption keying is routed through infrastructure in China, even when all the calling parties are elsewhere. In some cases, call data itself went through Chinese infrastructure, though that was labeled as a temporary bug. Zoom was also advertising its meetings as having end-to-end encryption. That claim was investigated and found to be false: all meetings get decrypted at Zoom’s servers, and could theoretically be viewed by Zoom staff.

This Week In Security: OpenWrt, ZOOM, And Systemd

OpenWrt announced a problem in opkg, their super-lightweight package manager. OpenWrt’s target hardware, routers, makes for an interesting security challenge. A Linux install that fits in just 4 MB of flash memory is a minor miracle in itself, and many compromises had to be made. In this case, we’re interested in the lack of SSL: a 4 MB install just can’t include SSL support. As a result, the package manager can’t rely on HTTPS for secure downloads. Instead, opkg first downloads a pair of files: a list of packages, which contains a SHA256 hash of each package, and a second file containing an Ed25519 signature for that list. When an individual package is installed, the SHA256 hash of the downloaded package can be compared with the hash provided in the signed list of packages.
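
To make the scheme concrete, here’s a minimal Python sketch of that verification flow (illustrative only, with the Ed25519 check on the list itself elided):

```python
import hashlib

# Sketch of opkg's download verification scheme (not the real C code).
# The package list is signed with Ed25519; each entry in it carries the
# SHA256 of one package file.

def verify_package(pkg_bytes: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded package against the hash from the signed list."""
    actual = hashlib.sha256(pkg_bytes).hexdigest()
    return actual == expected_sha256

# Install proceeds only when verify_package() returns True -- in theory.
```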

It’s a valid approach, but there was a bug, discovered by [Guido Vranken], in how opkg reads the hash values from the package list. A leading space in the hash field triggers some questionable pointer arithmetic, and as a result, opkg believes the SHA256 hash is simply blank. Rather than failing the install, the hash verification is simply skipped. The result? Opkg is vulnerable to a rather simple man-in-the-middle attack.
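
In spirit, the broken logic looks something like this toy Python version (opkg itself is C; this just shows the failure mode, where an empty hash silently disables the check):

```python
import hashlib

def unpack(pkg_bytes: bytes) -> None:
    print(f"unpacking {len(pkg_bytes)} bytes...")

def install(pkg_bytes: bytes, expected_sha256: str) -> None:
    # Buggy pattern: an empty hash means "nothing to verify" rather
    # than "abort the install", so a MitM'd package sails through.
    if expected_sha256:
        if hashlib.sha256(pkg_bytes).hexdigest() != expected_sha256:
            raise ValueError("hash mismatch, refusing to install")
    unpack(pkg_bytes)  # reached even when verification never ran

install(b"attacker controlled bytes", "")  # no exception raised
```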

OpenWrt doesn’t do any automatic installs or automatic updates, so this vulnerability will likely not be widely abused, but it could be used for a targeted attack. An attacker would need to be in a position to MitM the router’s internet connection while software was being installed. Regardless, make sure you’re running the latest OpenWrt release to mitigate this issue. Via Ars Technica.

WireGuard v1.0

With Linux kernel 5.6 finally released, WireGuard has been christened as stable. As an interesting aside, Google has enabled WireGuard in their Generic Kernel Image (GKI), which may signal more official support for WireGuard VPNs in Android. I’ve also heard reports that one of the larger Android ROM development communities is looking into better system-level WireGuard support as well.

Javascript in Disguise

Javascript makes the web work — and has been a constant thorn in the side of good security. For just one example, remember Samy, the worm that took over MySpace in ’05. That cross-site scripting (XSS) attack used a series of techniques to embed Javascript code in a user’s profile. Whenever that profile page was viewed, the embedded JS code would run and then replicate itself on the page of whoever had the misfortune of falling into the trap.

Today we have much better protections against XSS attacks, and something like that could never happen again, right? Here’s the thing: for every mitigation like Content-Security-Policy, there’s someone like [theMiddle] coming up with new ways to break it. In this case, he realized that a less-than-perfect CSP could be defeated by encoding Javascript inside a .png and decoding it in the browser to deliver the payload.
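
As a rough illustration of the hiding half of that trick, here’s a Python sketch (assuming Pillow is installed) that packs script text into the pixel data of a perfectly valid PNG; on the page, something like a canvas read-back would reassemble and execute the string:

```python
from PIL import Image  # pip install Pillow

# Pack a script into pixel bytes: one grayscale pixel per payload byte.
payload = b"alert(document.domain);"
img = Image.new("L", (len(payload), 1))
img.putdata(list(payload))
img.save("payload.png")  # a valid PNG that any img-src policy allows

# The decode side, mirrored here in Python for clarity:
decoded = bytes(Image.open("payload.png").getdata())
print(decoded.decode())  # alert(document.domain);
```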

Systemd

Ah, systemd. Nothing seems to bring passionate opinions out of the woodwork like a story about it. In this case, it’s a vulnerability found by [Tavis Ormandy] of Google Project Zero. The bug is a race condition, where a cached data structure can be used after it’s already been freed. It’s interesting because the vulnerability is reachable over DBus, and could potentially be used to get root-level access. It was fixed with systemd v245.

Mac Firmware

For those of you running macOS on Apple hardware, you might want to check your firmware version. Not because there’s a particularly nasty vulnerability in there, but because firmware updates fail silently during OS updates. What’s worse, Apple isn’t publishing release notes, or even acknowledging what the most recent firmware version is. A crowd-sourced list of the latest firmware versions is available, and you can try to convince your machine to run the update again and hope the firmware install works this time.

Anti-Rubber-Ducky

Google recently announced a new security tool, USB Keystroke Injection Protection. I assume the nickname, UKIP, isn’t an intentional reference to British politics. Regardless, the project is intended to help protect against the infamous USB Rubber Ducky attack by trying to differentiate between a real user’s typing cadence and a malicious device that types implausibly quickly.
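
The core heuristic is simple enough to sketch in a few lines of Python (my guess at the logic, not Google’s actual implementation):

```python
import statistics

def looks_injected(timestamps_ms: list, threshold_ms: float = 20.0) -> bool:
    """Flag a keystroke burst whose average inter-key gap is implausibly fast."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return statistics.mean(gaps) < threshold_ms

print(looks_injected([0, 5, 11, 16, 22]))       # True: ~5 ms gaps, Rubber Ducky territory
print(looks_injected([0, 140, 310, 420, 600]))  # False: human-scale cadence
```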

While the project is interesting, there are already examples of how to defeat it that amount to simply running the scripts with slight pauses between keystrokes. Time will tell if UKIP turns into a useful mitigation tool. (Get it?)

SMBGhost

Remember SMBGhost, the new wormable SMB flaw? Well, there is already a detailed explanation and PoC. This particular PoC is a local-only privilege escalation, but a remote code execution attack is all but inevitable, so go make sure you’re patched!

This Week In Security: 0-Days, Pwn2Own, IOS And Tesla

LILIN DVRs and cameras are being actively exploited by a surprisingly sophisticated botnet. Three separate 0-day vulnerabilities are being exploited in ongoing campaigns. If you have a device built by LILIN, go check for firmware updates, and if your device is exposed to the internet, entertain the possibility that it has been compromised.

The vulnerabilities include a hardcoded username/password, command injection in the FTP and NTP server fields, and an arbitrary file read vulnerability. Just the first vulnerability is enough to convince me to avoid black-box DVRs, and keep my IP cameras segregated from the wider internet.


This Week In Security: Working From Home Edition

As the world sits back and waits for the coronavirus to pass, the normally frantic pace of security news has slowed just a bit. Google is not exempt, and Chrome 81 has been delayed as a result. Major updates to Chrome and Chrome OS are paused indefinitely, but security updates will continue as normal. In fact, Google has confirmed that security-related fixes will be shipped as minor updates to Chrome 80.

Chinese Viruses Masquerading as Chinese Viruses

Speaking of COVID-19, researchers at Check Point Research stumbled upon a malware campaign that takes advantage of the current health scare. A pair of malicious RTF documents were being sent to various Mongolian targets. Created with a tool called “Royal Road”, these files target a set of older Microsoft Word vulnerabilities.

This particular attack drops its payload in the Microsoft Word startup folder, waiting for the next time Word is launched to run the next stage. This is a clever strategy, as it would temporarily deflect attention from the malicious files. The final payload is a custom RAT (Remote Access Trojan) that can take screenshots, upload and download files, etc.
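
If you want to check your own machine for this particular persistence trick, a quick Python sketch like the following will list the contents of Word’s startup folder (the standard location on modern Windows; adjust for your Office setup):

```python
import os
from pathlib import Path

# Word runs anything in its startup folder at launch, which is exactly
# what this campaign abuses for persistence.
startup = Path(os.environ.get("APPDATA", "")) / "Microsoft" / "Word" / "STARTUP"

if startup.is_dir():
    for item in startup.iterdir():
        print(item.name)  # anything you didn't put here deserves scrutiny
else:
    print("No Word startup folder found")
```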

While the standard disclaimer about the difficulty of attribution does apply, this particular attack seems to be originating from Chinese intelligence agencies. While the Coronavirus angle is new, this campaign seems to stretch back to 2017.

This Week In Security: SMBv3, AMD And Intel, And Huawei Backdoors

Ready for more speculative execution news? Hope so, because both Intel and AMD are in the news this week.


The first story is Load Value Injection, a different approach to leaking arbitrary memory. Rather than trying to read protected memory directly, LVI turns that on its head by injecting attacker data into a victim’s data stream. The processor speculatively executes based on that bad data, eventually discovers the fault, and unwinds the execution. As with other similar attacks, the speculative execution still changes the under-the-hood state of the processor in ways that an attacker can detect.

What’s the actual attack vector where LVI could be a problem? Imagine a scenario where a single server hosts multiple virtual machines, and uses Intel’s Software Guard eXtensions (SGX) enclaves to keep the VMs secure. The low-level nature of the attack means that not even SGX is safe.

The upside here is that the attack is quite difficult to pull off, and isn’t considered much of a threat to home users. On the other hand, the performance penalty of the suggested fixes can be pretty severe. It’s still early in the lifetime of this particular vulnerability, so keep an eye out for further updates.

AMD’s Takeaway Bug

AMD also found itself on the receiving end of a speculative execution attack (PDF original paper here). Collide+Probe and Load+Reload are the two specific attacks discovered by an international team of academics. The attacks are based around the reverse-engineering of a hash function used to speed up cache access. While this doesn’t leak protected data quite like Spectre and Meltdown, it still reveals internal data from the CPU. Time will tell where exactly this technique will lead in the future.

To really understand what’s going on here, we have to start with the concept of a hash table. This idea is a useful code paradigm that shows up all over the place. Python dictionaries? Hash tables under the hood.

Hash table image from Wikipedia by Jorge Stolfi

Imagine you have a set of a thousand values, and need to check whether a specific value is part of that set. Iterating over that entire set of values is a computationally expensive proposition. The alternative is to build a hash table. Create an array of a fixed length, let’s say 256. The trick is to use a hash function to sort the values into this array, using the first eight bits of the hash output to determine which array location each value is stored in.

When you need to check whether a value is present in your set, simply run that value through the hash function, and then check the array cell that corresponds to the hash output. You may be ahead of me on the math — yes, that works out to about four different values per array cell. These hash collisions are entirely normal for a hash table. The lookup function simply checks all the values held in the appropriate cell. It’s still far faster than searching the whole table.
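
In Python, the whole idea fits in a few lines; this toy version uses the first byte of a SHA256 as the bucket index, just like the description above:

```python
import hashlib

class HashSet:
    """Toy hash table: 256 buckets, indexed by the first 8 bits of a hash."""

    def __init__(self) -> None:
        self.buckets = [[] for _ in range(256)]

    def _index(self, value: str) -> int:
        return hashlib.sha256(value.encode()).digest()[0]

    def add(self, value: str) -> None:
        bucket = self.buckets[self._index(value)]
        if value not in bucket:
            bucket.append(value)

    def __contains__(self, value: str) -> bool:
        # Only one bucket is scanned, never the whole set.
        return value in self.buckets[self._index(value)]

s = HashSet()
s.add("hackaday")
print("hackaday" in s, "not-there" in s)  # True False
```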

AMD processors use a hash function to check whether memory requests are present in the L1 cache. The Takeaway researchers figured out that hash function, and can use hash collisions to leak information. When the hash values collide, the L1 cache has two separate chunks of memory that need to occupy the same cache line. It handles this by simply discarding the older data when loading the colliding memory. An attacker can abuse this by measuring the latency of memory lookups.

If an attacker knows the memory location of the target data, he can allocate memory at a different location that will be stored in the same cache line. Then, by repeatedly loading his allocated memory, he knows whether the target location has been accessed since his last check. What real-world attack does that enable? One of the more interesting ones is defeating ASLR/KASLR by mapping out the memory layout. It was also suggested that Takeaway could be combined with the Spectre attack.
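
In pseudocode terms, the probe loop looks something like this (a sketch of the idea only; the real attack uses native code, cycle-accurate timers, and addresses chosen to collide in the L1D way predictor, none of which Python can genuinely do):

```python
import time

THRESHOLD_NS = 100  # assumed cutoff between a cache hit and a miss

def probe(load_colliding_address) -> int:
    """Time one access to our memory that collides with the target address."""
    start = time.perf_counter_ns()
    load_colliding_address()
    return time.perf_counter_ns() - start

def victim_touched_target(load_colliding_address) -> bool:
    # A slow reload means our cache line was displaced, i.e. the victim
    # accessed the colliding target address since our last probe.
    return probe(load_colliding_address) > THRESHOLD_NS
```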

There are two interesting wrinkles to this story. First, some have pointed out the presence of a thank-you to Intel in the paper’s acknowledgements. “Additional funding was provided by generous gifts from Intel.” This makes it sound like Intel has been funding security research into AMD processors, though it’s not clear what exactly this refers to.

Second, AMD’s response has been underwhelming. At the time of writing, their official statement is that “AMD believes these are not new speculation-based attacks.” Now that the paper has been publicly released, that statement will quickly be proven either accurate or misinformed.

Closed Source Privacy?

The Google Play Store and the iOS App Store are full of apps that offer privacy, whether it be a VPN, an adblocker, or some other amazing-sounding application. The vast majority of those apps, however, are closed source, meaning that you have little more than trust in the app publisher to ensure that your privacy is really being protected. In the case of Sensor Tower, it seems that faith is woefully misplaced.

A typical shell game is played, with paper companies appearing to provide apps like Luna VPN and Adblock Focus. While these apps technically provide the services they claim, the real aim of both is to send data back to Sensor Tower. When possible, open source is the way to go, but even an open source app can’t protect you against a malicious VPN provider.

Huawei Back Doors

We haven’t talked much about it, but a feud of sorts has been bubbling between the US government and Huawei. An article published a few weeks back in the Wall Street Journal accused Huawei of intentionally embedding backdoors in their network equipment. Huawei posted a response on Twitter, claiming that the backdoors in their equipment are actually for lawful access only. This official denial reminds me a bit of a certain Swiss company…

[Robert Graham] thought the whole story was fishy, and decided to write about it. He makes two important points. First, the Wall Street Journal article cites anonymous US officials. In his opinion, this is a huge red flag: it means the information is either entirely false or intentional spin, fed to journalists in order to shape the news. His second point is that Huawei’s redefinition of government-mandated backdoors as “front doors” takes the line of the FBI and the Chinese Communist Party: that governments should be able to listen in on your communications at their discretion.

Graham shares a story from a few years back, when his company was working on Huawei-brand mobile telephony equipment in a given country. While they were working, there was an unspecified international incident, and Graham watched the logs as a Huawei service tech remoted into the cell tower nearest the site of the incident. After the information was gathered, the logs were scrubbed, and the tech logged out as if nothing had happened.

Did this tech also work for the Chinese government? The NSA? The world will never know, but the fact is that a government-mandated “front door” is still a back door from the users’ perspective: they are potentially being snooped on without their knowledge or consent. The capability for abuse is built-in, whether it’s mandated by law or done in secret. “Front doors” are back doors. Huawei’s gear may not be dirtier than anyone else’s in this respect, but that’s different from saying it’s clean.

Abusing Regex to Fool Google

[xdavidhu] was poking at Google’s Gmail API, and found a widget that caught him by surprise. A button embedded on the page automatically generated an API key. Diving into the Javascript running on that page, as well as an iframe that gets loaded, he arrived at an ugly regex string that was key to keeping the entire process secure. He gives us a tip: www.debuggex.com, a regex visualizer, which he used to find a bug in Google’s JS code. The essence of the bug is that part of the URL’s path is interpreted as being the domain name. “www.example.com\.corp.google.com” is a valid URL pointing at www.example.com (a browser treats the backslash as a path separator), but Google’s JS code sees the whole string as a domain, and concludes it must be a Google domain.
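
Here’s a hypothetical recreation of the bug class in Python (not Google’s actual regex), showing how a pattern can bless a string that a browser will resolve to a completely different host:

```python
import re
from urllib.parse import urlparse

# A validator in this style treats the backslash as just another
# hostname character...
TRUSTED = re.compile(r"^[\w.\\-]+\.corp\.google\.com$")

candidate = r"www.example.com\.corp.google.com"
print(bool(TRUSTED.match(candidate)))  # True: the regex is satisfied

# ...but browsers normalize "\" in the authority to "/", so the request
# actually lands on www.example.com (emulated here with a replace):
url = "https://" + candidate.replace("\\", "/")
print(urlparse(url).hostname)  # www.example.com
```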

For his work, [xdavidhu] was awarded $6,000 because this bit of ugly regex is actually used in quite a few places throughout Google’s infrastructure.

SMBv3 Wormable Flaw

Microsoft’s SMBv3 implementation in Windows 10 and Server 2019 has a vulnerability in how it handles on-the-fly compression, CVE-2020-0796. A malicious packet using compression is enough to trigger a buffer overflow and remote code execution. It’s important to note that this vulnerability doesn’t require an authenticated user. Any unpatched, Internet-accessible server can be compromised. The flaw exists in both server and client code, so an unpatched Windows 10 client can be compromised by connecting to a malicious server.

There seems to have been a planned coordinated announcement of this bug, corresponding with Microsoft’s normal Patch Tuesday, as both Fortinet and Cisco briefly had pages discussing it on their sites. Apparently the patch was planned for that day, and was pulled from the release at the last moment. Two days later, on Thursday the 12th, a fix was pushed via Windows Update. If you have Windows 10 machines or a Server 2019 install you’re responsible for, go make sure they have this update, as proof-of-concept code is already being developed.
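
Microsoft also published a server-side workaround that disables SMBv3 compression via the registry. A quick Python sketch to check whether it’s set might look like this (run on the Windows machine in question; the patch itself remains the real fix):

```python
import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        value, _ = winreg.QueryValueEx(key, "DisableCompression")
        print("SMBv3 compression disabled" if value == 1 else "Compression still enabled")
except FileNotFoundError:
    print("Workaround not set -- make sure the patch itself is installed")
```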

This Week In Security: Let’s Encrypt Revocation, Ghostcat, And The RIDLer

Let’s Encrypt recently celebrated issuing their one billionth certificate. That’s over 190 million websites currently secured, with thirteen full-time staff. The annual budget for Let’s Encrypt is an eye-watering $3.3+ million, covered by sponsors like Mozilla, Google, Facebook, and the EFF.

A cynic might ask if we need to rewind the counter by the three million certificates Let’s Encrypt recently announced they are revoking as a result of a temporary security bug. That bug was in the handling of the Certificate Authority Authorization (CAA) security extension. CAA is a fairly recent addition to the certificate issuance rules: a domain owner opts in by setting a CAA record in their DNS, specifying a particular CA that is authorized to issue certificates for their domain. When a CA issues a new certificate, it’s absolutely required to check for a CAA record, and must refuse to issue the certificate if a different authority is listed in that record.

The CAA specification sets eight hours as the maximum time a CAA check can be cached. Let’s Encrypt uses a similar automated process to determine domain ownership, and considers those results valid for 30 days. That leaves a corner case where the Let’s Encrypt domain validation is still valid, but the CAA check needs to be re-performed. For certificates that cover multiple domains, that check needs to be performed for each domain before the certificate can be issued. Rather than validating each domain’s CAA record, the Let’s Encrypt validation system was checking one of those domain names multiple times. The problem was caught and fixed on the 28th.
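
Boiled down, it was a classic off-by-a-loop mistake; a reconstruction in Python (not Boulder’s actual Go code) makes it obvious:

```python
domains = ["a.example", "b.example", "c.example"]

def caa_allows(domain: str) -> bool:
    return True  # stand-in for the real DNS CAA lookup

# Buggy: one domain gets rechecked len(domains) times...
buggy = [caa_allows(domains[0]) for _ in domains]

# ...while the fix checks every domain on the certificate exactly once.
fixed = [caa_allows(d) for d in domains]
```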

The original announcement gave administrators 36 hours to manually renew their affected certificates. While just over half of the three million affected certificates have been revoked, an additional grace period has been extended for the million-plus certs that are still in use. Just to be clear, there aren’t over a million bad certificates in the wild; in fact, only 445 certificates were minted that a proper CAA check should have prevented.

Ghostcat

Apache Tomcat, the open source Java-based HTTP server, has had a vulnerability for something like 13 years. AJP, the Apache JServ Protocol, is a binary protocol designed for server-to-server communication. An example use case would be an Apache HTTP server running on the same host as Tomcat. Apache would serve static files, and use AJP to proxy dynamic requests to the Tomcat server.

Ghostcat, CVE-2020-1938, is essentially a default configuration issue. AJP was never designed to be exposed to untrusted clients, but the default Tomcat configuration enables the AJP connector and binds it to all interfaces. An attacker can craft an AJP request that allows them to read the raw contents of webapp files. This means database credentials, configuration files, and more. If the application is configured to allow file uploads, and that upload location is in a folder accessible to the attacker, the result is a full remote code execution exploit chain for any attacker.

The official recommendation is to disable AJP if you’re not using it, or bind it to localhost if you must use it. At this point, it’s negligence to leave ports exposed to the internet that aren’t being used.
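
Checking whether you’re exposed is trivial: a few lines of Python will tell you whether the AJP port is reachable from the outside (the hostname below is a placeholder; Tomcat binds AJP to port 8009 by default):

```python
import socket

def ajp_port_open(host: str, port: int = 8009, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the AJP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if ajp_port_open("your-server.example"):
    print("AJP reachable: disable the connector or bind it to localhost")
```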

Have I Been P0wned

You may remember our coverage of [Troy Hunt] over at haveibeenpwned.com. He had made the decision to sell HIBP as a result of the strain of running the project solo for years. In a recent blog post, [Troy] reveals the one thing more exhausting than running HIBP: trying to sell it. After a potential buyer was chosen, and the deal was nearly sealed, that buyer went through a restructuring. At the end of the day, the purchase no longer made sense for either party, and they both walked away, leaving HIBP independent. It sounds like the process was stressful enough that HIBP will remain an independent entity for the foreseeable future.

You Were Warned

Remember the Microsoft Exchange vulnerability from last week? Attack tools have been written, and the internet-wide scans have begun.

Ridl Me This, Chrome

We’ve seen an abundance of speculative execution vulnerabilities over the last couple of years. While these problems are technically interesting, there has been a bit of a shortage of real-world attacks that leverage those vulnerabilities. Well, thanks to a post over at Google’s Project Zero, that dearth has come to an end. This attack is a sandbox escape, meaning it requires a vulnerability in the Chrome JS engine to be able to pull it off.

To understand how Ridl plays into this picture, we have to talk about how the Chrome sandbox works. Each renderer process runs with essentially zero system privileges, and sends requests through Mojo, an inter-process communication system. Mojo uses a 128-bit numbering system to both identify and secure those IPC endpoints.
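
That 128-bit name is the whole ballgame: it’s far too large to guess outright, which is exactly why leaking it through a side channel is the interesting step. A quick back-of-the-envelope in Python:

```python
import secrets

# Mojo-style endpoint names are 128-bit values; brute force is hopeless.
port_name = secrets.token_bytes(16)
print(port_name.hex())
print(f"{2**128:.2e} possibilities")  # ~3.40e+38, hence the side channel
```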

Once an attacker has taken over the unprivileged sandbox process, the next step is to figure out the port name of an un-sandboxed Mojo port. The trick is to get that privileged process to access its Mojo port name repeatedly, and then capture an access using Ridl. Once the port is known, the attacker has essentially escaped the sandbox.

The whole read is interesting, and serves as a great example of the sorts of attacks enabled by speculative execution leaks.

Project Rubicon: The NSA Secretly Sold Flawed Encryption For Decades

There have been a few moments in the past few years when a conspiracy theory was suddenly demonstrated to be based in fact. Once upon a time, it was an absurd suggestion that the NSA had data taps in AT&T buildings across the country. Just like Snowden’s revelations confirmed those conspiracy theories, news that broke in February confirmed some long-standing theories about Crypto AG, a Swiss cryptography vendor.

The whole story reads like a Cold War spy thriller, and like many of those novels, it all starts with World War II. As a result of a family investment, Boris Hagelin found himself at the helm of Aktiebolaget Cryptograph, a Swedish company that built and sold cipher machines that competed with the famous Enigma machine. (The firm would later be reestablished in Switzerland as Crypto AG in 1952.) At the start of the war, Hagelin decided that Sweden was not the place to be, and moved to the United States. This was a fortuitous move, as it allowed Hagelin to market his company’s C-38 cipher machine to the US military. That device was designated the M-209 by the Army, and became the standard in-the-field encryption machine.
