Microsoft’s Patch Tuesday just passed, and it’s a humdinger. As the cherry on top, two separate BSOD-inducing issues led to Microsoft temporarily pulling the update.
Among the security vulnerabilities fixed is CVE-2021-26897, another remote code execution flaw in the Windows DNS server. It’s considered a low-complexity attack, but it does require local network access to pull off. CVE-2021-26867 is another of the patched vulnerabilities that sounds very serious, allowing an attacker on a Hyper-V virtual machine to pierce the barrier and run code on the hypervisor. The catch here is that the vulnerability is only present when using the Plan 9 filesystem, which surely limits the scope of the problem to a small handful of machines.
The most interesting fixed flaw was CVE-2021-26411, a vulnerability that allowed remote code execution when loading a malicious web page in either IE or pre-Chromium Edge. That flaw was being actively exploited in a unique APT campaign, which we’ll cover right after the break.
Targeting the Researchers
A group, thought to be state-sponsored actors from North Korea, ran an impressive disinformation campaign, establishing a convincing-yet-fake security research group. To construct a convincing facade, they created not only a bogus blog, but also fake Twitter, GitHub, and LinkedIn accounts. Their next step was to reach out to legitimate researchers and invite them to cooperate on research projects. The “research project” was actually a vehicle for a series of 0-day malware attacks. Microsoft’s coverage of the events back in January identified ZINC, AKA Lazarus, as the actors behind the attacks.
One of the 0-days used was just fixed this month: the last CVE discussed above. Once each machine had been compromised, the malicious activity seemed to be limited to surveillance and snooping through the filesystem. Let’s theorycraft for a moment: why would a North Korean APT team use a 0-day to target security researchers? Probably to obtain more 0-day vulnerabilities, namely the ones discovered by the targeted researchers.
Big Companies Behaving Badly
Last week it was Xerox, abusing a legal threat to shut down a virtual conference talk. This week it was Apple, seemingly taking advantage of a security researcher’s work. [Jai Kumar Sharma] discovered a big loophole in the Apple ID password-change procedure. To change that password, you normally have to verify that you are indeed the account owner by entering the current password. He discovered that you can sit down at a logged-in Mac and instruct it to log out of the Apple ID account. That dialog, too, prompts for username and password first. The catch is that once you fail the password prompt three times, you’re simply invited to set a new password.
You might point out that this isn’t an RCE, and shouldn’t be considered a high-priority problem, because it requires physical access to a logged-in machine. That’s correct, but it misses the point. Apple’s response was that this wasn’t a bug, yet a few months later they fixed it quietly, without even crediting the reporter.
One more entry in this category, an unnamed company is going after researcher [Rob Dyke] for finding and reporting a public GitHub repo with private keys and passwords in the clear. He got a “thank you” for reporting the findings, shortly followed by an official notice of legal action. After getting some quick attention from other researchers, he has raised enough cash to retain a law firm that understands security research, so hopefully this particular story will have a happy ending.
Watching All the Cameras
Verkada offers a comprehensive security solution, built in the cloud to be accessible from any browser. Among other things, their platform aggregates live surveillance camera feeds for off-site monitoring. What could possibly go wrong?
Someone compromised the web interface and was able to access all the camera feeds. The guilty group provided proof of their misdeeds to Bloomberg. Apparently access was gained through the simplest of methods: an administrator username and password exposed on the internet. This sort of exposure usually happens through something like a GitHub repository that was never intended to be made public, or the unintentional exposure of an internal document storage server.
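Exposed credentials of this kind are exactly what secret scanners are built to catch before a repository ever goes public. As a minimal illustration of the idea (the two patterns below are simple stand-ins; real tools like gitleaks and truffleHog ship hundreds of rules):

```python
import re

# Illustrative patterns only; production scanners use far larger rule sets.
SECRET_PATTERNS = {
    "private_key": re.compile(
        r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"
    ),
    "password_assignment": re.compile(
        r"""(?i)\bpassword\s*[:=]\s*['"][^'"]{4,}['"]"""
    ),
}

def scan_text(text):
    """Return the names of the secret patterns found in `text`."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

Running a check like this as a pre-commit hook or CI step is cheap insurance against exactly the kind of leak described above.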
From a security perspective, the most alarming element of the story is the note that this account gave attackers the ability to run code on the cameras themselves, meaning an instant foothold on the networks that host the cameras. This is a case of “Why is there a button that does that?”
Updates and Errata
I’m not sure if [Alex Birsan] knew what a security storm he was unleashing when he let the dependency confusion attack loose on the world. We have covered this technique a few times, but something new has been confirmed: it’s being used in real attacks. Researchers from Sonatype discovered packages that were designed to send copies of the /etc/shadow and .bash_history files back to the attackers.
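The defensive flip side of dependency confusion is easy to sketch: check whether your internal-only package names have also been registered on the public index, where a higher version number could win the resolution race. This is an illustrative Python sketch, not a hardened tool; the helper names are my own, and the PyPI JSON endpoint query is a bare existence check.

```python
import urllib.error
import urllib.request

def parse_requirement_names(lines):
    """Pull bare package names out of requirements.txt-style lines."""
    names = []
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Strip extras and version specifiers: "pkg[grpc]>=1.2" -> "pkg"
        for sep in ("[", "=", "<", ">", "~", "!", ";", " "):
            line = line.split(sep, 1)[0]
        names.append(line)
    return names

def exists_on_public_pypi(name):
    """True if `name` is registered on the public PyPI index."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 means the name is not registered publicly
```

Feeding your internal requirements file through `parse_requirement_names()` and warning on any name where `exists_on_public_pypi()` returns True flags the packages an attacker could squat.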
The Silver Sparrow malware campaign was the subject of much speculation recently, with many of us suspecting that it was produced by a nation-sponsored group. ESET took a close look at the campaign and has a very different opinion on what is going on. They point out that the potential payload would be distributed via an Amazon S3 bucket, which does not support delivering different content based on IP or geolocation. All told, their conclusion was that it’s likely just another adware campaign.
The Exchange hacking campaign we talked about last week has really exploded, with estimates putting the number of compromised machines at 30,000. Now that the vulnerabilities have been discovered and patches made available, the attackers seem to have abandoned any limited scope. Instead, the current campaign appears to be scanning the entire IPv4 address space and attacking every Exchange server that’s found. As some servers seem to have been attacked multiple times, it’s possible that other groups are now targeting the vulnerability as well.
I’ve heard sysadmins asking for help in cleaning their servers after being compromised. There is a recommended set of steps for responding to an attack like this. Unplug the network cable, power off the server, put the hard drives in a sealed bag. Put a new drive or drives in the machine, and install your OS from scratch, and start restoring known-good backups. Then, look very closely at everything else on your network to see if other devices have been compromised. Sound paranoid? Just remember, this isn’t Grandma’s computer that has bad browser extensions. This is an email server that probably runs your business, and it’s been attacked by a nation-backed APT group.
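One step worth expanding on: “known-good backups” only deserve the name if you can verify them. If you record a hash manifest of critical files before an incident, a few lines of Python can flag anything in the restore that is missing or altered. The manifest format here, a mapping of relative path to SHA-256 digest, is a hypothetical example for illustration:

```python
import hashlib
from pathlib import Path

def sha256_file(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(manifest, root):
    """Compare restored files under `root` against a {relpath: sha256}
    manifest. Returns the paths that are missing or whose hash differs."""
    bad = []
    for rel, expected in manifest.items():
        p = Path(root) / rel
        if not p.is_file() or sha256_file(p) != expected:
            bad.append(rel)
    return bad
```

Anything `verify_restore()` returns deserves a close look before the server goes back online.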
“One more entry in this category, an unnamed company is going after researcher [Rob Dyke] for finding and reporting a public GitHub repo with private keys and passwords in the clear. He got a “thank you” for reporting the findings, shortly followed by an official notice of legal action.”
Well that’s idiotic, hope the judge throws it out straight away, doesn’t even deserve to be heard.
And hopefully the company name leaks out. I guess they cannot sue him anonymously.
Agreed. It’s very useful to know which companies behave this way, to avoid them as much as possible.
Why the sealed bag for the drives? (Is it to prevent stupid Johnny from across the hall from reinstalling them?)
That’s half of it. Keep them clear of dust and moisture, too. Also, if you get a follow up visit from a suit who wants to look at the compromised drive, the bag makes it easier to find.
> Unplug the network cable, power off the server, put the hard drives in a sealed bag. Put a new drive or drives in the machine, and install your OS from scratch, and start restoring known-good backups.
That assumes that no firmware anywhere in the server was “upgraded”
Yeah, that’s a fair point. I think lateral movement to other machines is a more likely scenario, but the firmware angle is worth considering.
With things like the Intel ME that give networked access to the low-level stuff, even a lateral move to new machines on the same network might not be safe – the low-level stuff is so complex and self-defining now that there really isn’t such a thing as known-safe BIOS/firmware. The whole complex, including networking gear, can act as a reinfection method… unless the networks are air gapped from each other.
With the one almost-exception being firmware that is unwritable without desoldering the chip, bridging a trace, or throwing a jumper. In that case it is almost certain that the BIOS and firmware are just as secure as they were when initially burned, no matter what the higher levels try to do to them – if the chip really can’t be written, it can’t be changed, which is usually rather clear on the devices that are set up like that…
So in many cases, after spotting a big breach, you could argue you need to purge everything and reset not just the computers but the networking gear too. Which is just a little daft, but if you really want to be sure you purged every intentional security hole, you have to be sure all the now-very-smart networking gear is also safe…
Which is why any management network is physically separate from any other.
Apple promised to pay bounties for security bugs, and here they didn’t even credit the researcher. Hmmm.
Also, Windows update KB5000802 is crashing some Win10 installs when you try to print. So that’s fun.
” This is an email server that probably runs your business, and it’s been attacked by a nation-backed APT group.”
In that case run it through the shredder and buy a new machine. Just to be sure.
I once ‘accidentally hacked’ the local weather cam. I was looking for a way to make a time-lapse video, so I right-clicked on the picture and opened it in a new tab, and there was the image with a URL containing credentials. I went to the site, successfully logged in, and changed the caption of the city name by one character to make it look funny. It worked, and it took about two weeks for someone to notice and correct it.
The plan 9 filesystem attack sounds like something the NSA tailored access people would do… to target one of the two plan 9 users out there… or maybe… both?
The set of users vulnerable due to 9pfs over VirtIO is not as small as you may think. There is a fairly widely used Docker wrapper that runs your container in a VM “for security” and tunnels the Docker API over 9p.
“The catch here is that the vulnerability is only present when using the Plan 9 filesystem, which surely limits the scope of the problem to a small handful of machines.”
Doesn’t WSL2 use the Plan 9 filesystem protocol to access Linux files from Windows and vice versa?