If you run an OpenBSD server, or have OpenSMTPD running anywhere else, go update it right now. Version 6.6.2, released January 28th, fixes a vulnerability that can be exploited locally or remotely, simply by connecting to the SMTP service. It was found by Qualys, who waited until the update was released to publish their findings.
It’s a simple logic flaw in the code that checks incoming messages. If an incoming message has either an invalid sender username or an invalid domain, the message is handed off to error-handling logic. That logic checks whether the domain is an empty string, in which case the mail is processed as a local message, addressed to the localhost domain. Because the various parts of OpenSMTPD operate by executing shell commands, this flaw allows an attacker to inject unexpected characters into those commands. The text of the email serves as the script to run, giving an attacker plenty of room to totally own a system as a result.
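To make the shape of the bug concrete, here’s a deliberately simplified sketch. This is not OpenSMTPD’s actual code: `deliver_local()`, the `mail.local` path, and the payload string are all made up for illustration. It just shows why building a shell command from an address that skipped validation goes so wrong.

```c
/* NOT OpenSMTPD's code -- a simplified illustration of why handing an
 * unvalidated mail address to the shell is dangerous.  deliver_local()
 * stands in for any local-delivery path that builds a command line
 * from attacker-supplied input. */
#include <stdio.h>
#include <stdlib.h>

static void deliver_local(const char *sender)
{
    char cmd[512];

    /* The sender string lands inside a shell command line verbatim.
     * If validation is skipped (the bug), shell metacharacters in the
     * "username" part get interpreted by /bin/sh. */
    snprintf(cmd, sizeof(cmd), "/usr/libexec/mail.local -f %s root", sender);
    system(cmd);   /* runs via /bin/sh -c "..." */
}

int main(void)
{
    /* A well-formed sender is harmless. */
    deliver_local("alice@example.com");

    /* A malicious "sender" that slipped past validation injects a
     * command -- the ;id; here is a placeholder for a real payload. */
    deliver_local(";id;@localhost");

    return 0;
}
```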
Browser Locker
“Your browser has been locked to prevent damage from a virus. Please call our Windows help desk immediately to prevent further damage.” Sound familiar? I can’t tell you how many calls I’ve gotten from freaked-out customers who stumbled onto a scareware site that locked their browser. This sort of scam is called a browlock, and one particular campaign was pervasive enough to catch the attention of the researchers at Malwarebytes. (Note: the picture at the top of their article says “404 error”, a reference to a technique used by the scam. Keep reading, the content is below it.)
“WOOF”, Malwarebytes’ nickname for this campaign, was unusual both in its sophistication and in the chutzpah of those running it. Browsers were hit via ads right on the MSN homepage and other popular sites, and several techniques were used to get the malicious ads onto those legitimate sites. The most interesting part of the campaign is the set of techniques used to deliver the scareware payload only to targeted machines, and to avoid detection by automated scanners.
It seems that around the time Malwarebytes published their report, the central command-and-control infrastructure behind WOOF was taken down. It’s unclear whether this was a coincidence or a result of the scrutiny from the security community. Hopefully WOOF is gone for good, and won’t simply show up at a different IP address in a few days.
Kali Linux
Kali Linux, the distribution focused on security and penetration testing, just shipped a shiny new release. A notable new addition to the Kali lineup is a rootless version of their Android app. Running an unrooted Android, and interested in having access to some security tools on the go? Kali now has your back.
Not all the tools will work without root, particularly those that need raw sockets or send malformed packets, but it’s still a potentially useful addition to your toolbox.
CacheOut, VRS, and Intel iGPU Leaks
Intel can’t catch a break, with three separate problems to talk about. First up is CacheOut, or more properly CVE-2020-0549, also known as L1DES. It’s a familiar song and dance, just a slightly different way to get there: on a context switch, data in the level 1 data cache isn’t entirely cleared, and known side-channel attacks can be used to read that data from unprivileged code.
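CacheOut itself depends on specific microarchitectural behavior, but the read-out step is the familiar cache-timing primitive: an access to data that is already cached completes measurably faster than one that has to go out to memory. Here’s a rough, self-contained sketch of that primitive in C, using x86 intrinsics. The 100-cycle threshold and the probe buffer are illustrative assumptions, not part of the published attack.

```c
/* A generic flush+reload style timing probe -- the read-out primitive
 * that attacks like CacheOut build on.  This is NOT the exploit itself;
 * the threshold and probe buffer are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp, _mm_mfence */

static uint8_t probe[4096];

/* Time a single access to 'addr' in cycles. */
static uint64_t access_time(volatile uint8_t *addr)
{
    unsigned aux;
    uint64_t start, end;

    _mm_mfence();
    start = __rdtscp(&aux);
    (void)*addr;                 /* the access being timed */
    end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    const uint64_t threshold = 100;      /* machine-dependent guess */

    _mm_clflush(probe);                  /* evicted: should time "slow" */
    uint64_t cold = access_time(probe);

    uint64_t warm = access_time(probe);  /* now cached: times "fast" */

    printf("cold: %llu cycles, warm: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    printf("cached? %s\n", warm < threshold ? "yes" : "no");
    return 0;
}
```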
VRS, Vector Register Sampling, is another just-announced Intel problem. So far it seems to be less exploitable, and microcode updates are expected soon to fix the issue.
The third issue is a bit different. Instead of the CPU, this is a data leak via the integrated GPU. You may be familiar with the most basic form of this problem: some video games will flash garbage on the screen for a few moments while loading. In some cases, rather than just garbage, recognizable images, video stills, and other graphics can appear. Why? GPUs don’t necessarily have the same strict separation of contexts that we expect from CPUs. A group of researchers realized that the old assumptions no longer apply, now that nearly every application is video accelerated to some degree. They published a proof of concept, linked above, that demonstrates the flaw. Before any details were released, Phoronix covered the potential performance hit the fix would cause on Linux, and it’s not great.
Unintended Legal Consequences
Remember the ransomware attack that crippled Baltimore, MD? Apparently the Maryland legislature decided to step in and put an end to ransomware, by passing yet another law to make it illegal. I trust you’ll forgive my cynicism, but the law in question is a slow-moving disaster. Among other things, it could potentially make the public disclosure of vulnerabilities a crime, all while doing absolutely nothing to actually make a difference.
GE Medical Equipment Scores 10/10
Scoring a 10 out of 10 usually sounds impressive, but it’s nothing to be proud of when it’s a CVSS score, where 10.0 is the most critical rating. GE Healthcare, a subsidiary of General Electric, managed five separate 10.0 CVEs in healthcare equipment it manufactures, plus an 8.5 for a sixth. Among the jewels are statements like:
In the case of the affected devices, the configuration also contains a private key. …. The same private key is universally shared across an entire line of devices in the CARESCAPE and GE Healthcare family of products.
The rest of the vulnerabilities are just as crazy: hard-coded SMB passwords, a network KVM with no credential checking, and ancient VNC versions. We’ve known for quite some time that some medical equipment is grossly insecure. It will apparently take a security-themed repeat of the Therac-25 incident before changes take place.
Odds’n’ends
The Windows 7 saga continues, as Microsoft’s “last” update for the venerable OS broke many users’ desktop backgrounds. Microsoft plans to release a fix.
Firefox purged almost 200 extensions from its official portal over the last few weeks. It was found that over 100 extensions by 2Ring were secretly pulling and running code from a central server.
The Citrix problems we discussed last week have finally been addressed and patches released, but not soon enough to prevent the installation of future-proof backdoors on devices in the wild. There are already plenty of reports of compromised devices. Apparently the exploitation has been so widespread that Citrix has developed a scanning tool to check for the indicators of compromise (IoCs) on your devices. Apply the patch, then check for backdoors.
What the hell is OpenSTMPD? (wrong in header, correct in article)
Oh, good. I wasn’t the only one confused by that… :)
Oof, dumb typo. Thanks, fixed.
one more Ope’m’SMTPD
From a ZDNet article on the GE Healthcare equipment debacle:
> The healthcare device vendor also says that if vendors configure these devices properly, on isolated networks, the danger is much lower to hospitals and their patients.
Hospitals have been notified since last year
> “GE Healthcare began sending letters to customers globally on November 12, 2019, which reminds users of the proper configuration of the patient monitor networks,” a GE spokesperson told ZDNet.
As if “properly” isolating your network is the fix for crappy security. Ask the people at Natanz if that worked for them, huh?
TBH Natanz was attacked by malware specifically designed for the task and probably had a humongous R&D budget, which you’re unlikely to see in malware attacking medical devices…
Just by following GDPR requirements you can make said data worthless, as medical data that can’t be paired to names has little usefulness outside of academic fields.
Malwarebytes link 404’s.
Hah, it does look like it, doesn’t it. I think you did end up at the right article, the picture at the top says 404.
Thanks to this post I discovered the OpenSMTPD vulnerability, and quickly updated the instance I was running. Unfortunately, a search through the logs shows that some attack attempts have already been made.
These weekly security posts are very useful, keep them coming!
Thank you for making “This Week In Security”
:o)
I was recently hospitalised (thank-you diverticulitis, home now) and had to be hooked up to an IV drip, through which I also received antibiotics.
The device I was hooked up to appeared to have some sort of network connectivity; I didn’t look into exactly what. Aside from the annoying “Partial Occlusion – Patient Side” errors that cropped up, it was doing the job it was supposed to. I wasn’t sure what the “network” end was monitoring; I suspect status only, which is fine. So long as there wasn’t a vulnerability that allowed someone to remotely fiddle with the dosage, there wasn’t an issue.
That said, one evening when I was due for my dose of antibiotics, I had this video playing on my laptop:
https://www.youtube.com/watch?v=5XDTQLa3NjE
One nurse did ask about the video and I explained a little about it… and also commented that in some ways, the cannula pump had a similar problem. I think the similarities flew over her head higher than SpaceX.
GE and other imaging companies have had more issues over the years than I care to think of. Unpatched workstations being one of the more impressively stupid ones. You find the OS image is vulnerable and you isolate or shut down the machine. Contact GE/whoever support and they come out, patch the machine using their approved image, and leave. The machine gets put back on the network and… SURPRISE, they patched using the original image and actually did nothing. That only took 4-ish months to fix. They have a laundry list of directories on C: that *must* be excluded from AV scans, and at no point should anyone scan the machine with any type of scanner like Nessus or nmap since it will crash.
As someone who supports a Citrix environment, I can say they have been very helpful with mitigating and then patching CVEs.
Also, if you didn’t apply mitigation steps pretty quickly, why? It’s a really straightforward change to make.
I took a class (had to, for a free emulator/programmer) on an embedded micro I liked. The vendor made a big deal about the addition of the “watchdog timer”, which was new to me. I asked what it was, and some guys started giving me grief that I was so dumb as to not know about this timer interrupt that was supposed to just randomly stop to check its code to see if it was still running. Or so I gathered from how they explained it. I understand if you’re running a micro under hard radiation that might randomly flip a bit from 0 to 1 (like spacecraft), but these guys were so into it that I said: if your code doesn’t run, what makes you think the routine you write for the watchdog timer is going to do you any good? Why not write code that doesn’t need a “crash expector”? This really spun them up, and I said “I hope you’re not writing firmware for medical devices……” Guess what? Ah hah…. Be afraid, be very very afraid. Hacking pacemakers etc… Why do people spend so much time fucking with everyone instead of writing actual code that does something? Other than the rivers of cash shooting out of ATMs, of course…
It’s not cool to give people grief about not knowing something. Watchdogs are great for high reliability, but also for hacky abuses. You should certainly look into them.
The idea is that it’s an independent expiring timer that hard resets the system. Your code, when it’s working, re-fills the timer periodically. (“Feeds the watchdog.”) If your code fails in any way, the chip gets a hard reboot automatically.
How hardened, how independent, and thus how reliable the watchdog timer is, is worth thinking about. For instance, if a glitch in your code can disable the watchdog, that’s a path to failure. Most chips have watchdogs enabled in flash fuses, for instance, to make an accidental disabling less likely, but some take extra precautions.
Will a brownout that pulls the CPU down also pull the watchdog hardware down? What sets the timing on the watchdog? Can it be set into a permanent-loop state? Etc. There are tons of failure modes that watchdogs can/should help with, but it gets tricky.
Many microcontrollers have a long watchdog timer that you can use as an automatic wake-up-from-sleep if you don’t otherwise need the watchdog. Hacky, but helps with low-power applications.
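For anyone who wants to see the pattern described a few comments up in code, here’s a minimal, vendor-neutral sketch of a watchdog feed loop. Everything here is hypothetical: WDT_CTRL, WDT_FEED, the magic key value, and do_work() are stand-ins, and real parts have their own registers and unlock sequences, so check the datasheet.

```c
/* Vendor-neutral sketch of the feed-the-watchdog pattern.  WDT_CTRL,
 * WDT_FEED, the key value, and do_work() are hypothetical -- consult
 * your part's datasheet for the real registers and unlock sequence. */
#include <stdint.h>

#define WDT_CTRL   (*(volatile uint32_t *)0x40001000u)  /* hypothetical */
#define WDT_FEED   (*(volatile uint32_t *)0x40001004u)  /* hypothetical */

#define WDT_ENABLE     0x1u
#define WDT_FEED_KEY   0xA5A5A5A5u   /* many parts require a key/sequence */

static void watchdog_init(void)
{
    /* On many parts this is locked in by fuses/option bytes instead,
     * so buggy code can't simply switch the watchdog off. */
    WDT_CTRL = WDT_ENABLE;
}

static void watchdog_feed(void)
{
    WDT_FEED = WDT_FEED_KEY;          /* refill the countdown */
}

/* Placeholder for the firmware's real work. */
static void do_work(void) { }

int main(void)
{
    watchdog_init();

    for (;;) {
        do_work();

        /* Feed only after the work completes.  If do_work() hangs or
         * the loop is corrupted, the independent hardware timer expires
         * and hard-resets the chip -- no code of ours has to run for
         * the recovery to happen. */
        watchdog_feed();
    }
}
```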