Samba has a very serious vulnerability, CVE-2021-44142, that was just patched in new releases 4.13.17, 4.14.12, and 4.15.5. Discovered by researchers at Trend Micro, this unauthenticated RCE bug weighs in at a CVSS 9.9. The saving grace is that it requires the fruit VFS module to be enabled, which is used to support macOS client and server interop. If that module is enabled, the default settings are vulnerable. Attacks haven't been seen in the wild yet, but go ahead and get updated, as PoC code will likely drop soon.
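The trigger is just a line or two of configuration. A hypothetical smb.conf share along these lines (the share name and path are made up for illustration, not taken from any particular distribution's defaults) is enough to load the fruit module and expose the vulnerable code path:

```
# Hypothetical smb.conf fragment. Loading the fruit VFS module for
# macOS interop is what pulls in the vulnerable code; with it enabled,
# the module's default settings are the vulnerable ones.
[timemachine]
    path = /srv/samba/timemachine
    vfs objects = fruit streams_xattr
```

If a share like this exists anywhere in your config, treat the update as urgent rather than routine.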
Crypto Down the Wormhole
One notable selling point of cryptocurrencies and Web3 is smart contracts: little computer programs running directly on the blockchain that can move funds around very quickly, without intervention. It's quickly becoming apparent that the glaring disadvantage is that these are computer programs that can move money around very quickly, without intervention. This week brought another example of smart contracts at work, when an attacker stole $326 million worth of Ethereum via the Wormhole bridge. A cryptocurrency bridge is a service that exists as linked smart contracts on two different blockchains. These contracts let you put a currency in on one side and take it out on the other, effectively transferring currency to a different blockchain. Helping us make sense of what went wrong is [Kelvin Fichter], also known appropriately as [smartcontracts].
Alright. I figured out the Solana x Wormhole Bridge hack. ~300 million dollars worth of ETH drained out of the Wormhole Bridge on Ethereum. Here's how it happened.
— smartcontracts (@kelvinfichter) February 3, 2022
When the bridge makes a transfer, tokens are deposited in the smart contract on one blockchain, and a transfer message is produced. This message is like a check from a digital checking account, which you take to the other side of the bridge to cash. The other end of the bridge verifies the signature on the "check", and if everything matches, your funds show up. The problem is that on one side of the bridge, the verification routine could be swapped out for a dummy routine by the end user, and the code didn't catch it.
It's a hot check scam. The attacker created a spoofed transfer message, provided a bogus verification routine, and the bridge accepted it as genuine. The majority of the money was transferred back across the bridge, where other users' valid tokens were being held, and the attacker walked away with 90,000 of those ETH tokens.
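To make the failure mode concrete, here's a minimal Python sketch of the pattern. This is not Wormhole's actual code (that's a Solana program, and the names here are invented); it just models the essential mistake of letting the caller pick the verification routine instead of pinning the trusted one:

```python
# Toy model of the bridge bug: redeem() is supposed to verify a transfer
# message against the bridge's trusted signature check, but it accepts
# the verifier from caller-supplied input instead of a fixed constant.

def real_verify(message: str, signature: str) -> bool:
    # Stand-in for proper cryptographic signature verification.
    return signature == f"signed({message})"

def attacker_verify(message: str, signature: str) -> bool:
    # A dummy routine the attacker controls: approves everything.
    return True

def redeem(message: str, signature: str, verify=real_verify) -> None:
    # BUG: trusting a caller-supplied `verify` is the whole vulnerability.
    # The fix is to hard-code the trusted routine and reject substitutes.
    if verify(message, signature):
        print(f"minting tokens for: {message}")
    else:
        print("transfer rejected")

# A forged "check" plus a bogus verifier sails right through:
redeem("pay attacker 90,000 ETH", "garbage", verify=attacker_verify)
```

The real contract's equivalent of the `verify` parameter was an account address the caller supplied, but the logic error is the same: the check that was supposed to be the bridge's gatekeeper was pluggable by the very party being checked.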
The 9.8 CVE That Wasn’t
Dealing with security reports can be challenging. For example, English isn’t everyone’s first language, so when an email comes in with spelling and grammar mistakes, it would be easy to dismiss it, but sometimes those emails really are informing you of a severe problem. And then sometimes you get a report because someone has discovered Chrome’s DevTools for the first time, and doesn’t realize that local changes aren’t served to everyone else.
CVE-2022-0329 was one of those. The package in question is the Python library loguru, which boasts "Python logging made (stupidly) simple". A serious CVE in a logging library? The internet collectively braced for another log4j-style problem. Then more people started looking at the vulnerability report and the bug report, and casting doubt on the validity of the issue, so much so that the CVE has been revoked. How did a non-bug get rated as such a high-severity security issue that GitHub was even sending out automated alerts about it?
The theoretical vulnerability was a deserialization problem, where the pickle library, included as a dependency of loguru, does not safely deserialize untrusted data. That's a valid problem, but the report failed to demonstrate how loguru would allow untrusted data to be deserialized in an unsafe way.
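The underlying pickle behavior is real enough, and easy to demonstrate. This short, self-contained snippet shows why deserializing untrusted bytes is game over:

```python
import pickle

# Any class can define __reduce__ to tell pickle how to reconstruct it.
# A malicious pickle abuses this to run an arbitrary callable on load.
class Exploit:
    def __reduce__(self):
        # Harmless stand-in for attacker code; this could just as easily
        # be os.system with a shell command.
        return (print, ("arbitrary code ran during unpickling!",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # the callable runs here, at load time
```

The catch is who gets to build that payload in the first place, which is exactly where the loguru report fell apart.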
There's a concept at play here, the "airtight hatchway". In any codebase or system, there will be a point where manipulating program data can lead to code execution. This is behind the airtight hatchway when performing that attack requires already having control over the program. In this case, if you can build the object that pickle will deserialize, you already have arbitrary code execution. That's not to say it's never appropriate to fix such an instance, but that's code hardening, not fixing a vulnerability.
That's where this went off the rails. [Delgan], the developer behind loguru, was convinced this wasn't a true vulnerability, but he wanted to do some code hardening around the idea, so he marked the original vulnerability report as accepted. This set the automated machinery in motion, and a CVE was issued. That CVE was rated as extremely serious, based on a naive understanding of the issue, possibly also as an automated action. This automated frenzy continued all the way to a GitHub advisory, before someone finally stepped in and cut the power to the out-of-control automaton.
Windows EoP PoC
In January, Microsoft patched CVE-2022-21882, an Escalation of Privilege in the Win32k code of Windows. Don't let the name fool you, it's present in 64-bit versions of Windows, too. If you're behind on your updates, you might want to get busy, as a Proof-of-Concept has now dropped for this bug. It has been reported as a patch bypass, making it essentially the same underlying problem as CVE-2021-1732.
QNAP Force Pushed an Update, and Users Are Ticked
QNAP and other NAS manufacturers have been forced to step up their security game, as these devices have become yet another tempting target for ransomware thieves. So when QNAP discovered a flaw that was being exploited in the "Deadbolt" malware campaign, they opted to force push the update to every user that had auto-update enabled. This means that where an update would normally install and then ask for permission to reboot, this one rebooted spontaneously, possibly causing data loss in the worst case.
QNAP has given their thoughts in a Reddit thread on the subject, and there’s some disagreement about how exactly this worked. At least one user is quite emphatic that this feature was disabled, and the update still auto-installed. What’s going on?
There is an official answer. In an earlier update, a new feature was added, the Recommended Version. This serves as an automatic update, but only when there’s a serious issue. This is the setting that allows forced pushes, and it defaults to on. (In fairness, it was in the patch notes.) Dealing with updates on appliances like these is always difficult, and the looming threat of ransomware makes it even stickier.
So what do you think, was QNAP just taking care of customers? Or was this akin to the notice of destruction of Arthur Dent's house, posted in the basement, in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'? Let us know in the comments, or if Discord is your thing, in the new channel dedicated to the column!
Share and enjoy. Reality restored. Real men use irc btw.
Real men don’t brag about what real men do or use ;)
If QNAP were more skilled, they would simply restart the affected services, because it wasn't a kernel exploit they were patching. For good measure, they could put up a "reboot needed" warning with a countdown timer to force a reboot.
I agree, but here’s a counter-point for the sake of conversation:
That forced reboot timer would almost certainly go off for all the same people who got bitten by the forced reboot that actually happened, since users who didn’t open their NAS’s UI to see the “Recommended Version” option in the previous update probably wouldn’t have seen the timer counting down.
Anyone taking care of their NAS will have some way to receive notifications from it – e.g. Synology can send push notifications to your phone when a backup fails etc.
I'd say there are more than enough elements in common with poor Mr Dent's story here, but at least it was for a good reason; hyperspace bypasses (*oops, sorry, data security) are vitally important…
Seriously, I think they probably took the right approach, given how low the level of contact they can make with all the owners/users/IT managers using their products is. If you can push out a message and know all the users will get it and update ASAP, but at their convenience, you really should do it that way, so the user stays in control. But otherwise, serious flaws in gear like this warrant it, since most users probably never do any admin on it; it just sits in its corner doing its job, never ever to be thought of, at least until it goes wrong… (For instance, even a tech savvy person like myself failed to realize my fancy headphone mixer amp thing was running very, very old firmware, and one of the minor flaws I'd experienced with it may well now be fixed, because only today, some 7 years late, while debugging what seems to be broken Toslink, did I think to look if there was an update. Obviously this isn't internet connected directly, it's just another example of a headless device.)
As with all changes to settings like this, even if you make it a jump-up-and-down, shouting-at-the-top-of-its-lungs obvious addition in the patch notes, it is still going to be easy to miss. Or, especially with language barriers, easy to assume it doesn't affect you, since you already have auto updates turned off and this feature sounds like it's adding more granular control to auto-updates, not an override of your existing setting…