This Week In Security: Bitwarden, Reverse RDP, And Snake

This week, we finally get the inside scoop on some old stories, starting with the Bitwarden Windows Hello problem from last year. You may remember that Bitwarden has an option to use Windows Hello as a vault unlock option. Unfortunately, the Windows credential API doesn’t actually encrypt credentials in a way that requires an additional Windows Hello verification to unlock them. So a derived key gets stored in the credential manager, and can be retrieved through a simple API call: no additional biometrics needed, even with the Bitwarden vault locked and the application closed.

There’s another danger that doesn’t even require access to the logged-in machine. On a machine that is joined to a domain, Windows backs those encryption keys up to the Domain Controller. The encrypted vault itself is available on a domain machine over SMB by default. A compromised domain controller could snag a Bitwarden vault without ever running code on the target machine. The good news is that this particular problem with Bitwarden and Windows Hello is now fixed, and has been since version 2023.10.1.

Reverse RDP Exploitation

We normally think about the Remote Desktop Protocol as dangerous to expose to the internet. And it is. Don’t put your RDP service online. But reverse RDP is the idea that it might also be dangerous to connect an RDP client to a malicious server. And of course, multiple RDP implementations have this problem. There’s rdesktop, FreeRDP, and Microsoft’s own mstsc that all have vulnerabilities relating to reverse RDP.

The technical details here aren’t terribly interesting. It’s all variations on the theme of not properly checking data from the remote server, and hence either reading or writing past internal buffers. This results in various forms of information leak and code execution problems. What’s interesting is the different responses to the findings, and then [Eyal Itkin]’s takeaway about how security researchers should approach vulnerability disclosure.
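As a toy illustration of the bug class (this is not actual RDP parsing code, just the general shape of the mistake), here is what trusting a server-supplied length field looks like next to the checked version:

```python
def parse_channel_data(packet: bytes) -> bytes:
    """Parse a toy length-prefixed message the way a careless client
    might: trust the server-supplied length field blindly."""
    claimed_len = int.from_bytes(packet[:2], "big")
    # VULNERABLE: no check that claimed_len fits inside the packet.
    # In C this pattern becomes an out-of-bounds read; Python just
    # returns a short slice, which hides the logic error.
    return packet[2:2 + claimed_len]

def parse_channel_data_safe(packet: bytes) -> bytes:
    """Same parse, but the remote length is validated first."""
    claimed_len = int.from_bytes(packet[:2], "big")
    if claimed_len > len(packet) - 2:
        raise ValueError("server-claimed length exceeds packet size")
    return packet[2:2 + claimed_len]
```

The C equivalent of the first function is where the information leaks come from: a `memcpy` or pointer walk driven by a length the malicious server chose.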

So first up, Microsoft dismissed a vulnerability as unworthy of servicing, then proceeded to research it internally and present it as a novel attack without properly attributing [Eyal] for the original find. rdesktop contained quite a few of these issues, but its developers were able to fix them in a handful of months. FreeRDP fixed some issues right away, in what could be described as a whack-a-mole style process, but a patch was also cooked up that would address the problem at a deeper level: changing an API type from the unsigned size_t to a signed ssize_t. That change took a whopping two years to actually make it out to the world in a release. Why so long?

Two reasons for that long time lag. First off, it was a hardening change, not a response to a single vulnerability. It would have prevented a bunch of them all at once, but wasn’t a required change to fix any of them individually. But even more importantly, this was an API change. It would break things. So, throw it into the major version branch and wait. And here’s where there’s a bit of a dilemma. Should a researcher blast the problem online, or wait patiently? There’s no single solid answer here, as every situation has its own complexities, but [Eyal] makes the case that security researchers ought to be more concerned with projects getting fixes applied, and not just content to score another CVE.
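To see why that one type change counts as hardening, consider what happens in C when a function’s error sentinel of -1 lands in an unsigned size_t. This toy Python model (not FreeRDP code) reinterprets the same 64-bit pattern both ways:

```python
def as_size_t(value: int, bits: int = 64) -> int:
    """Reinterpret a (possibly negative) value as an unsigned size_t
    of the given width, the way C implicitly would."""
    return value & ((1 << bits) - 1)

def as_ssize_t(value: int, bits: int = 64) -> int:
    """Reinterpret the same bit pattern as a signed ssize_t."""
    v = value & ((1 << bits) - 1)
    return v - (1 << bits) if v >= (1 << (bits - 1)) else v

# An error sentinel of -1 stored into an unsigned length becomes the
# largest possible size, a catastrophic value to hand to memcpy().
length = as_size_t(-1)
assert length == 2**64 - 1

# With a signed ssize_t, the same bits stay a detectable error code
# that a simple `if (len < 0)` check can catch.
assert as_ssize_t(length) == -1
```

With signed lengths, one bounds check at the API boundary catches a whole family of bad values instead of patching each caller individually.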

Crawl Networks with SSH-Snake

We just discovered this clever tool this week: SSH-Snake. The concept is simple. The script looks for any SSH private keys, then tries them against the list of known SSH hosts. For each host that accepts a key, the script runs again. It doesn’t drop any files on the filesystem, and runs automatically without intervention, compiling a nifty graph of accessible systems at the end. Definitely a worthwhile tool to keep in your digital toolbox.
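As a rough sketch of the idea (this is not SSH-Snake itself, which is a far more thorough bash script; the actual `ssh` connections and the recursive re-run are deliberately omitted here), the discovery half might look like:

```python
import os
import re

def find_private_keys(home: str) -> list[str]:
    """Look for files that start with an OpenSSH/PEM private key
    header under a directory (a tiny subset of what SSH-Snake checks)."""
    header = re.compile(rb"-----BEGIN (OPENSSH|RSA|EC|DSA) PRIVATE KEY-----")
    found = []
    for root, _dirs, files in os.walk(home):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    if header.match(f.read(64)):
                        found.append(path)
            except OSError:
                continue
    return found

def known_hosts_targets(known_hosts_text: str) -> list[str]:
    """Pull candidate hostnames out of known_hosts content, skipping
    hashed (`|1|...`) entries that can't be read back directly."""
    targets = []
    for line in known_hosts_text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", "|")):
            continue
        targets.append(line.split()[0].split(",")[0])
    return targets

# The real tool would now try each (key, host) pair and, on success,
# re-run itself on the remote machine to build the reachability graph.
```

The recursive step is what makes the graph interesting: every newly reachable machine contributes its own keys and known hosts to the search.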

Bits and Bytes

In an amusing turn of online play, Mandiant lost control of their X account for a while this week. It was a fun cat-and-mouse game as posts pushing crypto scams would appear, disappear, and appear again. One can only imagine the frantic work done behind the scenes as this played out. Hopefully we can share a Mandiant blog post about this in a few weeks. And yes, there’s an XKCD about that.

If you still have a LastPass account, you may have gotten emails this week about a master password requirement change in the works. The TL;DR is that LastPass has previously “required” a 12-character password. Starting soon, all passwords will actually have to be 12 characters long, including those on older accounts. It’d probably be best to get out ahead of that change anyway, if you have a shorter password.

It does seem a bit tone-deaf that 23andMe blames the victims for the recent account breaches there: “users used the same usernames and passwords used on 23andMe.com as on other websites that had been subject to prior security breaches, and users negligently recycled and failed to update their passwords following these past security incidents”. Except that’s technically correct. Users really were re-using passwords. And users really did opt in to sharing details with their genetic matches. The only real failure was that nobody at 23andMe spotted the credential stuffing attack as it was happening, though that’s admittedly difficult to distinguish from normal traffic. So probably an A- for the technical point. And a D for the delivery.
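For a sense of what spotting credential stuffing involves, here is a naive, purely illustrative heuristic. The event format and every threshold are invented for the example; real detection also has to contend with proxies and distributed botnets that spread attempts across thousands of source addresses:

```python
from collections import defaultdict

def flag_stuffing_sources(events, min_attempts=20, max_success_rate=0.1):
    """Flag source IPs whose login pattern looks like credential
    stuffing: many attempts, nearly all against distinct accounts,
    with a low success rate. `events` is an iterable of
    (source_ip, username, succeeded) tuples."""
    attempts = defaultdict(int)
    successes = defaultdict(int)
    accounts = defaultdict(set)
    for ip, user, ok in events:
        attempts[ip] += 1
        successes[ip] += bool(ok)
        accounts[ip].add(user)
    flagged = []
    for ip, n in attempts.items():
        spread = len(accounts[ip]) / n   # 1.0 = every attempt hit a new account
        rate = successes[ip] / n
        if n >= min_attempts and spread > 0.9 and rate <= max_success_rate:
            flagged.append(ip)
    return flagged
```

The hard part in practice is exactly the article’s point: a slow, distributed stuffing run can keep each individual source under any such threshold and look just like normal traffic.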

10 thoughts on “This Week In Security: Bitwarden, Reverse RDP, And Snake”

  1. >Should a researcher blast the problem online, or wait patiently?

    For me that is easy: if the software is being fixed, keep quiet, even if the full release that makes the fix the norm takes a while. Though hopefully the project is putting in smaller patches, where possible, that cover the worst holes, and releasing info that a more major change with security implications is in the works. Lots of folks won’t change to v2, or even bump up to v1.4, for years if v1.2 is working stable without a push anyway.

    It is not so easy to do a major change that might well require many changes to the other programs that rely on it. Breaking everything for ‘security’ doesn’t really fly; function is usually rather more important than absolute security, and if it wasn’t, the entire world would be much more closed and quiet (which might not seem like a bad thing, but I’d miss HAD and XKCD if browsers couldn’t exist, as they can’t be secure enough).

    1. I’ve not been in exactly this sort of position, but in similar dilemmas about warnings, I have a different preference for which parts to share and which to gloss over. I want to share enough that the people who both need to know and are willing to listen will hear what they should be wary of before it’s too late. Most people may not care about whatever warning I have, but the ones who do ought to be allowed to benefit while the ones who don’t can disregard the warning.

      I can wait to make public the details about how the problem could be reproduced; there are other sources of credibility, such as letting trusted parties reproduce it for confirmation. But generally, the people responsible for the problem will drag their feet and undersell the severity if I let them control the disclosure.

      So I won’t announce to everyone I see “Hey, you know, the side door will let anyone in if they jiggle the knob and lift up.” but I might say “I was able to get in the locked door without a key yesterday, it took about 10 seconds and no particular skill. We should get someone out to look at it sooner than later and maybe bar it when no-one’s around. “

      1. But even announcing the mere presence of the vulnerability without details is enough to trigger attackers’ interest.

        They have a choice between time consuming research of targets likely to not pay off, and time consuming research with a definite achievable reward…

        1. Indeed, as soon as you say ‘oh by the way …’ you have put enough information out there to let everyone that hears it narrow down where to look a fair bit.
          Also, I did say ‘if the software is being fixed’. If, as spaceminions suggests, they are doing nothing but dragging their feet, then you have to let everyone know and hope there is an alternative software that’s not too bad to transition to, or a method to lock down the flaw somewhere else in the software stack, even if it’s really annoying…

          1. Oh, sure, and if you’re sure your flaw is one that nobody else is going to find before it’s already fixed, then maybe you can give them some extra time. It’s just easy to be overconfident about how many people might have the same idea independently.

        2. Yeah, but at the same time, it’s easy to underestimate how easy it is to independently discover the same thing – even if the ones who have the problem don’t see it that way. A lot of times in history, people have independently invented the same thing at nearly the same time. Or in a cybersec context, people often become interested in looking for a group of similar flaws in unrelated software after the idea has circulated a bit. So if your discovery is “hey, I tested an idea that was going around and found that XYZ has a vulnerability too” and the company doesn’t bother to have a response ready anywhere near the end of the relevant disclosure window, or their response is “don’t worry about it”, well… Best let the customers of XYZ have a chance to be forewarned, even if it risks hastening the use of the flaw slightly.

  2. > Should a researcher blast the problem online, or wait patiently?

    If a major corporation dismisses your report so they don’t have to pay a bug bounty, IMMEDIATELY BLAST THE PROBLEM ONLINE. Simple as that.

    They are telling you “haha it’s no big deal we don’t care” — so post it.

    Actually a critical vulnerability that results in the power grid getting hacked? Well, they should have thought about that before trying to grub you out of the pittance of a bug bounty.

    Obviously you should show some lenience to a FOSS team if their bug triage sucks.

    1. Hard agree. I think it should be standard bug-hunter policy that if a vuln (an actual one with provable damage potential) is immediately closed as NOTABUG by a big corporation just to save on bug bounty money, that should be treated as full authorization for immediate public disclosure both of the problem AND the corporation’s response… with caveats.

      If the vuln is with a single corporation’s product? Absolutely. If, like this one, it’s with multiple companies, one reacts “not worth fixing, fsck off”, one reacts “we’ll fix it, but the fix requires API breaking changes and thus can’t go out until the next major version”, and one goes “yeah alright, beep boop, it’s patched”… would it be unethical to (rightfully) put the “not a bug” company on blast, knowing the “we’re fixing it but the release cycle is forcing us to wait” company will be harmed in the process? That’s the difficult question…

  3. RDP is put online all the time, often with mitigating measures like a firewall whitelist. There are more modern ways of doing it, like using a VPN, but plenty of Really Serious (TM) organisations are being run or administered that way.
