This Week In Security: 1Password, Polyglots, And Roundcube

This week we got news of a security incident at 1Password, and we’re certain we aren’t the only ones hoping it’s not a repeat of what happened at LastPass. 1Password has released a PDF report on the incident, and while there are a few potentially worrying details, put into context it doesn’t look too bad.

The first sign that something might be amiss was an email from Okta on September 29th — a report of the current list of account administrators. Okta provides authentication and Single Sign-On (SSO) capabilities, and 1Password uses those services to manage user accounts and authentication. The fact that this report was generated without anyone from 1Password requesting it was a sign of potential problems.

And here’s the point where a 1Password employee was paying attention and saved the day, by alerting the security team to the unrequested report. That employee had been working with Okta support, and sent a browser session snapshot for Okta to troubleshoot. That data includes session cookies, and it was determined that someone unauthorized managed to access the snapshot and hijack the session, Firesheep style.
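That "browser session snapshot" was reportedly an HTTP Archive (HAR) file, and HAR files faithfully record every cookie that crossed the wire during the capture. As a sketch of why sharing one unscrubbed is risky (the function name and file layout here are illustrative, not anything from the incident report), a few lines of Python are enough to enumerate the cookies hiding in a HAR:

```python
import json

def list_har_cookies(har_path):
    """Return (name, domain) pairs for every cookie captured in a HAR file.

    HAR files record full request/response data, including session
    cookies -- which is exactly what makes a leaked support snapshot
    valuable to an attacker.
    """
    with open(har_path) as f:
        har = json.load(f)
    found = []
    for entry in har.get("log", {}).get("entries", []):
        for part in ("request", "response"):
            for cookie in entry.get(part, {}).get("cookies", []):
                found.append((cookie.get("name"), cookie.get("domain", "")))
    return found
```

Running a scrubber like this (or simply deleting the cookie fields) before handing a HAR to anyone, support vendor included, would have taken the session-hijack option off the table.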

Okta logs seemed to indicate that the snapshot hadn’t been accessed, and there weren’t any records of other Okta customers being breached in this way. This pointed at the employee laptop. The report states that it has been taken offline, which is good. Any time you suspect malicious action on a company machine, the right answer is power it off right away, and start the investigation.

And here’s the one part of the story that gives some pause. Someone from 1Password responded to the possible incident by scanning the laptop with the free edition of Malwarebytes. Now don’t get us wrong, Malwarebytes is a great product for finding and cleaning the sort of garden-variety malware we tend to find on family members’ computers. The on-demand scanning of Malwarebytes free just isn’t designed for detecting bespoke malicious tools like a password management company should expect to be faced with.

But that turns out to be a bit of a moot point, as the real root cause was a compromised account in the Okta customer support system, as revealed on the 20th. The Okta report talks about stolen credentials, which raises a real question about why Okta support accounts aren’t all using two-factor authentication.

DICOM Polyglot

Researchers at Shielder were running a red-team test against a customer, and discovered a vulnerable install of Orthanc, software used to handle medical imaging. So, they rolled up their sleeves, reverse engineered the patch, and developed an exploit. And in order to exploit this particular flaw, they used one of my favorite tricks — a polyglot file. That’s when a given file is valid when interpreted as multiple file types.

The flaw is an unrestricted file upload. Important to note here, the unrestricted element is the file location. The file must still be a valid DICOM image file, but once uploaded it can be written anywhere on the file system. Now, DICOM files are weird. Namely, the first 128 bytes are reserved as an “Application Profile”, and are not used as magic bytes to determine whether a file is valid DICOM. It’s like it’s a custom-made format for building a polyglot. One might go so far as to say that’s a security weakness within the DICOM file format itself.

The question becomes, what can you do with just 128 bytes? Normally I’d try to think of some way to stuff 128 bytes of shellcode in there, and write it over some binary that’s sure to be run. But that’s way too complicated, given the tools on hand. The solution Shielder went with was to put a brief JSON config in those 128 bytes, and throw in a NULL to get the JSON parser to ignore the rest of the file. That config turns on an API endpoint that executes any Lua script you send it, likely intended for debugging. Another API call reboots the server to apply the new settings, and the nut is cracked. Polyglots are fun!
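To make that concrete, here’s a minimal sketch of how such a polyglot could be assembled. The helper name is invented, the config payload is a placeholder rather than Orthanc’s real option names, and whether a JSON parser actually stops at the NULL byte depends on the parser — Shielder’s target evidently tolerated it:

```python
import json

DICOM_PREAMBLE_LEN = 128  # first 128 bytes are not checked by DICOM parsers

def make_dicom_json_polyglot(dicom_body: bytes, config: dict) -> bytes:
    """Pack a JSON document into the 128-byte DICOM preamble.

    A NUL byte right after the JSON is intended to make a forgiving
    parser stop reading before it hits the binary payload, so the same
    file reads as JSON to a config loader and as a valid DICOM image
    (preamble + "DICM" magic) to the imaging software.
    """
    blob = json.dumps(config, separators=(",", ":")).encode() + b"\x00"
    if len(blob) > DICOM_PREAMBLE_LEN:
        raise ValueError(f"config too big: {len(blob)} > {DICOM_PREAMBLE_LEN} bytes")
    preamble = blob.ljust(DICOM_PREAMBLE_LEN, b"\x00")
    return preamble + b"DICM" + dicom_body
```

The 128-byte budget is tight, which is exactly why a terse JSON config beats shellcode here.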

Patch Your Roundcube!

The Roundcube webmail platform released a series of updates on the 14th, fixing a 0-day Cross Site Scripting (XSS) attack that was being used in the wild. The exploit used an svg tag with base64 encoded HTML to bypass the sanitization code in Roundcube. This one is nasty, in that it simply requires a user to view the email in order to run JS in the browser, with full access to the webmail interface.
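To illustrate the bug class — this is a deliberately naive stand-in, not Roundcube’s actual sanitizer — a filter that scans for dangerous substrings can’t see markup that has been base64-encoded into a data: URI inside an SVG:

```python
import base64

def naive_sanitize(html: str) -> str:
    """A toy blocklist sanitizer of the kind this bug class defeats:
    it only looks for obviously dangerous substrings in the raw HTML."""
    for bad in ("<script", "onerror=", "javascript:"):
        if bad in html.lower():
            return ""
    return html

payload_html = "<script>alert(document.cookie)</script>"
smuggled = base64.b64encode(payload_html.encode()).decode()
# The dangerous markup is hidden inside a base64 data: URI, so the
# substring checks never see it -- but the mail client will decode it.
email_body = f'<svg><use href="data:image/svg+xml;base64,{smuggled}"/></svg>'
assert naive_sanitize(email_body) == email_body  # slips straight through
```

The robust fix is to decode and re-parse anything reachable through a data: URI (or forbid data: URIs outright), rather than pattern-matching the outer layer.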

To date, this exploit has been seen in the wild used against European governments and NGOs. With that said, enough details about the exploit have been released to trivially put together a Proof of Concept. And sending email is easy, so it’s probably just a matter of time until this exploit is included with all the other spam and malware in our inboxes.

Roundcube is used widely, and gets included in other solutions like iRedMail, but usually doesn’t get updated automatically. Thankfully the update process is pretty simple, though I did hit a headscratcher on one of the instances I worked to upgrade. There, the permissions on the config file were modified during the upgrade, and an unhelpful error message was accompanied by silence in the error logs. Fixing the permissions made everything work as expected.

Zenbleed from the Browser

When we covered Zenbleed, one of the worst-case scenarios was the flaw being exploitable from right in the browser. The good news is that none of the JavaScript engines that Trent and David tested ever use the vzeroupper instruction that triggers the bug. However, when paired with another exploit to escape the JS interpreter and run actual shellcode, Zenbleed does work even within the browser sandbox.

For bonus points, this attack makes the captured system memory available to the JavaScript code, and the test page just displays it as part of the web page. In a real attack, that data would silently get uploaded for later analysis. So, Zenbleed can’t run simply from JavaScript, but with a bit of work, and another exploit, it can run from within the browser. Click through to the article to see the code and Russ’s excellent notes on it.

Trusting Trust Goes Open Source

Forty years ago, Ken Thompson published his landmark paper, “Reflections on Trusting Trust” (PDF). It turns out, the actual source code referenced in that tale was never released — until now. The demo is a compiler that compiles a password stealer into any binary it touches, including another compiler. If you’re using this compiler, even completely open source code isn’t trustable.

In response to a keynote from Thompson earlier this year, Russ Cox sent an email asking for the legendary source, and got a copy, much to our delight. The actual code is short, and only has a few magic bits to make it work. What’s even more interesting is that the self-replicating backdoor did briefly escape into the wild, but was squashed because of a bug: the compiler grew in size each time it was compiled.
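As a toy model of the trick (all names here are invented, and Thompson’s original operated on C source rather than strings), the compiler needs just two pattern matches — one for the login program, one for itself:

```python
def evil_compile(source: str) -> str:
    """Toy model of Thompson's "Trusting Trust" backdoor: a compiler
    that recognizes two specific programs and miscompiles both."""
    backdoor = 'if password == "ken": grant_access()  # injected backdoor'
    if "def check_login" in source:
        # Trigger 1: compiling the login program -> slip in the backdoor,
        # even though the login source is completely clean.
        return source + "\n" + backdoor
    if "def evil_compile" in source:
        # Trigger 2: compiling the compiler itself -> re-insert this whole
        # trick, so rebuilding from pristine compiler source still yields
        # an infected binary. This is the self-replicating step.
        return source + "\n# (self-replicating patch re-inserted here)"
    # Everything else compiles honestly, so the backdoor stays invisible.
    return source
```

The self-replication in trigger 2 is the hard part in real life — and, per the story above, it’s also where the escaped version tripped up, growing a little with every rebuild.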

Bits and Bytes

There’s a newly-discovered malware framework, Stripedfly, that has quietly been infecting Windows and Linux computers for the last six years. It was first dismissed as a simple crypto-miner, but more recent analysis has found it to be a much more comprehensive tool, probably a true Advanced Persistent Threat — APT being a nice way of saying government-backed malware.

Open redirects are usually a bad idea, but they’re extra double bad when they’re in an OAuth login flow. This was a problem in the Harvest time-tracking system, particularly in the integration with Outlook Calendar. Convince a user to click on a link using that redirect, and the OAuth token is leaked.
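The standard fix is to validate redirect targets against an allowlist of hosts you control before honoring them. A minimal sketch (the hostnames here are hypothetical, and this is the general defense, not Harvest’s actual patch):

```python
from urllib.parse import urlparse

ALLOWED_REDIRECT_HOSTS = {"app.example.com"}  # hypothetical allowlist

def safe_redirect_target(url: str) -> bool:
    """Only absolute HTTPS URLs pointing at hosts we own may receive the
    post-login redirect -- and, with it, any OAuth token riding along
    in the query string or fragment."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_REDIRECT_HOSTS

assert safe_redirect_target("https://app.example.com/calendar")
assert not safe_redirect_target("https://evil.example.net/?steal=token")
```

Note the scheme check: it also rejects protocol-relative tricks like `//evil.example.net/`, which an allowlist of bare hostnames alone can miss.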

12 thoughts on “This Week In Security: 1Password, Polyglots, And Roundcube”

    1. When I was younger I disassembled things like the 1701 and 1704 viruses (really really old floppy disk viruses – the 1701 refers to the size by which a file increased with additional compressed code, and was also a reference to Star Trek’s USS Enterprise – NCC 1701; the 1704 was just a variant) by hand, with a pencil and some very large sheets of paper, one byte of hexadecimal machine code at a time, using an 8088 datasheet! Now I am old, I read about what the kids today are at and think about the Four Yorkshiremen Sketch ( https://www.youtube.com/watch?v=VKHFZBUTA4k ).

    1. For a personal account, you are right: that’s a bad idea.

      For large enterprises, it’s not that simple. If you as a person lose your password vault, you only hurt yourself. If as an employee you lose access to your password vault, you could potentially put a company out of business. I’ve seen more than one company lose access to their cold Bitcoin wallet and die.

      At a large company, there are SEVERAL different systems with different password requirements, some recent, some ancient, and they all must work. If you have an offline password manager and it breaks somehow, the systems you manage would break too. If there’s a cloud component, someone with management or service access could open your vault and get your passwords.

      Another feature of cloud password managers: audit trails. If your password manager is offline, you can extract the plaintext and leak it (on purpose or not), and it’s difficult to pinpoint when and where the password leaked. An online password manager records exactly when each password was accessed, on which computer, and by whom.

      There are downsides, as with everything, but companies measure them against the upsides, and decide it’s a better choice for them.

      1. If the password manager leaks, it’s broken, and you probably can’t trust the audit trail.

        There are plenty of ways to keep passwords safe, ensure continuity if someone loses them, and even share them, without relying on the cloud.

        1. > If the password manager leaks, it’s broken, and you probably can’t trust the audit trail.

          Not really. A cloud password manager has different subsystems, so even if there’s a leak somewhere, it does not mean everything is broken.

          > There’s plenty of ways to keep passwords safe, ensure continuity of someone loses them, and even share them, without relying on the cloud.

          Yes, surely! But for a lot of companies (both small and huge, and everything in between) the cost of maintaining an in-house password manager does not make sense. Or an offline one.

  1. “Any time you suspect malicious action on a company machine, the right answer is power it off right away, and start the investigation.”

    Are we sure about this? I’ve known and read plenty of cyber-security “experts” who claim otherwise, so as not to alert the perpetrator and/or to maintain a chain of evidence during the investigation. Seriously, I’d like some comments about this.

    1. Powering it off seems like the best choice to me, and ought not be that alarming to whoever remotely owns your system, especially if it is a windoze machine – updates do that often. You have to ask yourself: are you more interested in hunting the perpetrator, risking ever more of your data and systems by leaving that vulnerable spot open, or in patching the hole? I’d suggest it would be a rare individual or company that would choose to leave the rest exposed in the hope of learning something more.

      The only reason to leave it on that really makes sense to me is that you are not the computer tech and don’t know whether what you are seeing really is a problem – if it is your company’s in-house code that has gone wonky, some blip in the hardware, a background update slowing things down, or something similarly insignificant security-wise, leaving it on for the technician to look at first makes sense. It might not even be anything more than a PEBCAK, and if not, anything in the temp files and memory that could be lost on a reboot may be useful for knowing what caused the behaviour.

    2. There are two approaches:
      – hunt them with the dogs
      – stop the damage

      If you want to find who is behind the attack, and this matters more than the damage done, keep systems running, try to limit the damage by working around the leaks (like removing permissions from affected accounts), and put a team on overnight grabbing as much evidence as possible.

      If your systems have priority over identifying the attackers, pull the plug. Disconnect every infected device from the network, lock every account, rebuild from backups. As most attackers aren’t in the same jurisdiction as you, and going to the police usually accomplishes little, this is what most companies do.

      1. I see that as a mostly false dichotomy. Because any really sophisticated attacker is also going to work on scrubbing logs. (Or the in-process ransomware attack will erase evidence.) Yanking the power cord preserves the evidence. Yes, there are some instances where you get more data by waiting, but that’s beginning to sound like a Honeypot more than a deployed system.
