This Week In Security: Good Faith, Easy Forgery, And I18N

There’s a danger in security research that we’ve discussed a few times before. If you discover a security vulnerability on a production system, and there’s no bug bounty, you’ve likely broken a handful of computer laws. Turn over the flaw you’ve found, and you’re most likely to get a “thank you”, but there’s a tiny chance that you’ll get charged with a computer crime instead. Security research in the US is just a little safer now, as the US Department of Justice has issued a new policy stating that “good-faith security research should not be charged.”

While this is a welcome injection of good sense, it would be even better for such a protection to be codified into law. The other caveat is that this policy only applies to federal cases in the US. Other nations, or even individual states, are free to bring charges. So while this is good news, continue to be careful. There are also some caveats about what counts as good faith: if a researcher uses a flaw discovery to extort, it’s not good faith.

Digital ID

In New South Wales, Australia, citizens can use digital driver’s licenses. This is done via the Service NSW app, available on Android and iOS. What could possibly go wrong? The first glaring problem: it’s a terrible idea to voluntarily hand your phone to a law enforcement officer. That aside, the app generates the ID image on the fly from data stored on the device. On a jailbroken phone, this is trivial to modify, but on any other iPhone, one can manipulate the app’s data using a backup and restore. Service NSW encrypts this data… using a 4-digit numeric code. It’s trivial to manipulate the data stored on the phone, and therefore the ID presented. Bizarrely, after the initial pull, the app never verifies its data store against the official database. The app even includes a pull-to-refresh function that claims to update the ID data. This function updates the date, time, and QR code, but not the potentially spoofed data.
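To put that 4-digit “encryption” in perspective, here’s a toy sketch in Python. The key derivation and cipher below are hypothetical stand-ins (the app’s actual scheme isn’t reproduced here); the point is simply that a four-digit numeric key space falls to brute force instantly:

```python
import hashlib

def derive_key(pin: str) -> bytes:
    # Hypothetical key derivation (assumption): the app's real scheme is not shown here.
    return hashlib.sha256(pin.encode()).digest()

def toy_encrypt(data: bytes, pin: str) -> bytes:
    # Toy XOR cipher standing in for the app's encryption; XOR is its own inverse.
    key = derive_key(pin)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A 4-digit numeric key means only 10,000 candidates -- brute force finishes instantly.
blob = toy_encrypt(b'{"name": "Jane Citizen"}', "1234")
recovered_pin = None
for guess in (f"{n:04d}" for n in range(10000)):
    if toy_encrypt(blob, guess).startswith(b'{"name"'):
        recovered_pin = guess
        break
```

Even on modest hardware, trying all 10,000 candidates takes well under a second.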

The ability to modify an ID, as well as spoof someone else’s, means that the app makes identity theft painfully easy. The QR code does pull up-to-date information when scanned, but only the name and under-18 status. The picture isn’t part of that data. Steal an ID, slap your picture on it, and the QR code will check out. Service NSW has responded, issuing a statement that clearly indicates they don’t understand the problem:

This issue is known and does not pose a risk to customer information. The blogger has manipulated their own Digital Driver Licence (DDL) information on their local device. No other customer data or data source has been compromised. It also does not pose any risk in regard to unauthorised access or changes to backend systems such as Drives. Importantly, if the tampered licence was scanned by police, the real time check used by NSW Police (scanning mobipol) would show the correct personal information as it calls on DRIVES. Upon scanning the licence it would be clear to law enforcement that it has been tampered with. Altering the DDL is against the law. The DDL has been independently assessed by cyber specialists and is more secure than the plastic card.

Just Here For the i18ntranslation

Bonita is a business automation platform, mainly designed to let businesses put together workflows with minimal code. It’s a Java application, typically running on Tomcat, and distributed as a Docker image among other channels. That Docker image, with its more than five million downloads, had a big problem. The web.xml file contains filter stanzas used for controlling how requests are handled. A pair of those filters were intended to match i18n (internationalization) files, and deliver those endpoints without any authorization checks. This makes sense, as it allows a user to change the interface language on the login page. But it’s a naive filter, literally matching any URL containing i18ntranslation. So any endpoint can be appended with ;i18ntranslation, and an unauthorized user has access. Whoops! The Docker image and other releases have been updated to fix the issue.
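As a rough sketch (the function below is a simplified stand-in, not Bonita’s actual filter code), the flawed exclusion amounts to a naive substring match:

```python
# Toy sketch (assumption: simplified stand-in for the web.xml exclusion filter logic).
def skips_auth(url: str) -> bool:
    # The vulnerable filter effectively excluded from authorization checks
    # any URL containing the literal token "i18ntranslation".
    return "i18ntranslation" in url

assert not skips_auth("/bonita/API/identity/user/1")              # normally protected
assert skips_auth("/bonita/API/identity/user/1;i18ntranslation")  # check bypassed
```

Appending a path parameter like `;i18ntranslation` doesn’t change which servlet handles the request, but it does change what the substring check sees — hence the bypass.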

Zoom Fixed, Update!

First, if you have Zoom installed, go check the version. If it’s older than 5.10.4, go trigger an update. And if you run Zoom on Linux, you’ll probably have to download the installer again manually to update, though that makes things a bit safer in this case.

With that out of the way, let’s talk about the series of issues that could have allowed Remote Code Execution (RCE). Zoom does XMPP messaging, which is massages messages over XML. Zoom also sends control messages over this XML stream. The trick is that the server uses one library to validate those XML messages, and the client uses a different one, with different quirks. Sound familiar? Classic request smuggling material. One of the fun tricks is to send a clusterswitch message, pointing a client to a different server, potentially controlled by an attacker.
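Here’s a toy illustration of the parser-differential idea (the two functions below are made-up stand-ins, not the actual libraries Zoom used): a stricter “server” check and a sloppier “client” parser disagree about the very same stanza, which is all a smuggling attack needs:

```python
import re

def server_allows(stanza):
    # "Server" validator: rejects a stanza whose very first bytes are a
    # <clusterswitch> control tag.
    return not stanza.startswith("<clusterswitch")

def client_control_tag(stanza):
    # "Client" parser: more forgiving, skips leading whitespace before
    # reading the tag name.
    m = re.match(r"\s*<(\w+)", stanza)
    return m.group(1) if m else None

smuggled = "  <clusterswitch server='attacker.example'/>"
assert server_allows(smuggled)                          # passes server validation
assert client_control_tag(smuggled) == "clusterswitch"  # client still acts on it
```

The real quirks were subtler than leading whitespace, but the shape of the bug is the same: two implementations, one input, two interpretations.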

If a MitM attack wasn’t bad enough, an attacker can then send an “update” on Windows, consisting of a .exe installer and a .cab file to install. The running Zoom client checks the exe to confirm that it’s signed, then executes it. A modern Zoom installer also confirms the cab file signature, but a downgrade attack is possible. Send an older installer, like version 4.4, along with a malicious .cab file. The exe is signed, so Zoom runs it, and this older version doesn’t check the .cab, leading to easy RCE. The request smuggling was fixed server-side in February, but the client fixes didn’t land until April, in 5.10.4.

Quick Tip

This week, I was helping a friend think through how to configure a Google account for some unorthodox utility usage. He was forced to turn on two-factor authentication, but found that to be quite the pain, as he re-installs Android often for development and testing. If only, we mused, you could install Google Authenticator on a Linux machine and back up the key yourself. And thus this tip, as you can indeed do this. Google Authenticator is just a TOTP (Time-based One-Time Password) client. It takes a secret key and the current time, and runs them through an algorithm to produce a (in this case) 6-digit code.
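The algorithm (standardized as RFC 6238, building on RFC 4226’s HOTP) is simple enough to sketch in a few lines of Python. This is a from-scratch illustration, not Google Authenticator’s actual code:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30, now=None):
    # Decode the base32 secret, padding it out to a multiple of 8 characters.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor is the number of 30-second periods since the Unix epoch.
    counter = struct.pack(">Q", int((time.time() if now is None else now) // period))
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the last nibble picks a 4-byte window.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This matches the RFC 6238 test vectors: the 20-byte ASCII secret “12345678901234567890” (base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`) at T=59 yields the 8-digit code 94287082.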

So how do you get that secret key out of your device? On a rooted phone it’s easy enough to extract from the sqlite database. Thankfully, the authenticator app can export a saved key as a QR code. Capture the data contained in the QR code, and then use this handy Python script to convert it back to the raw secret. (In many cases, you can even get the secret key directly by saying that the QR code didn’t work.) From there it’s an easy command: oathtool --totp -b secret_key

If you want to see how TOTP works under the hood, we wrote about that a while back.

26 thoughts on “This Week In Security: Good Faith, Easy Forgery, And I18N”

  1. So 16-year-old learner drivers can now easily generate perfect IDs with changed birthdates to enter licensed premises and buy all the alcohol and tobacco they want. So much better than a physical card!

    1. I don’t trust having important data on a device that will stop functioning anywhere in the next 24 hours (or immediately when mishandled).
      Say I have an accident with my bicycle and fall into a ditch, wetting my phone. I have no other form of identification on me (my bank cards are also on my phone). How can emergency services check my driver’s license (which in this country is valid as ID) to warn my relatives (or even pull up my medical file)?

      On the other hand, a plastic-card driver’s license in the center of a wallet will survive being under water, in the heat, or in the cold for years, and will even survive modest fires.

      People joke about my fat wallet with 10 plastic cards and at least €100 in cash, but it has saved me from numerous inconveniences.

      1. Sadly, it is challenging to explain these types of scenarios to those who are clueless about tech and its potential issues. I had a customer that required their office access-control system to only use smartphone-loaded credentials. I explained multiple cases where having only a smartphone credential would be a big issue… I suggested instead having smartphone credentials in addition to other access credentials, to minimize user access issues. Whether they listen is still uncertain. Sometimes you can only lead a horse to water…

        1. I think the reasoning is that in that jurisdiction, the driver’s licence is only legally valid with the police (other uses are just a convenience). If you are stopped by the police and asked for ID, they will be checking it against the government copy and you would be caught. If you were using it for some underage thing, that would be fraud, but someone would have to catch up with you in order to prosecute it.

      2. I ran into just this in an airport lounge, with boarding passes, tickets, and my only means to contact my wife being my phone. I dropped it in a large coffee. Thankfully the staff had a crude DIY toolbox, and I dismantled the phone, dried it, reassembled it, powered it up, and wrote down everything I needed. 1 hour later the touchscreen died. What an eye opener. Now I take a paper backup, but with “the cloud” it’s not such a big deal… Until 2FA becomes mandatory, then I need the phone again!

        1. Don’t take your personal phone when travelling internationally. Remember that foreign border guards can and do take and search/clone phones from visitors, just looking for a reason to detain them. If the border guards don’t ruin your trip, international roaming fees will!

          If you really “need” a phone, carry the cheapest, dumbest phone you can, and buy a local SIM card from the first supermarket in your destination country. You’ll save some money and a lot of potential hassles.

      1. strongly-typoed massages are the best massages. They really work out the adhesions and stress the importance of well-formed style and elements. I think it’s totally valid.

    1. Totally off topic but I’m interested in which keycap model is used on the keyboard in the thumbnail of the video. It looks a bit like DSA, but it has more rounded corners. I searched for a name but could not find it.
      Does anyone know?

  2. I hate 2FA. I now need my phone to hand to do things I used to be able to do without it. Not a big deal most of the time, but sure annoying if you’re trying to do something on a strange computer because you don’t have your phone.

    1. Yeah cuz we all know for sure that “strange computers” don’t have hidden password loggers, you really can trust the USB flash drive you found on the sidewalk, it’s okay to use the same password everywhere, Mr Ponzi is offering an excellent investment rate, and Bitcoin is a safe investment.

      1. As Dan says, I’d not trust a strange computer, ever. I only trust most of my own grudgingly, as without an open-source verified BIOS, and with that pesky management engine peeking at everything in the system, there is only so far you can trust them.

        And 2FA isn’t some black magic that makes everything secure: it’s easy to have 2FA and be less secure than without it, even! Used correctly it can be very good, but a smartphone as the 2FA device is usually pretty damn pointless. All your passwords and other secrets are usually on that one device anyway (for most folks that really use their smartphone, it’s got it all), and the 2FA talks to the same damn box, meaning it really just makes life more annoying and probably less secure. Folks assume 2FA means it’s all good, so the bank/vendor systems or humans will not look at the out-of-character bill so hard, as it’s ‘authenticated’ already. And because it’s an added time delay and ‘secure’, folks are more likely to pick short passwords and reuse them, as 2FA magic means they don’t need to practice proper password methodology…

        1. In practice, 2FA is way more secure than only using a password. I have seen multiple security incidents (phishing, but also ransomware) that were possible because of the lack of 2FA. 2FA is not perfect and hackers are catching up, but the best security advice is still to enable 2FA if possible.
          Of the 2FA methods, FIDO2 (for instance via YubiKeys) is the best; it can also work via NFC on smartphones, so it could still be available when the smartphone is not.

          1. Didn’t say anything against it as a concept; as I said, it can be very good. But with the prevalence of smartphones holding all the user’s data, likely caching passwords, and then being the device that receives the SMS 2FA, can pick up the emailed 2FA code, etc., it’s all the eggs in one rather delicate basket. Most of the time you won’t even need to be able to unlock a phone to see the 2FA code sent, so no particularly high-tech skills are needed if the user has shoddy password practices.

            It’s the belief that 2FA is ‘way more secure’ than only a password that is a large part of the problem, as it makes people less concerned and sloppier with their own security. Many 2FA authentication methods add basically no extra security in some situations (like the hacked/stolen phone), and are quite likely not even going to be enough to protect from phishing or keystroke loggers, as there is always something in the chain that can be compromised entirely: the account recovery team, or that backup email account all the recovery info goes to, which doesn’t have any other methods of authentication so can’t even raise an eyebrow at an unusual IP address. If you were to fall victim to the keylogger or scam artist, odds are you wouldn’t be saved by 2FA; it’s not even going to make their job all that much harder in many cases. But the right 2FA setup absolutely would… Which is really the key point: for 2FA to reliably do you any good, it has to be deployed correctly, and as part of a whole collection of proper practices.

    1. The methods with text messages or push notifications will stop working, but the time-based codes work with no network access on the phone. Often, the push notification 2FA is just a convenience shortcut that saves you from entering the code, but the code still works as a backup. (And text messages are probably the most robust feature of the mobile network, and work where calls or data don’t.) A code by email is the worst. Steam, I’m looking at you.

  3. My first thought on the good faith security research thing was…
    “Uhh. Yeah? This has pretty much been the game the whole time for security research.”
    Then I actually READ it. And… Holy shit.
    There is one MAJOR change that this article totally glosses over.
    This policy change now applies to “…accessing a computer…”

    In case you weren’t aware, the previous policy was that you had free rein on your own hardware. And you could do pretty much anything with software. But it ALL needed to be on machines you control (or had explicit permission to hack). You had absolutely no protection if you strayed onto ‘private property’.

    This NEW policy flips that on its head.

    You now have (some) federal protection while accessing other people’s systems, as long as you can show that it was being done responsibly and you were doing actual security research.

    You can even directly implement mitigations or other changes in the name of plugging a hole. (Though, you should absolutely be letting the admin do it when possible.)

    And, after a careful reading, it doesn’t seem like the system owners get a say in this. They can set policies, but they can’t stop people who are doing actual “good faith” research.

    Of course, they can still drag you over the coals in state courts…
