This Week In Security: Discord, Chromium, And WordPress Forced Updates

[Masato Kinugawa] found a series of bugs that, when strung together, allowed remote code execution in the Discord desktop app. Discord’s desktop application is an Electron-powered app, meaning it’s a web page rendered in a bundled lightweight browser. Building your desktop apps on JavaScript certainly makes life easier for developers, but it also means that you inherit all the problems of running a browser and JS. There’s a joke in there about finally achieving full-stack JavaScript.

The big security problem with Electron is that a simple Cross Site Scripting (XSS) bug is suddenly running in the context of the desktop, instead of the browser. Yes, there is a sandboxing option, but that has to be manually enabled.

And that brings us to the first bug. Neither the sandbox nor the contextIsolation options were set, and so both defaulted to false. What do these settings allow an attacker to do? Because the front-end and back-end JavaScript run in the same context, it’s possible for an XSS attack to override JS functions. If those functions are then called by the back-end, they have full access to Node.js functions, including exec(), at which point the escape is complete.
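To make the failure mode concrete, here’s a minimal, hypothetical sketch — none of these names come from Discord’s actual code — of why sharing one JavaScript context is so dangerous: an XSS payload can simply redefine a function that the privileged side will later call.

```javascript
// Hypothetical model of why contextIsolation matters. The names here
// are invented for illustration; this just simulates two scripts
// sharing one un-isolated JavaScript context.

// A helper as the app itself might define it:
const appHelpers = {
  openUrl(url) {
    return `navigating to ${url}`; // imagine this reaches shell/Node APIs
  },
};

// An XSS payload running in the same context can overwrite that
// helper wholesale:
appHelpers.openUrl = () => {
  // In a real un-isolated Electron renderer, this could be e.g.
  // require('child_process').exec(...) — full Node access.
  return 'attacker-controlled code with Node privileges';
};

// When the app later calls its own helper, the attacker's version runs:
console.log(appHelpers.openUrl('https://example.com'));
```

With contextIsolation enabled, the page’s world and Electron’s internal world are separate, so this kind of function clobbering never reaches privileged code.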

Now that we know how to escape Electron’s web browser, what can we use for an XSS attack? The answer is automatic iframe embeds. For an example, just take a look at the exploit demo below. On the back-end of this site, all I have to do is paste in a YouTube link, and the WordPress editor does its magic, automatically embedding the video in an iframe. Discord does the same thing for a handful of different services, Sketchfab being one of them.

This brings us to vulnerability #2: Sketchfab embeds have an XSS vulnerability. A specially crafted Sketchfab file can run some JS whenever a user interacts with the embedded player, and that file can be shoehorned into Discord. We’re almost there, but one problem remains. This code is running in the context of an iframe, not the primary thread, so we still can’t override functions for a full escape. To actually get a full RCE, we need to trigger a navigation to a malicious URL in the primary pageview, not just the iframe. There’s already code to prevent an iframe from redirecting the top page, so this RCE is a bust, right?

Enter bug #3. If the top page and the iframe are on different domains, the code preventing navigation never fires. In this case, JavaScript running in an iframe can redirect the top page to a malicious site, which can then override core JS functions, leading to a full escape to RCE.
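Bug #3 boils down to a guard that only ever ran in the same-origin case. This hypothetical sketch (not Electron’s actual source — the function name and logic are illustrative only) models the flawed behavior:

```javascript
// Hypothetical model of the flawed navigation guard, invented for
// illustration: the check that blocks an iframe from redirecting the
// top page only fired when the frame and the top page shared an origin.
function iframeCanRedirectTop(frameOrigin, topOrigin) {
  if (frameOrigin === topOrigin) {
    // Same origin: the guard runs and the redirect is blocked.
    return false;
  }
  // Cross-origin: the guard never fires, so the redirect goes through.
  return true;
}

// The exploit case — a Sketchfab iframe inside the Discord app:
console.log(iframeCanRedirectTop('https://sketchfab.com', 'https://discord.com'));
```

Since the Sketchfab embed and the Discord app are always on different domains, the exploit path sails straight past the check.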

It’s a very clever chaining of vulnerabilities, from the Discord app, to an XSS in Sketchfab, to a bug within Electron itself. While this particular example required interacting with the embedded iframe, it’s quite possible that another vulnerable service has an XSS bug that doesn’t require interaction. In any case, if you use Discord on the desktop, make sure the app is up to date. And then, enjoy the demo of the attack, embedded below.

Chromium Freetype Overflow

Chromium 86 has a fix for a particularly nasty bug. Tracked as CVE-2020-15999, this is a bug in how FreeType fonts are rendered. Now that Microsoft has switched to Edgium (Chromium powered Edge), we get two-for-one deals on Chromium vulnerabilities. This bug is interesting because it’s reportedly being actively exploited already. Google has marked the bug public, so we can take a closer look at exactly what happened.

The problem is in the FreeType library, regarding how fonts are handled when they contain embedded PNGs. To put it simply, the PNG width and height are stored in the font as 32-bit values, but those values are truncated to 16 bits before the buffer is allocated. After this, the PNG is copied to the buffer, but using the non-truncated values. A check is then performed to make sure the copy didn’t overflow, but unhelpfully, this was checked *after* the copy had taken place. The bug report includes a test case, so feel free to go check your devices using that code. It’s not clear how long this bug has existed, but it’s possible it also affects Android’s System WebView, which is much slower to update.

Step-by-step of Chrome Exploit

[Man Yue Mo] recently published a detailed report on a Use-After-Free Chrome bug he discovered back in March, tracked as CVE-2020-6449. What makes this one worth looking at is the detailed account he gives us of the process of developing a working exploit from the bug. The whole account is a masterclass in abusing JavaScript to manipulate the state of the underlying engine. As a bonus, he gives us a link to the PoC exploit code to look at, too.

FBI Warning

The FBI, along with CISA and HHS, has issued a warning (PDF) about an ongoing redoubling of ransomware attacks against US hospitals and other healthcare providers. This attack is using the Trickbot malware and the Ryuk ransomware. They also note the use of DNS tunneling for data exfiltration, and specifically mention Point of Sale systems as a target.

The mitigation steps are particularly interesting in trying to read between the lines here. Before we look too deeply, I have to call out an outdated piece of advice: “Regularly change passwords”. This has been the bane of many users and administrators, and leads to weaker security, not stronger. With that out of the way, let’s look at the other recommendations.

A few recommendations are boilerplate: two-factor authentication, install security updates, have backups, etc. I was surprised to see the recommendation to allow local administration, in order to get things working again. What might be the most interesting is the recommendation to take a hard look at any RDP services that are running. Does this mean that some healthcare PoS system is running an out-of-date Windows, with a vulnerable RDP service open to the network by default, and it’s suddenly being targeted? Maybe. I’ve learned not to put too much stock in these advisories unless actual details are given, and this particular example is quite light on details.

Loginizer’s SQL Injection

The popular Loginizer WordPress plugin is intended to protect your site’s login page from attack. It can add two-factor authentication, CAPTCHAs for repeated login attempts, and even detect brute-force attempts and blacklist the offending IP. That last one is where the problem lies. Incoming login attempts are logged to a SQL database, and that logging wasn’t properly sanitized, nor were prepared statements used. Because of this, the login page was subject to a very simple SQL injection attack. The Lesson? Sanitize your inputs, and use prepared statements! The latest update fixes this, as well as a separate but similar security issue.
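The bug class itself is ancient and easy to demonstrate. Loginizer is a PHP/MySQL plugin; this JavaScript mock-up (table and column names invented for illustration) just shows the shape of the injection and of the parameterized fix:

```javascript
// Mock-up of the vulnerable pattern (not Loginizer's actual PHP code):
// building SQL by string concatenation lets an attacker's "username"
// break out of the string literal and append its own statement.
function naiveLogQuery(username, ip) {
  return `INSERT INTO login_log (user, ip) VALUES ('${username}', '${ip}')`;
}

const payload = "x', '0.0.0.0'); DROP TABLE login_log; --";
const query = naiveLogQuery(payload, '203.0.113.7');
console.log(query.includes('DROP TABLE')); // true — injected SQL rides along

// The fix: a prepared statement keeps data out of the SQL text entirely.
// (Placeholder syntax varies by driver; '?' is typical.)
const prepared = {
  sql: 'INSERT INTO login_log (user, ip) VALUES (?, ?)',
  params: [payload, '203.0.113.7'],
};
```

With the parameterized version, the payload is only ever treated as data — the database never parses it as SQL, no matter what characters it contains.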

What makes this bug novel is that WordPress found it a big enough problem to break the glass and push the big red button labeled “Force Update”. I didn’t know the folks at WordPress had a button that did that, but for particularly bad bugs like this one, it’s a useful capability. A few users complained that this update was installed even though they had auto-updates disabled. It’s a fine line to walk, but it seems like WordPress should make it clear in the settings that this capability exists, and include a way to opt out of forced updates like this one.

42 thoughts on “This Week In Security: Discord, Chromium, And WordPress Forced Updates”

  1. Damn nice collection this time.
    For once a forced update doesn’t seem so bad – in fact I would almost call it entirely good. A cripplingly bad bug that should never have been allowed to exist in the first place (SQL injection is a stupidly old and well-known vulnerability vector) is fixed. Unless they did the M$ thing of loading you up with even more M$ branded/sponsored bloat in the process of fixing this one bug (therefore introducing more bugs than they fixed, most likely, as Windows is like that), I think applauding this response is the way to go.

    Would be nice if they made it clear they could and would do such a thing, but in this case, much like the Canonical snap-install-silently-via-apt stuff, they have done something that on the whole works well for the user. So be annoyed enough to point out they should be clear about the what, when and where so the user is adequately informed, but at least it’s not doing anything but making their product work right. Which is something that, in my experience, most forced updates don’t do – it’s all about more ad revenue, or data harvesting, or just about anything that is good for them and does nothing for the user…

    1. the practice of wrapping something simple like a communications application such as discord in a shrinkwrap layer of adware and freemium features has got to stop. even my virus scanner does that.

      of course then you have the problem of coders living in cardboard boxes.

      1. Plenty of money to be made programming without the adware bloat. Heck, if you are M$ you just quietly and subtly push your cloud platforms in Windows, for example – all those coders paid for by getting punters to opt into the ‘easy/convenient/obvious’ choice, because most folks are too ignorant of, or uninterested in looking for, the alternatives that would be better for them – they just want something that sort of does what they (think they) need… There is also enough money in selling an OS and support for it, or selling services and support for them, that you don’t need to get greedy and throw in all the crud as well.

        Most of the big names of the gnu-linux world are paid for by supporting their brand of open-source goodness, while their costs might be lower with community patches being free etc its still a provable success story for programmers getting paid.

        The real problem is if you make programmers work nearer the hardware without all the abstractions and beat security into them – a global tinfoil shortage will ensue, can’t have enough hats… no we can’t, precious…

  2. This article is proof positive that humans are incapable of writing solid code. Buffer overruns and SQL injection have both been well understood for decades now. Our entire software development process is BROKEN, needs to be redone from the ground up when we STILL cannot avoid these basic problems. No other industry would accept such shoddy workmanship.

    1. It looks like we need a level of software complexity that existed in DOS or first (BASIC interpreters) or second generation home computers. Then, a single person can understand the entire software and hardware stack, and abominations that we have today would never get so out of hand.

      1. Everything today is built up in layers (of trust). The assumption is that the people who did the lower level knew what they were doing, and that if you mess up at a higher level, the lower levels will automagically detect and block your stupid foobar. But the people at the bottom look at it the other way, performance above all else and the people on the upper layers they know what they are at, they are not total idiots. They will take care of the big issues, and they will take the performance hit to sanitise everything. Everyone assumes that someone else is doing the right thing and everyone wants maximum performance.

        One person could know the entire software and hardware stack if they spent about 30-50 years, but guess what, it is cheaper to hire 20 really crap programmers who barely know how to wipe their ass, let alone how everything works, and they will bang out a ton of code (of questionable quality). And they will sit the API for their program on top of several dozen APIs for other libraries to reduce their workload, and not understand anything about the good, bad or ugly of the lower libraries that they are using.

        The real problem is that the level of complexity has started to exceed the lifetime of one human being to understand it all (and few are willing to pay for that level of knowledge). And to workaround this we have specialisations in compression, networking, graphics, DSP, storage, UI design, databases, encryption, scalability, input, drivers, video, USB, ….. But 99 out of 100 times the cheapest bidder is selected. And when everything eventually goes foobar, you are going to end up bringing back the people who know that system best, the very same people who designed it wrong in the first place, especially in governmental work. So designing an OK system can actually end up generating more money long term than good or great code. Currently there is no legal blow-back for crappy code.

          1. If a building or a bridge collapses a large number of people die, there are regulations and laws covering what is allowed and not allowed. And if you design an irrigation system that mostly grows bacteria, and poison/kill a large number of people, there are regulations and laws covering what is allowed and not allowed (in most countries). Software is the wild wild west. Are there any laws, regulations or even methods to inspect and identify poorly written killer code ?

          2. It would be intriguing to try to measure the complexity of a bridge, vs a program. The biggest difference is that the bridge is expected to fail when it’s attacked, whereas the program is expected to survive. On top of that, we only have 75 years of experience in writing programs (Maybe a bit more if you include Babbage’s work), and multiple thousands of years of experience in building bridges.

      2. when i write code i try to use as few 3rd party libraries as possible. needless to say it takes me forever to get anywhere. when my projects get large enough where i have trouble understanding my own code, they typically get abandoned.

        1. But with time, you start to build your own libraries, and then you are the best to know how and when to use them. So that your newer projects can benefit from code you wrote for the other projects.

          1. You clearly have never done this before, experienced developers say all the time that they do not remember anything about what they wrote last month or even last week. It is the nature of the work that one must forget about other stuff in order to hold the whole concept of the current work. This is why people put comments in their code but it is not enough and there is much wasted work

            In order for us to write better software we need to better understand the human brain.

          2. I’m glad I’m not one of the ‘experienced developers’ X refers to, as I have no trouble using (and understanding) code I wrote more than thirty years ago – the oldest library I wrote that I still use now was written in 1990, and I recently gave it to my son to use. He had no trouble following the source code (it’s c++), and using it.. I agree it’s only about 5K lines, but it’s been possible to build reasonable code for a very long time..
            The thing is, people don’t. And unis don’t teach it either – I was an industry rep on an advisory committee at a local uni a few years ago, and I just couldn’t get them to understand the concept…

            The mistakes in the article above were mainly newbie mistakes. And I think we would find it was probably newbies who made them, with managers who weren’t much better…

            And unfortunately there is no vaccine for human stupidity…

        2. While I like the idea of avoiding ’em – re-inventing the wheel that often is silly, so you really should look for some well documented (hopefully widely used as well) libraries that cover your common needs, study them for a bit and stick to those.
          If your code is getting that incomprehensible you probably need to break the projects down into smaller parts and make a good flow chart of what each part does and why/how it contributes to the end goal. Still only going to help so much, but if each part is relatively simple and has a nice name and initial comment describing what its supposed to do it really helps (Though only if there is a great master flowchart of the logic IMO – so you don’t spend forever digging around looking for whatever handles x and all that blocks its relationships)
          And also works towards what Rogfanther says – if each module is sensibly written many of them will be reusable, so you start to build up your own collection and won’t have to do the work again every time.

          X is definitely somewhat right though I would hope they all remember the gist of anything they had written that recently..

      3. The human brain is too primitive to hold the information in a complex program. You can’t even write a proper payroll program that way because no human can even hold the tax codes in their head.

        This is stupid anyway, we can team up to design bridges and buildings that do not fall over, but software developers are proven to fail to interact. Maybe programmers are defective as humans, they certainly fail to interact well.

        1. Even FOSS developers are generally working for profit – the pay to let them live. If they don’t turn up useful working code often enough the sponsors/company paying for their dev work might pull out.
          FOSS devs are probably under less hassle, and errors are more likely to be caught by its open nature but its still got some of the same pains on the money front.

          Humans can write excellent code; there are plenty of examples of it out there. What we are screwing up is making good code at every level of the ever-more-removed-from-the-bare-metal modern software stacks. Also too many libraries with overlapping functions that don’t always play nice, taking up lots of extra RAM every time another lib is called because dev B uses the similar function from LibC not LibA – so lots of waste there in more than one way.

          1. But you can’t even provide a single example of a “well written program” because they do not exist. Humans can’t write good code any more than they can throw a football 400 yards or high jump 30 feet. You are a human and yet you have no clue about your limits. Wow

          2. X, there are lots of examples of reliable code; some of it has been working since the dawn of silicon chips and is still going. The key is most of these well written, reliable programs had both real investment in them and clearly defined, simple goals, so they were not a cluster of kludges on somebody else’s kludges somewhere else, built on libs nobody has ever looked at or sanity checked that just barely work… You build it from well understood and properly crafted components and a program can be well written – just not something that happens much (if at all) in today’s software world…

            Like the hello world and blink-LED programs – unless your language/IDE is fucked, or you are somehow magically the most obtuse programmer ever, actively trying to make ’em buggy and fail often… it’s a 100% reliable program, leaving only hardware issues to worry about – as it’s basically impossible for it to fuck up (for a programmer), being such a simple premise.

        2. @Foldi-One
          Let’s say that there is perfect bug-free code to print “Hello World!” or blink a LED on and off. There is a lot of code behind both of those functions, unless you are bare-metal programming.

          And even if the code started out perfect, what will happen if someone moves the machines into an environment where either ionizing radiation randomly flips bits (at a higher rate than normal*) or the device is placed inside the near field (less than a few wavelengths) of an intense source of electromagnetic radiation. *Even machines in a normal environment still suffer Single Event Upsets (SEU) at ground level, at a rate of between ~1 and 20 upsets per bit per 10^13 hours from natural cosmic rays. So how does your perfect code behave on hardware that is not theoretically perfect?

          1. Exactly as I said the hardware becomes the issue once the code is ‘perfect’
            You are correct, there is lots of code behind those examples if it is not done bare-metal, but it’s all very basic, simple stuff from libs that should, after 20+ years for most languages, be stupendously well bug-checked, and, being that old, written in a time when craftsmanship was part of the programming ethos – the jam-everything-together-from-1000s-of-sources approach, with a little hot glue over the cracks here and there, while not understanding how any of the bits function (today’s modus operandi for most programmers, it seems), is what generates unreliable garbage.

      1. Nah, they’re perfectly capable. What’s at fault is the same as every other industry – an expectation that things don’t need to be built to last. Whether it’s fast-fashion clothes, low quality cars, “planned obsolescence” in tech… everyone expects it won’t last. So there’s no management incentive to make it work well. And because no one expects to be in the same job in 5 years, they expect to be gone before any personal bugs catch up with them

        Some FOSS is great (eg Apache), but so much is crap – buggy to the point it can’t even perform its basic function. And if it’s got a security hole, who cares. At least companies make a basic effort as they might get sued.

        Some FOSS was great, but is quickly heading in the direction of the worst ways to monetise it (WordPress, annoyingly).

    2. My current view of something like this is using a system description language, rather than a programming language as such. Basically just writing in a “pseudocode” language that doesn’t contain implementation details and letting another program handle generation of code to address the flaws that humans miss like overflows. Allow the program to automatically generate and run unit tests and that should allow a basic standard of competency for code where no one has the time/expertise/robotic perfectionism/foresight to make sure the system is secure.

      Any ideas?

    1. Yeah because every bug is always fixed perfectly and you don’t ever need to worry about edge cases that the developers missed! And of course the testing that failed to catch these bugs for decades has now been fixed! Yes indeed nothing to worry about!

    2. yea. developers of a certain game i play are trying to push discord on all its users for official communication with the developer and game community as a whole. im of the opinion that forums are the superior form of communication. at least now i have a link to use when some kid says “ok boomer” as a rebuttal.

      humans seem to flock to the worst possible option with regards to what software they use.

  3. Humans don’t even begin to understand that their brains are bad at writing code, their stupid little brains think that their code is just fine, they need to experience a tragic data loss or pay a ransom before they realize that all human written code is bogus crap. Until the space aliens arrive to help us, the only answer is test, test, test and test some more.

  4. A lot of it is folks learning just one language then using it places where it wouldn’t be the best choice just because they’re familiar with it.
    With the above they’re using JavaScript for the Discord app, ok so far, I see web based code as the future of a lot of user interface use cases.

    But then they’re allowing content to be pulled in from the internet
    with an electron app that’s not such a good idea unless you’re limiting the scope of what you’re pulling in or isolating the front end from the backend (which I think electron doesn’t do in JS)
    A better approach might be to use a statically typed managed language for the back end and JS for the front end. That way if someone breaches JS then it’s just the user interface and not full on access to the File System.
    The idea is instead of trying to write perfect code, use a language that has safeguards built into it’s design so that even if you write really crappy code it will be less of a problem.

    Statically typed means you can’t just put random junk into fields of classes, for example (this can be a good thing or bad thing depending on how it’s used; Python is dynamically typed, for example).
    JS is dynamic typed, Typescript static typed.
    Managed means the compiler/runtime is doing the job of managing the memory (C / C++ is typically unmanaged, which is why it suffers from so many buffer overrun/underrun or pointer-related bugs – then you get into using tools like valgrind).
    C# / Java / higher level languages might be an example of a “managed” language that has more of a safe wrapper on its library calls and auto-manages its memory allocations/deallocations (no need to worry about a pointer, or a pointer to a pointer, or a pointer to a pointer of pointers pointing to another pointer).

    JS is one of those languages that could have been designed a lot better if folks knew how it was going to be used, but that’s basically hindsight. It’s got to the point now where we’re actually compiling stuff into JS such as vue components, react, babel etc.
    I think eventually webassembly will make this redundant as a type of IL, although the only people using it at the moment seem to be bitcoin miners on torrent sites

  5. I do backups prior to updating, but this latest WP auto-update has broken quite a few things, that I am unable to fix.

    I had to delete several plugins and their data, just to get my site working. I also have to re-make a bunch of galleries on a lot of posts.

  6. Ah, Electron: all the Swiss-cheese security, complexity and resource usage of a web browser for every program and application under the sun, which the managers can only be arsed to pay an already underpaid web developer to create and maintain.

  7. Electron light-weight? You’re kidding, right? Obviously the phrase “light-weight” is officially meaningless.

    Lets see: Slack ~1GB, MatterMost ~1GB, Chrome(ium) 1GB, FireFox 1GB, Thunderbird 1GB. Their initial footprint may be less, but after a bit of use they bloat to this level or more. I figured this out when my station started swapping. A lot. I’m used to having at least half my RAM empty if I’m not running many VMs. Hmm…. I wonder how Electric Pencil does (XUL core). The one thing these all have in common is the browser engine. T-Bird being FF wrapped in a different UI and I/O. The first two being Electron based. And Electron being Chromium.

    Software: the world’s only competition to see who can do the least with the most! Go Gates disciples!

    Let the flames begin…

    1. Electron is very bloaty.. But then again, so is just about everything.. I really don’t know how they use all that space..

      I recently did a contract job for someone, and sent them the program to test. They replied that there must be something wrong, as it was 6 MB – where was the rest of it? No, I said, that’s it. Ok, they said, what are all the dependencies, i.e. .NET etc etc. Nope, I said, none of those either. But, they said, how are you doing everything it is doing? ‘Must be magic,’ I replied… Of course, in fact it was C++ using Win32 API calls directly, and no libraries… They were also surprised how fast it was…

      On that (speed) my legit copy of excel 2002 runs faster on my 6 year old main pc (4790K) than the current version of excel does on my sons high end pc of 6 months ago.. We have actually done timing tests as he couldn’t believe how snappy it was when he saw me using it..

      1. Wow! I’m not the only one left. Although 6MB sounds a bit big to me. ;-) And I could write volumes on size/speed comparisons. Like: writing “hello world” to the console in DOS is a 20ish *BYTE* program, when written in assembly. Probably wouldn’t be much more to it in Linux. To use GTK to put a calendar widget on the screen only takes about 40K in FPC, the truth being the actual code to command GTK is probably less than 1K, the rest is FPC overhead. To do the same with Lazarus, also written in FPC and using GTK, would take about 1MB last I looked, which was why I grabbed the GTK docs for the exorcise. It seemed excessive.

        Truth is law 2 bites! When M$ told Crenshaw that they “don’t try to optimize anymore. The hardware will catch up.” (condensed) Crenshaw bemoaned that it hasn’t yet (decades later). See for the interview, about half way down the page.

        @Ian 42, if you’re still watching here: please use the contact page on the site of that first link to drop me a line.
