This Week In Security: In Mudge We Trust, Don’t Trust That App Browser, And Firefox At Pwn2Own

There’s yet another brouhaha forming over Twitter, but this time around it’s a security researcher making noise instead of an eccentric billionaire. [Peiter Zatko] worked as Twitter’s security chief for just over a year, from November 2020 through January 2022. You may know Zatko better as [Mudge], a renowned security researcher who literally wrote the book on buffer overflows. He was a member of L0pht Heavy Industries, worked at DARPA and Google, and was brought on at Twitter in response to the July 2020 hack that saw many brand accounts running Bitcoin scams.

[Mudge] was terminated at Twitter in January 2022, and it seems he immediately started putting together a whistleblower complaint. You can access his complaint packet on archive.org, with whistleblower_disclosure.pdf (PDF, and mirror) being the primary document. There are some interesting tidbits in here, like the real answer to how many spam bots are on Twitter: “We don’t really know.” The very public claim that “…<5% of reported mDAU for the quarter are spam accounts” is a bit of a handwave, as the monetizable Daily Active Users count is essentially defined as active accounts that are not bots. Perhaps Mr. Musk has a more legitimate complaint than was previously thought.

Over 30% of Twitter’s employee computers had security updates disabled on some level, and about half of Twitter staff had access to production systems. At one point, [Mudge] felt the need to “seal the production environment”, fearing vandalism from an internal engineer in response to political upheaval. To his astonishment, there was nothing in place to prevent, or even track, that sort of attack. Another worrying discovery was the lack of a disaster plan for a multi-node failure. The details are redacted, but some number of data centers going down at the same time, even gracefully, would cripple Twitter’s infrastructure for weeks or longer, with the note that bootstrapping back to service would be a challenge of unknown difficulty. Interestingly, this exact scenario almost took Twitter down permanently in Spring 2021. I’ll note here that this also implies that Twitter could feasibly suffer from a split-brain scenario if network connectivity between its data centers were interrupted. This is a failure mode in high-availability systems where multiple nodes run in master mode at once, and the shared data set diverges.

There was some odd pushback, like the request that [Mudge] give his initial overview of problems orally, and that he not send the written report to board members. It’s never a good sign when you get a request not to put something in writing. Later, [Mudge] brought an outside firm in to prepare reports on how well Twitter was doing combating the spam and bot problem. Twitter’s executives hired a law firm, routing the reports to the firm first, where they were scrubbed of the most embarrassing details, and only then delivered to [Mudge]. Astounding.

An internal-facing system for Twitter engineers was seeing nearly 3,000 daily failed logins. No one knew why, and it was never addressed. Employee workstations did not have functioning backups, and the response from executives was that at least this gave them a reasonable excuse to not comply with official requests for records. As of earlier this year, Twitter had an estimated 10,000 services that may have Log4j vulnerabilities, and no workable plan to address the possible vulnerabilities. If you wanted a bug bounty from Twitter, this seems like a great place to start.

Things didn’t get better. [Mudge] tried to blow the whistle internally, on what he considered to be a fraudulent report presented December 16th to Twitter’s board. This effort percolated through Twitter’s internal structure for a month, and on January 18th he stated that he had an accurate report (PDF, and PDF mirror) nearly ready to present to the board. In what looks like a desperate attempt to prevent that report from being delivered, Twitter fired [Mudge] the next day, January 19th.

My initial response is well summed up by Martin McKeay, ironically on Twitter.

And when you’re looking for a well-reasoned dissent, Robert Graham is usually a good source. He doesn’t disappoint on this topic, making the case that while many of [Mudge]’s concerns are valid, the overall package is overblown. He points to several sections in the complaint that are statements of opinion instead of statements of fact, stating, “It makes him look like a Holy Crusader fighting for a cause, rather than a neutral party delivering his professional expertise.”

In-App Browser

iOS and Android apps have picked up a new habit — opening links in the app itself instead of handing them off to your primary browser. You may not have given the in-app browser much thought, but [Felix Krause] sure has. See, when an app runs its own browser, that app is the boss now. Developer wants to inject some CSS or JS on a site, or every site? No problem. And here, HTTPS won’t save you. But surely none of the popular apps would take advantage of this, right?
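To make that concrete, here is a minimal sketch of the kind of script an app could evaluate in every page its in-app browser loads. This is purely illustrative, not code from any real app, and recordEvent() is a hypothetical stand-in for whatever native bridge the app exposes. Note that it all runs inside the page, after TLS decryption, which is why HTTPS is no help:

    // Hypothetical injected script; recordEvent() is an assumed app-side bridge.
    document.addEventListener('keydown', function (e) {
      recordEvent({ type: 'key', key: e.key }); // every keystroke, password fields included
    });
    document.addEventListener('touchend', function (e) {
      var t = e.changedTouches[0];
      recordEvent({ type: 'tap', x: t.clientX, y: t.clientY }); // every screen tap
    });
    // And the page itself can be restyled or rewritten at will:
    var style = document.createElement('style');
    style.textContent = 'header { display: none; }';
    document.head.appendChild(style);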

That brings us to inappbrowser.com. Send that link to yourself, and open it in the app. It searches for odd or known-dangerous JS objects, and lists everything it finds. Keep in mind that not all injected code is malicious; it might just be theming a page, or adding functionality to already existing content. There are a few apps that seem particularly troublesome, like Instagram, Facebook, and TikTok. TikTok, to no one’s surprise, captures every screen tap and keyboard press while using the in-app browser. And while most other in-app browsers have a button to open in your primary browser, TikTok leaves that one out, making it even harder to escape their garden. These issues were specifically observed on iOS, but it’s very likely that similar problems exist in Android apps.
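[Krause] hasn’t published every check the site performs, but the core idea is simple to sketch. One plausible self-check, assuming you control the page being loaded (this is not necessarily what inappbrowser.com actually runs):

    // Serve a page that ships exactly one known <script>, then report
    // any script elements that appear anyway; those had to be injected.
    window.addEventListener('load', function () {
      document.querySelectorAll('script').forEach(function (s) {
        if (s.id !== 'the-one-script-we-shipped') { // marker id on our own script
          console.log('injected:', s.src || s.textContent.slice(0, 100));
        }
      });
    });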

Firefox At Pwn2Own

This story comes to us by way of Pwn2Own Vancouver 2022, where [Manfred Paul] demonstrated a novel attack on Mozilla’s Firefox browser. By chaining multiple prototype corruption vulnerabilities, the attack goes all the way from running JavaScript code on a website to arbitrary code execution on the host computer. It’s a single-click exploit, and a really nasty one, but thankfully it was demonstrated at Pwn2Own and fixed in Firefox 100.0.2. Zero Day Initiative has the write-up for us, and part one details the first exploit: jumping from JavaScript execution to arbitrary code execution, but still inside the renderer sandbox.

The starting point here is to understand that Firefox implements some of its features entirely in JavaScript, and all the JavaScript that runs inside the renderer sandbox is running in the same context. One of those features is top-level await, a way to load a JavaScript module in the background. If the loaded module overrides the array prototype in a particular way, that override gets called in the feature code. Once called, a handle to the module gets leaked back to the untrusted code. This handle isn’t intended to be exposed, and calls to its functions can be unsafe. This allows breaking out of the JavaScript engine and writing values to arbitrary memory locations — albeit all still inside the browser’s sandbox. Another clever trick is used to actually execute arbitrary code. Floating-point constants are stored inline in WebAssembly methods, and those are executable sections of memory. So snippets of code to run can be encoded as floating-point numbers, and the return pointer overwritten to jump into the code. This isn’t practical for a larger payload, so the technique is used to mark a larger ArrayBuffer object as executable, and then jump to that, which provides arbitrary execution of much larger bits of code.
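The write-up has the real details; the sketch below shows only the general pattern, with invented names. The point is that a getter planted on a shared prototype executes inside the privileged feature’s code, and whatever it can reach there leaks back out:

    // Pattern sketch only, with invented names; not the Pwn2Own exploit.
    var leakedHandle;
    Object.defineProperty(Array.prototype, 'trap', { // 'trap' is a made-up hook
      configurable: true,
      get: function () {
        leakedHandle = this; // 'this' is whatever internal object touched .trap
        return undefined;
      }
    });
    // Now trigger the feature (module loading via top-level await). When
    // the feature's own JS touches .trap, our getter fires inside that
    // privileged code, and leakedHandle captures an object that was
    // never meant to be exposed.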

Part two of the post is all about how to escape the sandbox and get code running on the system. And in this story, it’s JavaScript Prototype Pollution all the way down. Even outside the sandbox, various bits of the Firefox browser are implemented in JS, and there are several interfaces through which sandboxed code talks to the parent process. So the attack code fiddles with the Object prototype, and then needs to get the tab.setAttribute() function to run, where the manipulated prototype will inject an attribute. The easiest way to pull this off is to crash the tab in question, and since we have arbitrary memory access, it’s trivial. The attribute that gets added is how to handle a title text overflow, and the action to take is to set the tab’s sandbox level to 0. Sneaky.
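As before, here is just the shape of the trick, with invented names; the ZDI post has the genuine chain. Prototype pollution works because any object that doesn’t define a property itself appears to inherit the attacker’s version:

    // Pattern sketch with invented names, not the actual sandbox escape.
    Object.prototype.polluted = 'attacker-chosen value';
    var options = {};            // some object the privileged JS builds normally
    for (var name in options) {  // for..in also walks inherited properties
      console.log(name);         // prints 'polluted': it leaked in via the prototype
    }
    // The real chain gets the privileged tab code to do the equivalent of
    // tab.setAttribute(name, options[name]) in such a loop, so the polluted
    // key becomes a real attribute: a title-overflow handler whose action
    // sets the tab's sandbox level to 0.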

Google Has Entered the DDoS Fight

Google has quite the feather in their DDoS mitigation cap, having successfully stopped the largest HTTPS DDoS attack on record, handling 46 million requests per second against one of their customers. The idea of this attack is that it’s computationally expensive to perform an HTTPS handshake, and if enough new connections arrive at the same time, the servers backing the service can’t keep up. Keep the pressure on, and the service is totally inaccessible.

The real challenge in trying to stop this particular attack is to discern the malicious traffic from legitimate users. This customer had already been using Google’s Adaptive Protection, so a fingerprint of legitimate traffic had been gathered. The exact details of how malicious traffic was matched haven’t been published, but one could guess that bursts of new connections from a single IP address, and lists of known-compromised IPs, were part of the solution. Regardless, it’s an impressive feat.

What Could Go Wrong? This.

And finally, pointed out by user [Hecker] on the Hackaday Discord server, we see the ugly unintended side effect of scanning users’ photos for illegal material. It seems that the mere act of backing up photos to Google Photos triggers such a scan, and this turned into a nightmare scenario for one user. A picture taken for a medical diagnosis led to account termination and a police inquiry, though the detective assigned to the case determined that the whole situation was ridiculous and no crime had occurred. Privacy really does matter, especially for the innocent.

LastPass

And some very last-minute news: LastPass has published a notice that they detected unauthorized access to their source code via a compromised developer account. It sounds like they are doing a thorough investigation of the incident. LastPass is designed so that all the secrets are stored on the user’s computer, so it’s unlikely that any user data has been compromised. Yet. The one attack LastPass could be vulnerable to is the introduction of malicious code into the browser plugin and mobile apps, and a compromised dev account is in some sense the worst case, so it’s good that they caught it. I’m quite confident their experts are combing through their development environment and codebase with fine-toothed combs at this very moment. More next week if there are updates to be had.

19 thoughts on “This Week In Security: In Mudge We Trust, Don’t Trust That App Browser, And Firefox At Pwn2Own”

  1. “fixed in Firefox 100.0.2”

    It would be instructive if exploit hunters could also tell us the earliest version where their exploit would work.

    That would expose the futility of the upgrade treadmill.

    1. Sometimes the earliest vulnerable version is spelled out. Oftentimes it’s not trivial to determine. The initial vulnerability may be there, but enough internal stuff is different that the chain only works on the one version. It is a very involved process to develop a full exploit chain like this, so testing multiple product versions would be a huge task.

    2. How does knowing how long a bug has been around ‘expose the futility of the upgrade treadmill’?

      It is not like every bug is created by correcting previous ones… Many, probably even most, bugs are introduced with new features folks actually need/want for the product to stay useful (potentially in the hardware as well as the software) – in the case of web browsing, a browser is of no use to most folks these days if it can’t handle Widevine media DRM, for instance. And some bug chains have nothing to do with the program at the start of the chain at all; it’s working in spec, but the OS layer beneath has a flaw that may or may not have been there for ages.

      If you really, really must know how old a bug is, or whether a specific version is vulnerable, do the work yourself! Once the exploit is well documented it’s not hard to try it, or even variations on the theme, if you care enough.

      1. Strange, but most people never wanted the ability to play DRM’d stuff. We were quite happy with non-DRM’d stuff.

        Many of these upgrades are only necessary because sites have adapted to use the new tech. As an example, I can save myself maybe half a second of typing by using a JS lambda instead of an anonymous function. And so I do. And now my code doesn’t run on older browsers. What’s improved? Nothing, really. Even over a year it won’t save me much.
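        Concretely, the trade-off is just this:

          // Same behavior; the only difference is syntax support.
          var f1 = function (x) { return x + 1; }; // runs in old browsers too
          var f2 = (x) => x + 1;                   // ES2015+; older browsers fail to parse it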

          1. It would be nice if there wasn’t DRM, but the world we live in does rather require something that looks a lot like DRM – and ultimately it’s not what the individual user wants that is always the reason for change; sometimes it’s what is good for the companies involved, or what the industry they are creating the tech for seems to require.

           And it’s not just DRM. Even sticking to the world of media, you need new codecs to efficiently deal with larger, more detailed video, to stream high-quality audio over a network in a less data-intensive fashion, or even just to fit music onto your portable media player of choice – lots and lots of things change for good reasons that do nothing but make the technology better.

           Not all changes are groundbreaking, or backwards compatible. Both of which are usually, in reality, a good thing, at least where bugs are concerned. Every time the wheel gets reinvented new bugs will be created, and maintaining strict backwards compatibility makes certain you can’t actually remove many bugs.

  2. What was meant by: “So snippets of code to run can be encoded as floating-point numbers, and the return pointer overwritten to jump into the code.”? I’m primarily focused on “encoded as floating-point numbers”. I’m guessing you mean that they aren’t really floating-point numbers, but simply code placed in an area designated for floating-point numbers, right? It’s like when the linker screws up and places stuff in the wrong section or at the wrong address — which can result in bad things happening and makes it really hard to diagnose, because you’re (desperately) assuming that your tools are “perfect”. Stuff like that happened too often in the “good old days”.

    That also tells me that JavaScript doesn’t do any verification of “sections” or data types? But then, I expect that something as densely defined as floating-point numbers has only a few values that are invalid as floating-point numbers, and that may make it harder to detect that kind of subterfuge.

    Why can’t we all just be nice? Seriously!

    1. I think they took their machine code instructions, and converted that into floating point values, such that when the JavaScript engine loaded their JS code, the floating point values in memory were the same bytes as the instructions they wanted to run. Then jmp into the data section of the JS, and it also works as executable.
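       Roughly like this (the bytes below are arbitrary placeholders, not a real payload):

         // Pack 8 bytes of machine code into the bit pattern of one
         // IEEE-754 double. Any 64-bit pattern is a valid double to JS.
         var buf = new ArrayBuffer(8);
         new Uint8Array(buf).set([0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90]);
         var asDouble = new Float64Array(buf)[0];
         // If a JIT-compiled WebAssembly function embeds asDouble as an
         // inline constant, those exact bytes sit in executable memory;
         // jump into the constant at the right offset and the CPU runs
         // them as instructions. A short jump at the end of each
         // constant chains several together for longer snippets.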

  3. I don’t know how valid the “that’s just your opinion” argument is when debunking claims like this.

    For one thing, while it may be an opinion, it’s the opinion of an expert in the field. For another, Graham seems to be objecting to the tone of the documents more than the substance. I don’t know that “he’s coming off as a cowboy” is a valid debunk either.

    Some statements are likely opinions, and Graham points out that he has different opinions, but without references there’s no way to determine who is right. For example, Mudge points out that servers are not updated with the most current security patch/level, and Graham points out that that’s common and not considered a problem.

    Is it common? Is it generally not considered a problem? Who knows?

    I would think that the rational course of action is to take Mudge’s claims and go through them one by one, prioritize them by urgency, and determine yes/no whether they need fixing. For example, someone should determine whether having all the servers up to date is important, decide the correct course of action, and get management buy-in on the decision.

    Do this explicitly for each complaint, have a record of what you decided and *why* it was decided that way, and occasionally revisit and update that document.

    Then again, I don’t own a company that’s been telling the SEC for years that we have less than 5% bot users.

  4. “[Peiter Zatko] worked as Twitter’s security chief”

    Something doesn’t add up here. Not that I don’t believe Peiter’s claims (which are probably true for other big tech companies as well). But if he was Twitter’s security chief, why wasn’t he having his staff investigate the incidents he described? (3,000 daily failed logins, 10,000 services with possible Log4j vulns, etc.)

    As for the bot problem, was he being actively told not to work on solving it?
    Seems a little like a good engineer may have been in over his head in a leadership position, and ended up focusing on “the wrong things”(TM) (i.e. contradicting other execs).

    1. It’s not unusual to hire someone with a good reputation to cover up faulty things instead of fixing them, especially if those things are part of the business model or expensive to fix.

    2. “As for the bot problem, was he being actively told not to work on solving it?”

      Follow the money. For years Twitter has filed SEC documents stating <5% bots. Twitter’s advertising revenue is directly dependent on that <5% number. What is the outcome of solving the bot problem? Either A) it shows <5% bots, or B) it shows >5% bots. Either nothing happens, or Twitter gets sued for millions in damages and the stock value plummets. Zero upside with a hefty downside risk. Why risk it?

      1. For the bot problem, I was considering “solving it” to mean making sure that the <5% bots figure was factual, rather than just measuring it (although accurate measurement is an important step towards solving the problem).

        1. I don’t think bots on a platform are really a bad thing, as they can be useful…
          BUT it’s important that bots are known to be bots – which is no longer trivial now that AI chatterboxes are starting to get good and some supposedly real humans are nonsensical on such platforms – and that you are not falsely advertising your ad delivery system as reaching far more people than it does!

    1. Which is often par for the course with this sort of stuff, it seems – so many stories from the trenches say that far too many companies, but especially the giants, are run entirely by and for the benefit of the marketing and business-school graduates who have no clue at all about the product and its production, and so sell the impossible. All while the engineers and designers are saying we need to do x, y, z, and are being instructed to ONLY ‘fix’ the one bad-publicity symptom of the flaws, and to do it yesterday if they want to keep their jobs. Under no circumstances are they to go digging around and creating costly changes that take more work but actually fix the problem for good…

  5. > TikTok, to no one’s surprise, captures every screen tap and keyboard press while using the inn-app browser.

    Is Inn-App the hotel manager’s answer to AirBNB?
