This Week In Security: Psychic Paper, Spilled Salt, And Malicious Captchas

Apple recently patched a security problem, fixing the Psychic Paper 0-day. This was a frankly slightly embarrassing flaw that [Siguza] discovered in how iOS processes XML data in an application’s code signature, and it allowed him to claim any entitlement on the iOS system, including running outside a sandbox.

Entitlements on iOS are a set of permissions that an application can request. They range from the sandbox escape mentioned above to platform-application, which tells the system that this is an official Apple application. As one would expect, Apple controls entitlements with a firm grip, and only allows certain entitlements on apps hosted on their official store. Even developer-signed apps are extremely limited, with only two entitlements allowed.

This system works via an XML property list that is part of the signed application. XML is a relative of HTML, but with a stricter set of rules. What [Siguza] discovered is that iOS contains four different XML parsers, and they deal with malformed XML slightly differently. The kicker is that one of those parsers performs the security check, while a different parser is used for the actual permission implementation. Could this mismatch hide a vulnerability? Of course it could.

Using a pair of illegal comment tags confuses the two libraries in different ways. The library responsible for the security check sees the entire block as a comment, and therefore never examines the entitlements. The code that actually implements the entitlements sees the malformed comment tags as self-contained, and evaluates the payload.
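The actual byte sequences [Siguza] used are more subtle, but the general mismatch can be sketched with two hypothetical toy comment-strippers that disagree about whether `<!-->` opens a comment or is a complete, empty one. The document and both "parsers" here are illustrative only, not iOS code:

```python
import re

# A toy entitlements document wrapped in malformed comment markers.
# The real exploit used different sequences; this is only a sketch.
DOC = "<dict><!--> <key>platform-application</key> <!--> </dict>"

def strict_stripper(xml: str) -> str:
    # Stands in for the parser doing the security check: it treats the
    # "<!--" inside "<!-->" as opening a comment that runs to the next
    # "-->", so the entitlement between the markers is swallowed whole.
    return re.sub(r"<!--.*?-->", "", xml, flags=re.S)

def lenient_stripper(xml: str) -> str:
    # Stands in for the parser that grants entitlements: it treats
    # "<!-->" as a complete, empty comment, leaving the payload visible.
    return xml.replace("<!-->", "")

# The checker sees no entitlements at all, while the enforcer sees
# platform-application and happily grants it.
```

Run both strippers over `DOC` and the checker's output contains no entitlement key, while the enforcer's output does; that disagreement is the whole vulnerability.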

This clever hack was fixed in the iOS 13.5 beta, by adding a check that compares the outputs of the two parser libraries.

Please Pass the Salt

Several high profile projects were compromised recently as a result of a vulnerability in Salt, a server management application. Salt uses ZeroMQ, a standardized message protocol, to communicate with the controlled machines. Two ZeroMQ instances are used: a publish server where command messages are available, and a request server where the results are sent back. The request server exposed a pair of functions to unauthenticated clients, _prep_auth_info and _send_pub, that were intended to be private. _send_pub allows any client to publish command messages, and _prep_auth_info returns the shared credentials used for client authentication.

An unrelated problem, a directory traversal bug, allows an attacker to read arbitrary files on the filesystem. The path handled by the get_token() method isn’t properly sanitized: a .. in the path isn’t stripped out, so an attacker can specify any file to attempt to read. The only caveat is that the file has to be parsable by the function underlying get_token().
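The general class of fix is straightforward: resolve the requested path and verify that it still lives under the intended directory. A minimal sketch in Python follows; the directory name is hypothetical, not Salt's actual layout:

```python
import os

# Hypothetical token directory; Salt's real on-disk layout may differ.
TOKEN_DIR = "/var/cache/salt/master/tokens"

def is_safe_token_path(token: str, base: str = TOKEN_DIR) -> bool:
    # Join, normalize, then verify the result is still under base.
    # Joining without this check is exactly what lets a token name
    # like "../../../../etc/passwd" walk out of the directory.
    candidate = os.path.normpath(os.path.join(base, token))
    return os.path.commonpath([base, candidate]) == base
```

A well-formed token name passes the check, while a traversal attempt (or an absolute path, which os.path.join silently honors) is rejected.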

On to the list of compromised: Ghost, DigiCert, and Lineage.

The update fixing the vulnerability was published on April 29th, and the write-up on the 30th. Two days later, on May 2nd, the Lineage infrastructure was compromised. I’ve heard from a Lineage dev that rather than try to clean up the affected servers, the decision was made to wipe those machines and rebuild. Because it can be so difficult to be certain that all traces of a compromise have been removed, starting over with a clean slate is a good approach.

Android Woes

Google has published the list of fixes for the Android May security update. The most serious flaw appears to be CVE-2020-0103, a system level remote code execution vulnerability, affecting Android 9 and 10. The details haven’t been published yet, but this appears to be a potentially serious problem.

On the vendor side, Samsung is rolling out a fix for a vulnerability found by [Mateusz Jurczyk] of Project Zero. As part of their customization of the Android OS, Samsung added support for the QM/QG image format. I’ll follow the disclosure in referring to this format as QMG.

It seems that Samsung’s QMG support was written specifically for user interface elements, with an emphasis on small file sizes and speed, and essentially no thought given to security. [Jurczyk] walks us through his process of getting AFL (American Fuzzy Lop, a code fuzzing utility) running on the QMG library. He found a mind-boggling 5218 unique crashes.
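The bug class involved is easy to reproduce in miniature. The following is not Samsung's code or AFL itself, just a hypothetical decoder with a header-trusting bug, and a bare-bones mutational fuzzer in the same spirit:

```python
import random

def toy_decoder(data: bytes) -> int:
    # Hypothetical stand-in for an image decoder: it trusts a length
    # field in the header, the classic bug class that fuzzers surface.
    if len(data) < 2:
        raise ValueError("truncated header")
    claimed_len = data[0]
    payload = data[1:]
    if claimed_len > len(payload):
        # In C this would be an out-of-bounds read; here we just raise.
        raise IndexError("out-of-bounds read")
    return sum(payload[:claimed_len])

def fuzz(seed_input: bytes, rounds: int = 10000) -> set:
    # Bare-bones mutational fuzzing: flip one random byte of a valid
    # seed per round and collect the distinct crash types observed.
    rng = random.Random(0)  # fixed seed for reproducibility
    crashes = set()
    for _ in range(rounds):
        data = bytearray(seed_input)
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
        try:
            toy_decoder(bytes(data))
        except Exception as exc:
            crashes.add(type(exc).__name__)
    return crashes
```

Even this crude loop finds the length-field bug within a few thousand rounds; AFL adds coverage feedback and smarter mutation on top of the same idea, which is how it can rack up thousands of unique crashes in a fragile codec.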

As demonstrated below, [Jurczyk] put together a proof-of-concept attack that abuses the Samsung SMS app to obtain a reverse shell on the target device. Most of the attack’s effort goes into defeating ASLR. Because modern SMS applications support delivery confirmation, it’s possible to probe a device’s memory by sending messages with malicious QMG attachments: if a confirmation isn’t received, the messaging app must have crashed, which leaks information about the memory layout.

Large Scale Snooping, or Nothing To See?

Forbes covered a huge privacy violation by Xiaomi. Their devices, and even their browsers on the Play Store, send far too much identifiable data back to Xiaomi’s analytics servers, even in incognito mode.

Except, according to Xiaomi, that’s an entirely unwarranted characterization. They value user security and privacy above all, and their data collection policies are reasonable and secure. They have made changes in how they collect data during incognito mode.

So which is the truth? Probably somewhere in between. It’s likely that more data was collected than was really needed, but Xiaomi was likely not acting maliciously, either. They have already made some positive changes to their data collection practices, like turning it off for incognito mode. It remains to be seen whether those changes will be sufficient to allay the reasonable concerns raised in the article.

Another Ransomware Closes

The actors behind the Shade ransomware have turned over a new leaf, and released the decryption codes for all the victims of their ransomware. They made a surprisingly apologetic statement as part of the release: “We apologize to all the victims of the trojan and hope that the keys we published will help them to recover their data.” It’s unclear what led to this disclosure, but it’s not the first time something similar has happened.

reCaptcha in Phishing

A new technique to watch out for is the use of reCaptcha to protect phishing attacks from detection. As companies are using more automated phishing detection, maybe this is the inevitable next step. Rather than sending a user directly to a fake form, they are sent first to a captcha, making the attack even more believable.

Favicon Attacks

A fake icon hosting service pulled off a particularly sneaky e-skimming attack, by embedding their code in the favicons they were hosting. Researchers at Malwarebytes discovered the attack as a result of a known-suspicious IP address hosting the icons.

At first, their research hit a dead end, as the favicon seemed entirely benign. It turns out that when the loaded page contains the checkout form, the normal favicon is replaced by a script that overwrites parts of the page, stealing credit card info when the purchase is completed.

If your first response is “Why in the world is JavaScript running when it’s being loaded as an image for the Favicon?”, then you’re not alone. It’s a good question, but the answer is likely SVG. Browsers need to support SVG favicons, which means parsing text files. It would be better if JavaScript and HTML were disallowed in that context. Now that this research has been published, we may see browsers implementing extra protections to prevent this sort of attack.
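To see the SVG angle concretely: an SVG file is just XML text, and the format legally allows a script element alongside the drawing commands. The favicon below and the server-side check are both hypothetical illustrations, not the actual skimmer Malwarebytes found:

```python
import xml.etree.ElementTree as ET

# A hypothetical malicious favicon: perfectly valid SVG that also
# smuggles a script element next to its drawing commands.
MALICIOUS_FAVICON = """<svg xmlns="http://www.w3.org/2000/svg">
  <circle cx="8" cy="8" r="8" fill="red"/>
  <script>/* skimmer payload would go here */</script>
</svg>"""

def contains_script(svg_text: str) -> bool:
    # Minimal server-side check: parse the XML and look for any
    # script element, ignoring the XML namespace on the tag name.
    root = ET.fromstring(svg_text)
    return any(el.tag.rpartition("}")[2] == "script" for el in root.iter())
```

Whether a browser actually executes such a script depends on how the SVG is loaded, but an icon-hosting service could run a check like this to refuse to host script-bearing images in the first place.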

10 thoughts on “This Week In Security: Psychic Paper, Spilled Salt, And Malicious Captchas”

  1. Ha! Everyone else’s software has to work within “permissions” from the operating system. Apple has “Entitlements”!

    You couldn’t write a better joke than that if you wanted to, so I’ll leave it there.

  2. So, just to be clear about your opinion on this John. You DON’T believe that Xiaomi was intentionally collecting that data?
    The same Xiaomi that has shown its views on privacy to be aligned with the mainstream Chinese opinion that ranges from ‘It’s silly’ to ‘It’s only a concern if you are a criminal’?
    The SAME Xiaomi that has shown repeatedly that ‘foreign’ ethical concerns are absolutely NOT ‘concerns’ for them?
    The same Xiaomi that has shown itself to be so profit hungry that it will move production to a new area so they can keep using the same processes, rather than comply with new environmental regulations, just to keep costs down?

    THAT Xiaomi?

    I can’t say with certainty that they intentionally collected that data. But should we REALLY be giving a company with their record the benefit of the doubt on this? Or ANY purely profit centered company with a shady history for that matter?

    I hope they get absolutely SLAMMED with lawsuits over this. Even if it wasn’t intentional, it is FAR too big a fuck-up for them to walk away with nothing but a little embarrassment.

    1. To be clear, I’m not sure. I presented the two different accounts, and then took a guess. I don’t think they were breaking into people’s accounts and stealing their Bitcoin. But it sounds like they were in the wrong, particularly in regards to incognito mode.

      Essentially every free service you use collects data on you. Xiaomi is obviously collecting data. The two big questions: First, do they actually anonymize it effectively. And second, are they collecting too much data.

      I don’t think there was a company memo where the CEO told the engineers to spy on the gullible Americans (though that’s not entirely outside the realm of possibility, see Crypto AG). I think it’s more likely that they just got data-greedy for more statistics to feed into their marketing machinery, and there wasn’t a privacy guy there to tell them it was too much.

      Hope that clarifies.
