Apple recently patched a security problem, fixing the Psychic Paper 0-day. This was a frankly slightly embarrassing flaw that [Siguza] discovered in how iOS processed XML data in an application’s code signature, and it gave him access to any entitlement on the iOS system, including running outside a sandbox.
Entitlements on iOS are a set of permissions that an application can request. These entitlements range all the way up to platform-application, which tells the system that this is an official Apple application. As one would expect, Apple controls entitlements with a firm grip, and only allows certain entitlements on apps hosted on their official store. Even developer-signed apps are extremely limited, with only two entitlements allowed.
This system works via an XML property list (plist) that is part of the signed application. XML is a relative of HTML, but with a stricter set of rules. What [Siguza] discovered is that iOS contains four different XML parsers, and they each deal with malformed XML slightly differently. The kicker is that one of those parsers does the security check, while a different parser is used for the actual permission implementation. Could this mismatch contain a vulnerability? Of course it could.
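To get a feel for how two parsers can disagree about malformed comments, here is a small illustration using two of Python's own standard library parsers. This demonstrates the general class of bug, not Apple's code or [Siguza]'s actual payload:

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# A malformed comment: XML forbids "--" inside a comment,
# but HTML-style parsers just scan ahead for the closing "-->".
doc = '<a><!-- -- --><key/></a>'

# Parser 1: the strict expat-based XML parser rejects the document outright.
try:
    ET.fromstring(doc)
    xml_verdict = "parsed"
except ET.ParseError:
    xml_verdict = "rejected"

# Parser 2: the lenient HTML parser accepts it and sees the <key/> element.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)
    def handle_startendtag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(doc)

print(xml_verdict)      # "rejected"
print(collector.tags)   # ['a', 'key'] -- the payload element is visible
```

Two parsers, one document, two different answers about whether the key element exists. That's exactly the kind of mismatch that becomes exploitable when one parser gates the security check and a different one enforces the result.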
RIP my very first 0day and absolute best sandbox escape ever:
— Siguza (@s1guza) April 29, 2020
A pair of illegal comment tags confuses the two libraries in different ways. The library responsible for the security check sees the entire block as a comment, and therefore never examines the entitlements. The code that actually implements the entitlements sees the malformed comment tags as being self-contained, and evaluates the payload.
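Schematically, an entitlements plist abusing the trick looks something like this sketch. The exact malformed byte sequences that worked are detailed in [Siguza]'s write-up; this just shows the shape of the attack:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <!--->
    <key>platform-application</key>
    <true/>
    <!-- -->
</dict>
</plist>
```

The checking parser sees one comment running from the first <!-- to the final --> and finds an empty dict with no entitlements to object to. The enforcing parser treats each malformed tag as a self-contained comment, leaving platform-application in effect.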
This clever hack was fixed in the iOS 13.5 beta, by adding a check that compares the outputs of the two parser libraries.
Please Pass the Salt
Several high profile projects were compromised recently as a result of a vulnerability in Salt, a server management application. Salt uses ZeroMQ, a standardized message protocol, to communicate with the controlled machines. Two ZeroMQ instances are used: a publish server where command messages are made available, and a request server where the results are sent back. The request server exposed a pair of functions to unauthenticated clients that were intended to be private: _send_pub, which allows any client to publish command messages, and _prep_auth_info, which returns the shared credentials used for authentication.
An unrelated problem, a directory traversal bug, allows an attacker to read arbitrary files on the filesystem. The path handed to the get_token() method isn’t properly sanitized: the presence of .. in a path isn’t stripped out, so an attacker can specify any file to attempt to read. The only caveat is that the file has to be parsable by the underlying function.
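Traversal bugs like this are usually fixed by normalizing the attacker-supplied path and confirming it still falls inside the intended directory. Here's a minimal sketch of that check, with a hypothetical token directory; this is the generic defense, not Salt's actual patch:

```python
import os.path

# Hypothetical directory where auth tokens live -- not Salt's real layout.
TOKEN_ROOT = "/var/cache/salt/master/tokens"

def safe_token_path(token):
    """Resolve a client-supplied token name, rejecting directory traversal.

    A generic sketch of the missing check, not the actual Salt fix.
    """
    candidate = os.path.normpath(os.path.join(TOKEN_ROOT, token))
    # After normalization, the result must still live under TOKEN_ROOT.
    # This also rejects absolute paths, since join() discards the prefix
    # and commonpath() then falls outside the root.
    if os.path.commonpath([TOKEN_ROOT, candidate]) != TOKEN_ROOT:
        raise ValueError("directory traversal attempt: %r" % token)
    return candidate

print(safe_token_path("abc123"))  # a well-behaved token name resolves fine
try:
    safe_token_path("../../../../etc/passwd")
except ValueError as err:
    print("blocked:", err)
```

The key detail is checking the path after normalization: stripping .. from the raw string is easy to bypass with tricks like "....//", while comparing the fully resolved path against the root directory is not.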
Around 8PM PST on May 2nd, 2020 an attacker used a CVE in our saltstack master to gain access to our infrastructure.
We are able to verify that:
– Signing keys are unaffected.
– Builds are unaffected.
– Source code is unaffected.
See https://t.co/85fvp6Gj2h for more info.
— LineageOS (@LineageAndroid) May 3, 2020
The update fixing the vulnerability was published on April 29th, and the write-up on the 30th. Two days later, on May 2nd, the Lineage infrastructure was compromised. I’ve heard from a Lineage dev that rather than try to clean up the affected servers, the decision was made to wipe those machines and rebuild. Because it’s so difficult to be certain that every trace of a compromise has been removed, starting over with a clean slate is a good approach.
Google has published the list of fixes for the Android May security update. The most serious flaw appears to be CVE-2020-0103, a system-level remote code execution vulnerability affecting Android 9 and 10. The details haven’t been published yet, but a remotely exploitable system-level flaw is about as serious as it gets.
Today I'm happy to release new research I've been working on for a while: 0-click RCE via MMS in all modern Samsung phones (released 2015+), due to numerous bugs in a little-known custom "Qmage" image codec supported by Skia on Samsung devices. Demo: https://t.co/8KRIhy4Fpk
— j00ru//vx (@j00ru) May 6, 2020
On the vendor side, Samsung is rolling out a fix for a vulnerability found by [Mateusz Jurczyk] of Project Zero. As part of their customization of the Android OS, Samsung has added support for QM/QG image format. I’ll follow the disclosure in referring to this format as QMG.
It seems that Samsung’s QMG support was written specifically for user interface elements, with an emphasis on small file sizes and speed, and essentially no thought given to security. [Jurczyk] walks us through his process of getting AFL (American Fuzzy Lop, a code fuzzing utility) running on the QMG library. He found a mind-boggling 5218 unique crashes.
As demonstrated below, [Jurczyk] put together a proof-of-concept attack that abuses the Samsung SMS app in order to obtain a reverse shell on the target device. The vast majority of the attack is spent defeating ASLR. Because modern SMS applications support delivery confirmation, it’s possible to probe a device’s memory by sending messages with malicious QMG attachments. If a confirmation isn’t received, the message app must have crashed, leaking data about the memory layout.
Large Scale Snooping, or Nothing To See?
Forbes covered a huge privacy violation by Xiaomi. Their devices, and even their browser apps on the Play Store, send far too much identifiable data back to Xiaomi’s analytics servers, even in incognito mode.
Except, according to Xiaomi, that’s an entirely unwarranted characterization. They value user security and privacy above all, and their data collection policies are reasonable and secure. They have made changes in how they collect data during incognito mode.
So which is the truth? Probably somewhere in-between. It’s likely that more data was collected than was really needed, but Xiaomi was likely not acting maliciously, either. They have already made some positive changes in their data collection practices, like turning it off for incognito mode. It remains to be seen whether those changes will be sufficient to allay the reasonable concerns raised in the article.
Another Ransomware Closes
The actors behind the Shade ransomware have turned over a new leaf, and released the decryption codes for all the victims of their ransomware. They made a surprisingly apologetic statement as a part of the release: “We apologize to all the victims of the trojan and hope that the keys we published will help them to recover their data.” It’s unclear what led to this disclosure, but it’s not the first time something similar has happened.
reCaptcha in Phishing
A new technique to watch out for is the use of reCaptcha to protect phishing attacks from detection. As companies deploy more automated phishing detection, maybe this is the inevitable next step. Rather than being sent directly to a fake form, the user first has to complete a captcha, which also makes the attack more believable.
A fake icon hosting service was used to pull off a particularly sneaky e-skimming attack, with malicious code embedded in the favicons it hosted. Researchers at Malwarebytes discovered the attack as a result of a known-suspicious IP address hosting the icons.
At first, their research hit a dead end, as the favicon seemed entirely benign. It turns out that when the loaded page contains the checkout form, the normal favicon is replaced by a script that overwrites parts of the page, stealing credit card info when the purchase is completed.