This Week In Security: Nintendo Accounts, Pernicious Android Malware, And An iOS 0-day

A rash of Nintendo account compromises has made the news over the last week. Nintendo’s official response was that they were investigating, and they recommended that everyone enable two-factor authentication on their accounts.

[Dan Goodin] over at Ars Technica has a canny guess: The compromised accounts were each linked to an old Nintendo Network ID (NNID). This is essentially a legacy Nintendo account — one made in the Wii U and 3DS era. Since they’re linked, access via the NNID exposes the entire account. Resetting the primary account password doesn’t change the NNID credentials, but turning on two factor authentication does seem to close the loophole. There hasn’t yet been official confirmation that NNIDs are responsible, but it seems to fit the situation. It’s an interesting problem, where a legacy account can lead to further compromise.

Just Can’t Lose You: xHelper

xHelper, an Android malware, just won’t say goodbye. xHelper poses as a phone cleaner application, but once installed it stubbornly entrenches itself with the help of the Triada trojan. The process begins with rooting the phone and remounting /system as writable. Binaries are installed and startup scripts are tampered with, and then the mount command itself is compromised, preventing a user from following the same steps to remove the malware. Additionally, if the device has previously been rooted, the superuser binary is removed. This combination of techniques means the infection will survive a factory reset. The only way to remove xHelper is to flash a clean Android image, fully wiping /system in the process.
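
One of the steps described above is remounting the normally read-only /system partition as writable. Checking whether /system is currently mounted read-write is straightforward; here’s a minimal sketch, assuming a shell on the device with Python available purely for illustration (on a real phone you’d simply run mount from an adb shell, and keep in mind an infection may have flipped the flag back after installing itself):

```python
# Minimal check of /system's mount flags by parsing /proc/mounts.
# A writable /system on a stock device is a red flag, though not proof of infection.

def system_mount_flags(mounts_path="/proc/mounts"):
    """Return the mount options for /system, or None if it isn't a separate mount."""
    with open(mounts_path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == "/system":
                return fields[3].split(",")
    return None

flags = system_mount_flags()
if flags is None:
    print("No separate /system mount found (system-as-root devices lay this out differently)")
elif "rw" in flags:
    print("WARNING: /system is mounted read-write, unexpected on a stock device")
else:
    print("/system is mounted read-only, as expected")
```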

GitHub On The Go

It is hard to find anyone doing any kind of software development who doesn’t have some interaction with GitHub. Even if you don’t host your own projects there, there is so much to study and borrow on the site that it is nearly ubiquitous. However, when you’ve needed GitHub on the go, you’ve probably had to turn to your phone’s browser and settle for a reduced experience. GitHub for Mobile is now out of beta and promises a more fluid phone-based GitHub experience.

In addition to working with tasks and issues, you can also review and merge pull requests. The app sends your phone notifications, too, which can be handy. As you might expect, you can get the app for Android or iPhone in the respective stores.


This Week In Security: Mass iPhone Compromise, More VPN Vulns, Telegram Leaking Data, And The Hack Of @Jack

In a very mobile-centric installment, we’re starting with the story of a long-running iPhone exploitation campaign. It’s being reported that this campaign was being run by the Chinese government. Attack attribution is decidedly non-trivial, so let’s be cautious and say that these attacks were probably Chinese operations.

In any case, Google’s Project Zero was the first to notice and disclose the malicious sites and attacks. There were five separate vulnerability chains, targeting iOS versions 10 through 12, with at least one previously unknown 0-day vulnerability in use. The Project Zero write-up is particularly detailed, and thoroughly documents each of the exploit chains.

The payload as investigated by Project Zero doesn’t permanently install any malware on the device, so if you suspect you could have been compromised, a reboot is sufficient to clear your device.

This attack is novel in how sophisticated it is, while simultaneously being almost entirely non-targeted. The malicious code would run on the device of any iOS user who visited the hosting site. The 0-day vulnerability used in this attack would have a potential value of over a million dollars, and exploits of that caliber have historically been reserved for similarly high-value targets. While the websites used in the attack have not been disclosed, the sites themselves were apparently aimed at certain ethnic and religious groups inside China.

Once a device was infected, the payload would upload photos, messages, contacts, and even live GPS information to the command & control infrastructure. It also seems that Android and Windows devices were similarly targeted in the same attack.

Telegram Leaking Phone Numbers

Telegram, best known for encrypted messages, also allows for anonymous communication. Protesters in Hong Kong are using that feature to organize anonymously, through Telegram’s public group messaging. However, a data leak was recently discovered, exposing the phone numbers of members of these public groups. As you can imagine, protesters very much want to avoid being personally identified. The leak is based on a feature — Telegram wants to automatically connect you to other Telegram users whom you already know.

By default, your number is only visible to people who you’ve added to your address book as contacts.

Telegram is based on telephone numbers. When a new user creates an account, they are prompted to upload their contact list. If one of the uploaded contacts has a number already in the Telegram system, those accounts are automatically connected, causing the telephone numbers to become visible to each other. See the problem? An attacker can load a device with several thousand phone numbers, connect it to the Telegram system, and enter one of the target groups. If there is a collision between the pre-loaded contacts and the members of the group, the number is outed. With sufficient resources, this attack could even be automated, allowing for a very large information gathering campaign.
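
To make the mechanics concrete, here is a rough sketch of the enumeration logic. The upload_contacts() and matched_accounts() functions are placeholders standing in for whatever client automation an attacker would use; they are assumptions for illustration, not real Telegram API calls:

```python
# Conceptual sketch of the contact-collision attack described above.
# upload_contacts() and matched_accounts() are attacker-supplied stand-ins,
# not real Telegram API calls.

def candidate_numbers(prefix="+85291", count=1_000_000):
    """Brute-force a block of plausible local mobile numbers (prefix is illustrative)."""
    return [f"{prefix}{n:06d}" for n in range(count)]

def deanonymize(group_member_ids, upload_contacts, matched_accounts):
    """Pre-load candidates as contacts, then intersect the accounts Telegram
    links back to us with the anonymous members of the target group."""
    candidates = candidate_numbers()
    revealed = {}
    for start in range(0, len(candidates), 5_000):
        upload_contacts(candidates[start:start + 5_000])
        for account_id, phone_number in matched_accounts():
            if account_id in group_member_ids:
                revealed[account_id] = phone_number
    return revealed
```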

In this case, it seems such a campaign was carried out, targeting the Hong Kong protesters. One can’t help but think of the first story we covered, and wonder if the contact data from compromised devices was used to partially seed the search pool for this effort.

The Hack of @Jack

You may have seen that the Twitter account of Twitter’s CEO, Jack [@Jack] Dorsey, was hacked, and a series of unsavory tweets were sent from that account. This seems to be a continuing campaign by [chucklingSquad], who have also targeted other high profile accounts. How did they manage to bypass two factor authentication and a strong password? Cloudhopper. Acquired by Twitter in 2010, Cloudhopper is the service that automatically posts a user’s SMS messages to Twitter.

Rather than a username and password or security token, the user is authenticated only by their cell phone number. Enter the port-out and SIM-swap scams, two similar techniques that can be used to steal a phone number. The port-out scam takes advantage of the legal requirement that phone numbers be portable: the attacker claims to be switching to a new carrier and has the number moved. A SIM-swap scam instead convinces the carrier that the customer is switching to a new phone and SIM card. It’s not clear which technique was used, but I suspect a port-out scam, as Dorsey hadn’t gotten his cell number back after several days, while a SIM-swap scam can usually be resolved much more quickly.

Google’s Bug Bounty Expanded

In more positive news, Google has announced the expansion of their bounty programs. In effect, Google is now funding bug bounties for the most popular apps on the Play store, in addition to Google’s own code. This seems like a ripe opportunity for aspiring researchers, so go pick an app with over 100 million downloads, and dive in.

In an odd coincidence, that 100 million figure is approximately how many downloads CamScanner had when it was pulled from the Play Store for malicious behavior, which seems to have been introduced by a third-party advertisement library.

Updates

Last week we talked about Devcore and their VPN appliance research work. Since then, they have released part 3 of their report. Pulse Secure’s vulnerabilities aren’t nearly as easy to exploit, but the Devcore team did find a pre-authentication vulnerability that allowed reading arbitrary data off the device filesystem. As a victory lap, they compromised one of Twitter’s vulnerable devices, reported it to Twitter’s bug bounty program, and took home the highest tier reward for their trouble.

This Week In Security: SWAPGS, Malicious Shaders, More iOS Woes, And WPA3

I’m sure you’ve heard of Spectre, which was the first of many speculative execution vulnerabilities found in modern processors. A new one just popped up this week. At Black Hat on Tuesday, Bitdefender announced CVE-2019-1125, dubbed SWAPGS.

SWAPGS is an x86_64 instruction intended for use in context switching, that is, when execution is transferred from a user-space program back into the kernel. Specifically, SWAPGS swaps the value of the GS register so that it refers either to a memory location in the running application or to a location in the kernel’s space. An unprivileged program can attempt to call this instruction and leak kernel memory contents as a result of the processor speculatively executing it (similar to Spectre). Even though the instruction is ultimately not executed, because a userspace program doesn’t have sufficient privilege to do so, the contents of the processor cache have already been measurably altered, and an attack could feasibly leverage this to read arbitrary kernel memory.

While the initial reports have mentioned both AMD and Intel products, AMD has released a statement:

AMD is aware of new research claiming new speculative execution attacks that may allow access to privileged kernel data. Based on external and internal analysis, AMD believes it is not vulnerable to the SWAPGS variant attacks because AMD products are designed not to speculate on the new GS value following a speculative SWAPGS. For the attack that is not a SWAPGS variant, the mitigation is to implement our existing recommendations for Spectre variant 1.

Patches for Windows and Linux have been released, and Red Hat has an informative write-up on the vulnerability. I would have reviewed Bitdefender’s whitepaper on the vulnerability, but rather than make it freely available, they have opted to require a name and email address. While I would like to see their work, I refuse to sell my contact information in exchange for access.

A Malicious Shader?

This is the first time I can remember hearing of a malicious pixel shader. Cisco Talos announced a set of vulnerabilities targeting VMware and NVIDIA graphics drivers.

Shaders are specialized programs that run on a video card, and are generally used to apply effects like blur, lighting, bump mapping, and more. Most of the graphical improvements in the last few years of gaming are a result of shaders.

Talos researchers were specifically looking at how to compromise a VM hypervisor from inside a guest OS, and they discovered that when a host provides 3D acceleration to the guest, shaders are passed directly through to the host’s system drivers without verification. Because the NVIDIA drivers are also vulnerable, this could allow a malicious program in the guest to run arbitrary code on the hypervisor.

While this is troubling enough, the topper is that a malicious shader could potentially be run via WebGL. Taken together, this represents a real danger: simply loading a malicious WebGL-enabled page could compromise not only a conventional machine, but also the bare-metal host OS when the browser is running inside a guest instance.

Both NVIDIA and VMware have already released driver updates that fix the flaws, so go update!

iOS Problems

Natalie Silvanovich of Google’s Project Zero released a set of five iOS vulnerabilities on Wednesday the 7th. These are not garden-variety bugs, but so-called “zero click” problems where no user interaction is required for exploitation.

The first exploit, for example, is a spoofed visual voicemail message. Visual voicemail notifications are sent as specially formatted text messages and contain information about the message and the address of an IMAP server to connect to and download the message. That information can be spoofed, leading a device to try to download a message from an IMAP server in the control of an attacker. From that point, finding a bug in the iOS IMAP handling code was relatively easy.

Five of the vulnerabilities have been fixed in iOS updates. A sixth, CVE-2019-8641, has yet to be fixed. While a few hints about this problem are given, the details have been withheld until an update has been released to fully fix it. One could be a bit cynical and point out that it’s the Google research team announcing these flaws. While there is certainly a self-serving angle to consider, it’s much better for iOS and consumers if flaws are fixed and publicized, rather than kept secret and sold to an offensive security vendor.

One more iOS story is Apple Bleee. Bluetooth Low Energy is an extremely useful communication protocol, allowing Apple devices to pull off much of their seemingly magic functionality. The downside is that to make the magic happen, iOS devices are constantly sending BLE signals, probing for other devices. The researchers at Hexway realized that these signals leak lots of data about your device, potentially including your phone number.

iOS uses a SHA256 hash of the device’s phone number as an identifier when using AirDrop. SHA256 is still a reasonably secure one-way hash, so there’s no problem, right? The clever realization is that while the hash itself is secure and the output space is far too large to attack, the input space is small enough to be manageable. An attacker could focus on the area codes most common in their region, limiting the target space further. From there, the SHA256 hashes for all valid numbers can be pre-calculated and stored in a lookup table.
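
As a back-of-the-envelope sketch of that lookup table: the exact string format Apple hashes and how much of the digest ends up in the BLE traffic are assumptions here, but the arithmetic is the point. One area code is only ten million numbers, which is trivial to pre-compute:

```python
import hashlib

def build_table(area_code="415", count=10_000_000):
    """Map SHA-256(number) -> number for every possible line in one area code.
    A naive in-memory dict like this gets large; real tooling would persist it to disk."""
    table = {}
    for line in range(count):
        number = f"+1{area_code}{line:07d}"
        table[hashlib.sha256(number.encode()).hexdigest()] = number
    return table

table = build_table()

def recover(observed_hash):
    """Reverse an observed hash with a simple lookup instead of a brute-force search."""
    return table.get(observed_hash, "not in the pre-computed set")
```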

More WPA3 Problems

We’ve discussed Dragonblood, a WPA3 analysis project. A new problem has been identified, a timing analysis attack that leaks information about the internal state of the encryption algorithm.

This Week In Security: Ransomware Keys, iOS Woes, And More

Remember the end of GandCrab we talked about a couple weeks back? A new wrinkle to this story is the news that a coalition of law enforcement agencies and security researchers have released a decrypter and the master decryption keys for that ransomware. It’s theorized that researchers were able to breach the command and control servers where the master keys were stored. It’s not yet known whether this breach was the cause of the retirement, or a result of it.

Apple’s Secure Enclave is Broken?

A YouTube video and Reddit thread show a way to bypass the iPhone’s TouchID and FaceID, allowing anyone to access the list of saved passwords. The technique for breaking into that data? Tap the menu option repeatedly, and cancel the security prompts. Given enough rapid tries, the OS gives up on the validation and simply shows the passwords!

The iPhone has an onboard security chip, the Secure Enclave, that is designed to make this sort of problem nearly impossible. The design specification dictates that data like passwords are encrypted, and the only way to decrypt them is to use the Enclave. The purpose is to mitigate the impact of programming bugs like this one. It seems that the issue is limited to the iOS 13 beta releases, and you’d expect bugs in a beta, but a bug like this casts some doubt on the effectiveness of Apple’s Secure Enclave.

URL Scheme Hijacking

Our next topic is also iOS related, though it’s possible the same issue could affect Android phones: URL scheme problems. The researchers at Trend Micro took a look at how iOS handles conflicting app URLs. Outside of the normal http: and https: URLs, applications can register custom URL schemes in order to simplify inter-process communication. The simplest example is something like an email address and the mailto: scheme. Even on a desktop, clicking one of these links will open a different application to handle that request. What could go wrong?

One weakness in using URL schemes like this is that not all apps properly validate what launched the request, and iOS allows multiple apps to use the same URL scheme. In the example given, a malicious app could register the same URL handler as the target, and effectively launch a man-in-the-middle attack.
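
As a toy illustration of why ambiguous scheme registration is dangerous (a conceptual sketch only, not how iOS actually arbitrates the conflict), imagine a naive dispatcher where the most recent registration silently wins:

```python
# Toy model of URL-scheme dispatch where the last registration wins.
# This is NOT Apple's resolution logic; it only illustrates why two apps
# claiming the same scheme is a problem.

handlers = {}

def register_scheme(scheme, app_name, handler):
    handlers[scheme] = (app_name, handler)   # a later install silently shadows the original

def open_url(url):
    scheme = url.split(":", 1)[0]
    app_name, handler = handlers[scheme]
    print(f"Dispatching {url!r} to {app_name}")
    handler(url)

register_scheme("bank", "LegitBankingApp", lambda url: print("  legit login flow"))
register_scheme("bank", "LookalikeApp", lambda url: print("  credentials quietly siphoned"))

open_url("bank://login?token=abc123")   # ends up in LookalikeApp
```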

Bluekeep, and Patching Systems

It has been five weeks since Bluekeep, the Remote Desktop Protocol vulnerability, was revealed. Approximately 20% of the vulnerable systems exposed to the internet have been patched. Bitsight has been running scans for the remaining vulnerable machines, and estimates that about 800,000 systems are still vulnerable. You may remember this particular vulnerability was considered so problematic that even the NSA released a statement encouraging patching. So far, there hasn’t been a worm targeting the vulnerability, but it’s assumed that at least some actors have been using this vulnerability in attacks.

Does Library Bloat Make Your Smartphone App Look Fat?

While earlier smartphones seemed to manage well enough with individual applications that only weighed in at a few megabytes, a perusal of the modern smartphone software store uncovers some positively monstrous file sizes. The fact that we’ve become accustomed to mobile applications requiring 100+ MB downloads on what’s often a metered Internet connection in only a few short years is pretty crazy if you stop to think about it.

Seeing reports that the Nest app for iOS tipped the scales at nearly 250 MB, [Alexandre Colucci] decided to investigate. On his blog he not only documents the process of taking the application apart piece by piece to find out just what’s eating up all that space, but lists some potential fixes which could shave a bit off the top. Even if you aren’t planning a spelunking expedition into your pocket supercomputer’s particular variant of the Netflix app, the methodology and tools he uses here are fascinating in their own right and might be something worth adding to your software bag of tricks.

By passing the application’s files through a disk usage visualizer called GrandPerspective, [Alexandre] immediately identified some rather large blocks of content. The bundled Apple Watch version of the app takes up 23 MB, video and audio used to walk the user through the device setup weigh in at 22 MB, and localization files for various languages consume a surprising 33 MB. But the biggest single contributor to the application’s heft is the assorted libraries and frameworks, which total an incredible 67 MB.
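
You don’t strictly need GrandPerspective for a first pass, either; a few lines of Python walking an extracted copy of the bundle (the Nest.app path below is illustrative) produce a similar per-directory breakdown:

```python
import os

def sizes_by_top_dir(bundle_path):
    """Total file sizes under each top-level directory of an app bundle."""
    totals = {}
    for root, _dirs, files in os.walk(bundle_path):
        rel = os.path.relpath(root, bundle_path)
        top = "(bundle root)" if rel == "." else rel.split(os.sep)[0]
        for name in files:
            size = os.path.getsize(os.path.join(root, name))
            totals[top] = totals.get(top, 0) + size
    return totals

for folder, size in sorted(sizes_by_top_dir("Nest.app").items(),
                           key=lambda kv: kv[1], reverse=True):
    print(f"{size / 1_048_576:8.1f} MB  {folder}")
```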

Of course the question is: how much of it is really necessary? It’s hard to be sure from an outsider’s perspective, but [Alexandre] notes that a few of the libraries used seem to be redundant or obsolete. In some cases this could be the result of old code still lurking in the project, but the four different libraries used for user tracking probably aren’t in there by accident. It also stands to reason that the instructional videos could be offloaded to something like YouTube, so that only users who need to view them have to expend their bandwidth on them.

Getting a little deeper into things, [Alexandre] notes that some of the localization images appear to be redundant. As a specific example, he points to the images of the Nest itself displaying Fahrenheit and Celsius temperatures. While logically only two image files should be needed, there are actually eight copies of the Celsius image, each filed away as language-specific. These redundant localization images could easily be stripped out, but with gains measured in only a few hundred kilobytes, it probably wasn’t considered worth the effort during development.
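
Spotting that sort of duplication is also easy to script. Here is a quick sketch (again, the Nest.app path is illustrative) that hashes every file in the bundle and reports any content that appears more than once:

```python
import hashlib
import os

def find_duplicates(bundle_path):
    """Group files by content hash and return any hash seen more than once."""
    seen = {}
    for root, _dirs, files in os.walk(bundle_path):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            seen.setdefault(digest, []).append(path)
    return {d: paths for d, paths in seen.items() if len(paths) > 1}

for digest, paths in find_duplicates("Nest.app").items():
    print(f"{len(paths)} identical copies:")
    for path in paths:
        print(f"  {path}")
```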

In the end there’s really not as much bloat as we might like to believe. There were some redundant files, maybe a few questionable library inclusions, and the Apple Watch version of the app could surely be separated out. Altogether, that might get you a savings of 30 to 40%, but still not enough to bring the app under 100 MB.

All signs point to the fact that modern smartphone software development is just a lot more burdensome than us hackers might like. Save for projects looking to put control back into the hands of the users, it looks like mobile operating systems aren’t going to be slimming down anytime soon.

Ask Hackaday: Why Aren’t We Hacking Cellphones?

When a project has outgrown using a small microcontroller, almost everyone reaches for a single-board computer — with the Raspberry Pi being the poster child. But doing so leaves you stuck with essentially a headless Linux server: a brain in a jar when what you want is a Swiss Army knife.

It would be a lot more fun if it had a screen attached, and of course the market is filled with options on that front. Then there’s the issue of designing a human interface: touch screens are all the rage these days, so why not buy a screen with a touch interface too? Audio in and out would be great, as would other random peripherals like accelerometers, WiFi, and maybe even a cellular radio when out of WiFi range. Maybe Bluetooth? Oh heck, let’s throw in a video camera and high-powered LED just for fun. Sounds like a Raspberry Pi killer!

And this development platform should be cheap, or better yet, free. Free like any one of the old cell phones that sit piled up in my “hack me” box in the closet, instead of getting put to work in projects. While I cobble together projects out of Pi Zeros and lame TFT LCD screens, the advanced functionality of these phones sits gathering dust. And I’m not alone.

Why is this? Why don’t we see a lot more projects based around the use of old cellphones? They’re abundant, cheap, feature-rich, and powerful. For me, there are two giant hurdles to overcome: the hardware and the software. I’m going to run down what I see as the problems with using cell phones as hacker tools, but I’d love to be proven wrong. Hence the “Ask Hackaday”: why don’t we see more projects that re-use smartphones?
