This Week In Security: XcodeSpy, Insecure SMS, And Partial Redactions

There seems to be a new trend in malware, targeting developers and their development and build processes. The appeal is obvious: rather than working to build and market a malicious application, an attacker just needs to infect a development machine. The hapless infected developers can now do the hard work to spread the malicious payload.

The newest example is XcodeSpy, discovered by a researcher who chose to remain anonymous. It works by using the Xcode IDE’s Run Script function to, well, run a script that completely backdoors your computer. The instance was found in a repackaged version of an open source project, TabBarInteraction, whose authors are innocent victims here. It was simple enough for someone to insert a script into the build process and distribute the new, doped package. It’s probably not the only one out there, so watch out for Run Scripts with obfuscated payloads.
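One way to watch for this yourself is to scan a project file for Run Script phases that contain obfuscation markers before building someone else’s code. A minimal sketch in Python, assuming a few illustrative marker strings (the function name and marker list are hypothetical, not taken from the XcodeSpy report):

```python
import re

# Hypothetical marker strings that often show up in obfuscated build scripts.
# Tune this list for your own threat model.
SUSPICIOUS = ("eval", "base64", "curl ")

def suspicious_run_scripts(pbxproj_text):
    """Return the shellScript bodies in a project.pbxproj that contain a marker.

    Xcode stores Run Script phases as quoted shellScript strings inside
    project.pbxproj; the regex below pulls those strings out, handling
    escaped quotes.
    """
    scripts = re.findall(r'shellScript\s*=\s*"((?:[^"\\]|\\.)*)"', pbxproj_text)
    return [s for s in scripts if any(m in s for m in SUSPICIOUS)]
```

Run it over each `project.pbxproj` in a downloaded project and eyeball anything it flags; an empty result is not a clean bill of health, just the absence of the crudest tells.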

Drupal Module Security

Drupal is much like WordPress, in that the core project has very few serious vulnerabilities, but very serious problems are often found in the library of available extensions. Case in point, the Fast Autocomplete module has a “moderately critical” vulnerability. There are two interesting points here. The first is the problem itself, which is a data exposure issue. This extension provides a search bar experience that shows suggested auto-completions, and provides snippets associated with each auto-completion. By default, those search results are limited to what an anonymous user can access on the site. The extension also provides the option to search private content, using the permissions available to the user accessing the site. The problem is that the extension caches all those results, and doesn’t properly segregate them in the cache. So, once a privileged user has searched for something private, any user can repeat the search and access the snippets, even though that information is on a non-accessible page.
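The flaw is easy to model: if the cache key omits the requester’s permissions, a privileged user’s results get served to everyone who repeats the query. A toy sketch (class and field names are illustrative, not Drupal’s actual API):

```python
# Toy model of the Fast Autocomplete caching flaw. When per_user_keys is
# False, results are cached by query alone, so private snippets leak to
# anonymous users once a privileged user has primed the cache.
class SearchCache:
    def __init__(self, per_user_keys):
        self.per_user_keys = per_user_keys
        self.cache = {}

    def search(self, query, can_see_private):
        # The fix is simply including the permission context in the key.
        key = (query, can_see_private) if self.per_user_keys else query
        if key not in self.cache:
            results = ["public hit"]
            if can_see_private:
                results.append("private snippet")
            self.cache[key] = results
        return self.cache[key]

# Flawed configuration, mirroring the vulnerable module:
leaky = SearchCache(per_user_keys=False)
leaky.search("secret plan", can_see_private=True)   # editor primes the cache
print(leaky.search("secret plan", can_see_private=False))  # anonymous user
# → ['public hit', 'private snippet']
```

The real fix in cache design terms is exactly the one-line change shown in the comment: fold the permission context into the cache key, trading some cache hit rate for correctness.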

Drupal does something interesting, called their Security Advisory Policy. In a nutshell, the Drupal team selects a handful of widely used extensions and provides limited security support for those projects. This seems to primarily mean coordinating vulnerability announcements.

Cell Number Dangers

You know how almost every online service wants to know your cell number? One of the reasons is that receiving a text message is one of the most popular second factors of authentication, not to mention account recovery mechanisms. While this is popular, it’s a horrible idea. There have been multiple attacks against SMS that easily lead to account takeover through this recovery procedure. An attacker can call your mobile provider and request a new SIM card for your number, known as a SIM swap attack. They can spoof your identity to the SS7 network, leading to SMS and voice call spying. And don’t forget the ever popular number migration fraud, where an attacker claims to be you, moving your number to a new provider.

It turns out, there’s an even easier way to intercept SMS messages. [Lucky225] has been intrigued by SMS fraud for years, and brings us his work on SMS Routing. NetNumber ID is a routing service intended for VoIP and business users to handle text messages, even though they aren’t using a traditional SMS device. There is a distinct lack of oversight over this process, and until recently, it was possible to hijack any cell number’s SMS routing through a simple request. Vice has a rather nice example of [Lucky225] demonstrating the attack, using $16 and a fake Letter of Authorization.

Zoom Showing Too Much

Oversharing on Zoom is one of the fun, cringey, and sometimes disturbing collective memories we have of 2020. From roommates walking into the shot, to meetings sans pants, it was a crazy time. There’s another way to overshare, at least when you’re sharing a video feed of your desktop or an open application. It’s common sense not to leave anything sensitive open on your machine when you’re sharing your desktop view. However, a pair of researchers from SySS discovered that even when sharing only an application, other application windows may briefly appear in the video feed.

When a different application is drawn on top of the one captured by Zoom, a few frames of the sent video may contain the image of that application. If the call is recorded by one of the other parties, they can pull the frame and see exactly what was unintentionally visible. Now this is usually going to be as mundane as seeing what browser tabs are open, or getting a look at notes for the call. But from a password manager to personal information, there are certainly ways this bug could end very badly.

Partial Redaction

Speaking of unintentional data exposure, I came across a fun story about a partially redacted RSA key that was posted online. As our lot are wont to do, a few crypto geeks set about trying to figure out the whole key from the partial screenshot. Within three hours, they had deduced the full key. The write-up states that the hardest and most time-consuming element was converting the screenshot back into text.
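To see why a partial leak can be fatal, note that the public modulus plus any one fully recovered private parameter is enough to reconstruct everything else. A sketch using the textbook tiny primes (a real key uses 1024-bit-plus primes, but the arithmetic is identical):

```python
# Toy RSA key: p and q would normally be enormous secret primes.
p, q, e = 61, 53, 17
n = p * q                        # public modulus, 3233
phi = (p - 1) * (q - 1)          # 3120
d = pow(e, -1, phi)              # private exponent via modular inverse (Python 3.8+)

# Attacker's view: n and e are public, and p was readable in the screenshot.
q_recovered = n // p
d_recovered = pow(e, -1, (p - 1) * (q_recovered - 1))
assert d_recovered == d          # full private key recovered

# Sanity check: encrypt with the public key, decrypt with the rebuilt one.
msg = 42
assert pow(pow(msg, e, n), d_recovered, n) == msg
```

A PEM-encoded private key stores n, e, d, p, q, and the CRT parameters in sequence, so a redaction bar that leaves any one of the secret values intact leaves nothing actually secret.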

There have been many stories over the years about redaction failures. Redacted PDFs can sometimes be read through simple copy and paste. In some images, you can figure out text based on a single row of pixels visible above and below the deleted text. And finally, the simple blurring tools from photo editing suites are reversible, leading to easily recovered text. All this to say, doing redaction properly can be very difficult, and as the writeup concludes, “if you find something private, keep it that way.”

25 thoughts on “This Week In Security: XcodeSpy, Insecure SMS, And Partial Redactions”

  1. Read Ken Thompson’s Turing address, “Reflections on Trusting Trust”.

    Then consider that being done to gcc and distributed as a binary pkg via one or more of the major distros.

    It is highly likely this has already been done more than once. Because gcc has had so many extensions added, it’s unlikely that any other compiler is capable of compiling it. A thousand eyes can examine the source code but won’t find a trace. The extensions prevent using a known clean compiler to build gcc.

    Game over.

    1. The problem is that this assumes you are compiling the compiler with only one compiler. If you compile GCC 6 twice, once using GCC 5 and once more using Clang, then you can compare the binaries produced by the resulting GCC 6 builds. Compromising one compiler is hard enough, but compromising every compiler in the same coordinated way is a far taller order.

      There is also the CompCert C compiler which does formal verification on every part of its build process.

        1. Also different forms of the same instruction e.g. xchg ax,bx versus xchg bx,ax. Both instructions will perform the same function but use different opcodes. I believe that this has been used to fingerprint compilers and assemblers in the past.

        2. No, he’s got a point. Compile the same GCC 6 codebase using various (possibly untrusted) compilers. Then compile the actual application using these GCC 6 builds. The outputs should be bit-for-bit equal (assuming that GCC 6 is deterministic). If not, then something shady is going on….
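This double-compile check is essentially David A. Wheeler’s “diverse double-compiling” defense against the trusting-trust attack. A toy model, with a pure hash function standing in for a (deterministic) compiler, so the comparison logic is visible:

```python
import hashlib

# Model compilation as a pure function of (source, semantics of the compiling
# compiler). Different compilers emit different bits, but a correct compiler's
# *behavior* depends only on the source it was built from. Real compilers add
# nondeterminism (timestamps, paths) that this toy deliberately ignores.
def compile_with(source, compiler_behavior):
    bits = hashlib.sha256((source + compiler_behavior).encode()).hexdigest()
    return {"bits": bits, "behavior": source}

gcc6_src = "gcc6-source"

# Stage 1: build GCC 6 with two unrelated compilers. Binaries differ; fine.
stage1_a = compile_with(gcc6_src, "gcc5-behavior")
stage1_b = compile_with(gcc6_src, "clang-behavior")
assert stage1_a["bits"] != stage1_b["bits"]

# Stage 2: self-compile GCC 6 with each stage-1 build. Both stage-1 compilers
# implement the same (GCC 6) semantics, so the outputs must be bit-identical.
stage2_a = compile_with(gcc6_src, stage1_a["behavior"])
stage2_b = compile_with(gcc6_src, stage1_b["behavior"])
assert stage2_a["bits"] == stage2_b["bits"]
print("stage-2 binaries match")
```

A stage-2 mismatch means at least one of the stage-1 compilers injected behavior not present in the GCC 6 source, which is exactly the trusting-trust backdoor’s signature.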

    2. “Then consider that being done to gcc and distributed as a binary pkg via one or more of the major distros.”

      That has already been done, it is called EMACS.

      (grinning, ducking, and running)

    3. gcc does most certainly compile with the Solaris compiler, the AIX compiler, and with Visual C++. It also compiles with crappy old versions of gcc.
      Think about it, how else does gcc get onto these platforms?

      1. I’ll try building the current gcc suite with an old Oracle/Sun/Forte compiler. The fact remains that Thompson’s comments stand. If *most* people recompile gcc with an infected compiler, then *most* programs are infected.

        Having recently installed an Ubuntu gnuplot package which did not have any X graphical terminals because the person who built the package was somewhat clueless, I don’t have a lot of confidence that the 3 letter agencies in the US, Russia and China haven’t “fixed” gcc to suit their needs.

    4. I’ve been fairly reliably told a story of the gcc used by a specific company being compromised (likely by a state-level actor). Story dates back to early internet days, and the gcc in question was for an unusual architecture, so the version they used was shared round the company rather than everyone getting a fresh clean copy from the distribution.

  2. Heh heh, kinda like the fact that Google Form quizzes (that are supposed to have stages you have to pass through) send the answers right there in the DOM. While this isn’t as big of a deal, it seems like some dev somewhere just wanted it to be really easy to find those answers. They’re all grouped together, unobfuscated in any way, and completely in order on the client side.

    (Makes me really confident in this company’s ability to handle my personal info \s)

    1. “As our lot are wont to do” in the first paragraph of the last section, not sure what that was supposed to be but auto correct or something made it pretty confusing hehe.

      1. No, no, that’s exactly what it’s supposed to say. “Our lot”, meaning security minded hackers and tinkerers. “Wont” means being in the habit or custom of doing something. So to reword: “as geeks like us tend to do”.

        1. tekkieneet: I’m rather heavily immersed in CS, with a master’s degree in CS and having worked in software my entire life, and even I recognize that rather well known (if slightly cliché) phrase. May I suggest you read some fiction (non-SF) novels or enjoy a play or two?

      2. That way of speaking is pretty common, meaning “the way our group of people tends to do things”. Maybe more used in some places than others, or more in the US than in the UK, but not that strange a way of phrasing it.

  3. “An attacker can call your mobile provider and request a new SIM card for your number, known as a SIM swap attack.”

    Seems an alert to your number or e-mail would be called for, alerting you that something unauthorized is happening. Plus, the new card should go to the address on file.

    “And don’t forget the ever popular number migration fraud, where an attacker claims to be you, moving your number to a new provider.”

    Porting, although when I did it, it took a few days. It also seems an alert of some kind would help. Remember, a lot of this works because the original owner doesn’t know what’s happening in their name. Making the whole process visible is a counter to the secrecy this crime depends upon.

  4. So … with only 40% of the private key, it was possible to re-derive the entire key. In this specific case, how much more would have been necessary to redact to prevent this?

  5. It’s not about how much was redacted, it’s about what remains. In this case it was so easy because two of the parameters were visible in their entirety, one of which was a big prime. Redacting every other line, which amounts to only 50% of the private key (less redaction!), would have made recovery much more difficult.

  6. Why even do a bizarre brag post with the partial key? The only people that could verify it would be the owners, so posting it is pointless. I hope it was a made-up post with a fake key.
