This Week In Security: Peering Through The Wall, Apple’s GoFetch, And SHA-256

The Linux command wall is a holdover from the way Unix machines used to be used. It’s an abbreviation of Write to ALL, and it was first included in AT&T Unix, way back in 1975. wall is a tool that a sysadmin can use to send a message to the terminal session of every logged-in user. So far, nothing too exciting from a security perspective. Where things get a bit more interesting is with ANSI escape codes: the control codes that move the cursor around on the screen, also inherited from the olden days of terminals.

The modern wall binary is actually part of util-linux, rather than being a continuation of the old Unix codebase. On many systems, wall runs setgid, so the behavior of the system binary really matters. It’s accepted that wall shouldn’t be able to send control codes, and when processing a message supplied via standard input, those control codes get rejected by the fputs_careful() function. But when a message is passed in on the command line, as an argument, that function call is skipped.
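To make the filtering concrete, here’s a rough Python analogue of the kind of control-code handling the stdin path applies. The function name, caret notation, and details here are my own illustration, not util-linux’s actual fputs_careful() implementation:

```python
def write_careful(text: str) -> str:
    """Escape control characters so terminal escape sequences are neutralized.

    A rough analogue of the filtering applied to stdin-supplied wall
    messages; the real util-linux fputs_careful() differs in detail.
    """
    out = []
    for ch in text:
        if ch in ("\t", "\n") or ch.isprintable():
            out.append(ch)          # ordinary text passes through
        elif ord(ch) < 0x20:
            # Render control characters in caret notation, e.g. ESC -> ^[
            out.append("^" + chr(ord(ch) + 0x40))
        else:
            out.append(repr(ch)[1:-1])  # fallback: backslash escape
    return "".join(out)

# An ANSI "clear screen, home cursor" sequence comes out harmless:
print(write_careful("\x1b[2J\x1b[Hhello"))  # ^[[2J^[[Hhello
```

The command-line argument path skipping a step like this is the whole bug: the escape byte reaches the victim’s terminal intact.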

This allows any user who can send wall messages to also send ANSI control codes. Is that really a security problem? There are two scenarios where it could be. The first is that some terminals support writing to the system clipboard via escape codes. The other, more creative issue, is that the output from a running binary could be overwritten with arbitrary text. Text like:
Sorry, try again.
[sudo] password for jbennett:

You may have questions. Like, how would an attacker know when such a message would be appropriate? And how would this attacker capture a password that has been entered this way? The simple answer is by watching the list of running processes and the system log. Many systems have a command-not-found function, which will print the failing command to the system log. If that failing command is actually a password, then it’s right there for the taking. Now, you may think this is a very narrow attack surface that’s not going to be terribly useful in real-world usage. And that’s probably pretty accurate. It is a really fascinating idea to think through, and definitely worth getting fixed.
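To see how the fake prompt above could be delivered, here’s an illustrative Python sketch of such a payload. The sequences are standard ANSI CSI codes; the exact combination an attacker would use is my own guess, not taken from a published exploit:

```python
# Illustrative only: the kind of ANSI payload the wall bug made deliverable.
# \x1b[3A moves the cursor up three lines; \x1b[K erases to end of line.
ESC = "\x1b"
payload = (
    f"{ESC}[3A"                              # cursor up, over the real output
    f"{ESC}[K" + "Sorry, try again.\n"       # overwrite with a fake error
    f"{ESC}[K" + "[sudo] password for jbennett: "  # and a fake prompt
    f"{ESC}[K"
)
# A vulnerable wall delivered this verbatim when passed as an argument,
# e.g. (hypothetically):  wall "$payload"
print(repr(payload))
```

On a vulnerable terminal, the cursor-movement codes repaint the victim’s screen rather than printing as text, which is exactly what the stdin filtering exists to prevent.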

Edge’s Private API

So there’s a funny thing that happens when you visit certain Microsoft web pages in the Edge browser. There are buttons labeled “Try now” for various features, like the Drop file-sharing function. When you click the button, the browser opens the Drop sidebar to show off how it works. In retrospect, that should have seemed really odd. The secret is that when Microsoft builds Edge from Chromium source, it adds the edgeMarketingPagePrivate API, giving a certain list of Microsoft pages extra permissions to do things in the browser.

One of those permissions has a bit of a problem: Installing themes. The dirty secret is that a Chromium theme is really a Chromium extension, with a subset of features and permissions. Edge gives Javascript from Microsoft pages the special permission to install a theme. The actual vulnerability here is that this API also unintentionally allows the silent installation of any extension, not just themes. And extensions can be particularly powerful, with the ability to read and modify web pages, access cookies, and more.

While that’s obviously not great, there is the limited attack surface to think about. To abuse this, an attacker needs to be able to put JS on a Microsoft site. There are some far-fetched but not impossible scenarios, like a rogue actor at Microsoft, or an XSS (Cross Site Scripting) vulnerability discovered on one of those sites. Then there are more feasible attack vectors, like a malicious browser extension with few permissions, that uses this bug to install an extension with every permission. Or what about an enterprise security appliance that has a trusted SSL certificate, and can snoop on web pages? If such a device were compromised, slipping a bit of Javascript into a Microsoft page seems entirely feasible.

Regardless, version 121.0.2277.98 of Edge contained a fix, adding a check that only themes can be installed via this API. The fix landed on February 9th, just shy of the 90-day disclosure deadline.

GoFetch

At least one notorious Internet personality has referred to the latest Apple vulnerability disclosure as a backdoor. This seems to over-hype the problem a bit. What we really have is a side-channel that can expose keys. Apple’s M1 and M2 processors have a Data Memory-dependent Prefetcher (DMP) that looks ahead in program execution, and attempts to load memory into cache before it is needed.

The problem is that one of the techniques to pull this off is to look at program memory for pointer-like values, and cache the contents of the memory at those locations. This means that an otherwise black-box cryptography operation can change the system state in detectable ways. The end result is that if an attacker controls the data being acted on in a cryptographic process, and can run a second process on the same machine, the keys themselves can be derived.
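A toy model helps show why this is detectable. This is emphatically not Apple’s hardware behavior, just a hypothetical Python sketch of a prefetcher that treats pointer-like data specially:

```python
# Toy model of a data memory-dependent prefetcher (DMP). Not Apple's
# silicon -- just an illustration of why secret-dependent data that
# *looks like* a pointer changes observable cache state.
VALID_ADDRESSES = set(range(0x1000, 0x2000, 0x40))  # pretend mapped cache lines

def dmp_scan(memory_words, cache):
    """Scan data the way a DMP might: anything pointer-like gets prefetched."""
    for word in memory_words:
        if word in VALID_ADDRESSES:   # value resembles a mapped address
            cache.add(word)           # side effect an attacker can time

# Two "ciphertext" buffers that differ only in one secret-dependent word:
cache_a, cache_b = set(), set()
dmp_scan([0xdeadbeef, 0x1040], cache_a)   # contains a pointer-like value
dmp_scan([0xdeadbeef, 0x0042], cache_b)   # does not
print(cache_a != cache_b)  # True: the secret leaked into cache state
```

The real attack measures memory access timing to tell the two cache states apart, but the core leak is the same: data was interpreted as an address, and the cache remembers.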

SHA-256 Collisions — Nearly Halfway There

This is the sort of thing that makes a security nerd’s blood run cold for a moment. We now have practical attacks against SHA-256 — for the first 31 steps. This requires a bit of context. SHA-256 is a cryptographic hashing function that takes an input, lays it out into a Message Schedule, and then performs 64 steps of mixing operations. It’s those mixing steps that accomplish the one-way nature of SHA-256. What’s claimed here is that if we made a version of SHA-256 that used only 31 mixing steps, we could perform collision attacks against it.
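To make the structure concrete, here’s a compact pure-Python SHA-256 with a configurable number of mixing steps. The rounds parameter is my own addition for illustration; at the full 64 rounds it matches the standard function, and truncating it models the reduced-round variant the researchers attacked:

```python
import hashlib
import struct

# Round constants and initial hash values from FIPS 180-4.
K = [
    0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1,
    0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
    0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786,
    0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
    0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147,
    0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
    0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b,
    0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
    0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a,
    0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
    0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2,
]
H0 = [0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
      0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19]

def _rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & 0xffffffff

def sha256(msg: bytes, rounds: int = 64) -> bytes:
    """SHA-256 with a configurable number of mixing steps (64 = the real thing)."""
    # Pad: append 0x80, zeros, then the 64-bit message bit length.
    data = msg + b"\x80" + b"\x00" * ((55 - len(msg)) % 64) + struct.pack(">Q", 8 * len(msg))
    h = list(H0)
    for off in range(0, len(data), 64):
        # Message Schedule: 16 input words expanded to 64.
        w = list(struct.unpack(">16I", data[off:off + 64]))
        for i in range(16, 64):
            s0 = _rotr(w[i-15], 7) ^ _rotr(w[i-15], 18) ^ (w[i-15] >> 3)
            s1 = _rotr(w[i-2], 17) ^ _rotr(w[i-2], 19) ^ (w[i-2] >> 10)
            w.append((w[i-16] + s0 + w[i-7] + s1) & 0xffffffff)
        a, b, c, d, e, f, g, hh = h
        for i in range(rounds):  # the mixing steps the attack truncates
            t1 = (hh + (_rotr(e, 6) ^ _rotr(e, 11) ^ _rotr(e, 25))
                  + ((e & f) ^ (~e & g)) + K[i] + w[i]) & 0xffffffff
            t2 = ((_rotr(a, 2) ^ _rotr(a, 13) ^ _rotr(a, 22))
                  + ((a & b) ^ (a & c) ^ (b & c))) & 0xffffffff
            a, b, c, d, e, f, g, hh = (t1 + t2) & 0xffffffff, a, b, c, (d + t1) & 0xffffffff, e, f, g
        h = [(x + y) & 0xffffffff for x, y in zip(h, [a, b, c, d, e, f, g, hh])]
    return b"".join(struct.pack(">I", x) for x in h)

# Full 64 rounds matches the real thing:
print(sha256(b"abc").hex() == hashlib.sha256(b"abc").hexdigest())  # True
```

The claimed collisions apply only to the rounds=31 variant of something like this, which is why the full function is still considered safe.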

The paper for this work has landed, and is as full of heavy cryptography as one would expect. The good news is we’re still a *very* long way from an actual SHA-256 attack, and the state of the art is moving quite slowly. Yes, your bitcoins are still safe.

It’s Not a Vulnerability

But servers are still getting compromised. The Ray framework is seeing widespread adoption as an easily deployable service for getting AI models up and doing real work. Unfortunately, it’s also getting attacked in an ongoing campaign. A Ray instance is quite the juicy target, with plenty of data to scrape, as well as lots of compute infrastructure to mine cryptocurrency on. What’s interesting is that Ray doesn’t have an authentication layer, by design.

Due to Ray’s nature as a distributed execution framework, Ray’s security boundary is outside of the Ray cluster

This isn’t the first popular application designed this way, and the common lesson is that when you hand users a footgun like this, some sizeable percentage of them will happily use it. A CVE was issued for the lack of authentication, but was (rightfully) disputed by Anyscale.

There’s an important distinction to make here: just because this issue isn’t a proper vulnerability doesn’t mean it isn’t a problem, or shouldn’t be improved. And that’s finally the conclusion Anyscale has come to. What’s now available is an official test script, slated to be included in Ray 2.11, that looks for exposure and warns about it. Time will tell if a future version of Ray will get full authentication by default.
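As a rough idea of what an exposure check involves, here’s a minimal Python sketch. It only probes a single TCP port (8265 is, to my understanding, Ray’s default dashboard port); the official Anyscale script is more thorough than a bare connection test:

```python
import socket

def ray_port_open(host: str, port: int = 8265, timeout: float = 1.0) -> bool:
    """Check whether a TCP port answers. 8265 is Ray's default dashboard port.

    A minimal sketch of an exposure check, not the official Anyscale
    script, which does considerably more than a bare TCP probe.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On a machine not running Ray, nothing should be listening:
print(ray_port_open("127.0.0.1"))  # False unless a Ray dashboard is running locally
```

The point of such a check: with no authentication layer, a reachable Ray port is effectively remote code execution by design, so "is it reachable from outside?" is the whole question.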

Bits and Bytes

In an interesting first, Zenhammer allows flipping DDR5 memory bits in a rowhammer attack, though only in one of the ten sticks of memory tested. To successfully pull off the attack against a Zen processor, the DRAM address obfuscation function had to be reverse engineered, and a few other Zen-specific techniques had to be used. From my read, Micron seemed to come out the winner in the small sample tested.

A pair of SharePoint vulnerabilities used at last year’s Pwn2Own contest have now made the list of actively exploited vulnerabilities. It’s a bit humorous that the vulnerabilities have been known for over a year, and only now are US federal agencies actually being forced to fix them.

Speaking of which, this year’s Pwn2Own contest just wrapped up. Over a million dollars was won by researchers, with Manfred Paul taking the top spot. We look forward to all of this year’s bugs getting fixed and disclosed.

And finally, Google was paying attention to the Loop DoS announcement, and has a report out about a real-world DoS attack that included a presumably unintentional loop element. CLDAP, a UDP partial implementation of LDAP, was used several years ago in a reflection attack against Google’s QUIC infrastructure. The QUIC servers responded with Reset packets to each of the CLDAP servers. And a handful of those servers sent the Reset packets right back, resulting in a 20 million packet-per-second loop across the Internet. The solution is fascinating too: ensure that Reset packets are always shorter than the packet being responded to, down to a threshold where packets are just ignored. Nifty.
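The loop-breaking rule can be sketched in a few lines of Python. The packet sizes, threshold, and function here are made up for illustration, not Google’s actual QUIC values:

```python
from typing import Optional

def make_reset(incoming: bytes, min_len: int = 40) -> Optional[bytes]:
    """Sketch of the loop-breaking rule described above (parameters are mine).

    Only answer if the reply is strictly shorter than what arrived; below
    a floor, stop replying entirely, so two servers reflecting each
    other's packets can't ping-pong forever.
    """
    if len(incoming) <= min_len:
        return None                        # too small: silently drop
    reset = b"\x00" * (len(incoming) - 1)  # stand-in for a real Reset packet
    return reset if len(reset) < len(incoming) else None

# Each round trip shrinks the packet until the exchange dies out:
pkt = b"\xff" * 43
hops = 0
while pkt is not None:
    pkt = make_reset(pkt)
    hops += 1
print(hops)  # 4
```

Because every reply is strictly smaller than the packet that provoked it, any accidental loop is guaranteed to terminate in a bounded number of hops instead of circulating at millions of packets per second.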
