This Week In Security: DDoS Techniques, Dirty Pipe, And Lapsus$ Continued

Denial-of-Service (DoS) amplification. Relatively early in the history of the Internet — it was only 14 years old at the time — the first DoS amplification attack was discovered. [TFreak] put together smurf.c, likely in 1997, though it’s difficult to nail the date down precisely.

The first real DoS attack had only happened a year before, in 1996. Smurf worked by crafting ICMP echo requests with a spoofed source address, the victim’s, and sending those packets to a network’s broadcast address. Every host that received the request would send its reply to the spoofed source, so one packet from the attacker turned into many packets aimed at the target, a bigger DoS attack for free. Fast forward to 1999, and the first botnet pulled off a Distributed DoS (DDoS) attack. Ever since then, there’s been an ongoing escalation of DDoS traffic size and the capability of mitigations.
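To make the smurf mechanics concrete before moving on, here’s a minimal sketch of what such a packet looks like. This isn’t [TFreak]’s smurf.c; the addresses are documentation placeholders, checksums are skipped, and the program only assembles the headers in a buffer and prints them rather than sending anything.

#define _DEFAULT_SOURCE
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/ip_icmp.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  struct iphdr ip;
  struct icmphdr icmp;
  memset(&ip, 0, sizeof(ip));
  memset(&icmp, 0, sizeof(icmp));

  ip.version  = 4;
  ip.ihl      = 5;
  ip.ttl      = 64;
  ip.protocol = IPPROTO_ICMP;
  ip.tot_len  = htons(sizeof(ip) + sizeof(icmp));
  ip.saddr    = inet_addr("198.51.100.10");  // spoofed source: the victim
  ip.daddr    = inet_addr("203.0.113.255");  // a network's broadcast address

  icmp.type = ICMP_ECHO;  // every host that answers the broadcast
  icmp.code = 0;          // sends its echo reply to the victim

  struct in_addr src = { .s_addr = ip.saddr };
  struct in_addr dst = { .s_addr = ip.daddr };
  printf("ICMP echo request, spoofed src=%s, ", inet_ntoa(src));
  printf("dst=%s (broadcast)\n", inet_ntoa(dst));
  return 0;
}

Actually injecting a packet like this would take a raw socket and root privileges, but the point is the header fields: the reflecting hosts believe whatever source address the header claims, and their replies go wherever it says.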

DNS and NTP quickly became the popular choices for amplification, with NTP requests managing an amplification factor of 556, meaning that for every byte an attacker sent, the amplifying intermediary would send 556 bytes on to the victim. You may notice that so far, none of the vulnerable services use TCP. The three-way handshake of TCP generally prevents the sort of misdirection needed for an amplified attack. Put simply, you can’t effectively spoof your source address with TCP.

There are a pair of new games in town, with the first being a clever use of “middleboxes”: devices like firewalls, Intrusion Prevention Systems, and content filters. These devices watch traffic and filter content or potential attacks. The key here is that many such devices aren’t actually tracking TCP handshakes; doing so would be prohibitively memory- and CPU-intensive. Instead, most such devices just inspect as many packets as they can. This has the unexpected effect of defeating the built-in anti-spoofing of TCP.

An attacker can send a spoofed TCP packet, no handshake required, and a vulnerable middlebox will miss the fact that it’s spoofed. While that’s interesting in itself, what’s really notable is what happens when the packet appears to be a request for a vulnerable or blocked resource. The appliance tries to interrupt the stream and inject an error message back to the requester. Since the requester’s address can be spoofed, this allows using these devices as DDoS amplifiers. As some of these services respond to a single packet with what is essentially an entire web page to convey the error, the amplification factor is literally off the charts. This research was published in August 2021, and in late February of this year, researchers at Akamai saw DDoS attacks actually using this technique in the wild.

The second new technique is even more alien. Certain Mitel PBXs have a stress-test capability, essentially a speed test on steroids. It’s intended to only be used against an internal network, not an external target, but until a recent firmware update that wasn’t enforced. For nearly 3,000 of these devices, an attacker could send a single packet and trigger the test against an arbitrary host. This attack, too, has recently been seen in the wild, though in what appear to be test runs. The stress test can last up to 14 hours at worst, leading to a maximum amplification factor of over four billion, measured in packets. The biggest problem is that phone systems like these are generally never touched unless there’s a problem, and there’s a decent chance that no one on site has the login credentials. That is to say, expect these to be vulnerable for a long time to come.

Dirty Pipe

This Linux vulnerability was found in the wild — not as a vulnerability, but just a regular old bug. [Max Kellermann] of CM4all had a customer that was seeing corrupted log archives. A single corrupted file isn’t unheard of, but this was the same daily log archive, corrupted in the same way repeatedly. This sort of reproducibility tends to make developers excited, because it means a specific bug that can be tracked down and fixed. So, he started looking for a bug in his code. After eliminating his own code as the culprit, he eventually concluded this was a kernel bug.

When you have excluded the impossible, whatever remains, however improbable, must be the truth.

— Sherlock Holmes

The bug turned out to be CVE-2022-0847, demonstrated by a pair of simple programs:

#include <unistd.h>
int main(int argc, char **argv) {
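  // keep writing "AAAAA" to stdout; the shell invocation below redirects that into the file foo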
  for (;;) write(1, "AAAAA", 5);
}
// ./writer >foo 

and

#define _GNU_SOURCE
#include <unistd.h>
#include <fcntl.h>
int main(int argc, char **argv) {
  for (;;) {
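    // splice two bytes from stdin (the file foo, via the < redirect) into stdout (the pipe to cat)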
    splice(0, 0, 1, 0, 2, 0);
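    // then write into that same pipe; on a vulnerable kernel these bytes can land in foo's page cache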
    write(1, "BBBBB", 5);
  }
}
// ./splicer <foo |cat >/dev/null 

I was able to replicate this bug on one of my machines. First, create a file: touch foo. Next, start the splicer program running: ./splicer <foo | cat >/dev/null. Finally, run the writer program: ./writer >foo. Let it run for a few seconds, and then terminate both processes. If there is no vulnerability, then foo will only contain a long string of “AAAAA”s. On the machine with a vulnerable kernel, grep revealed a multitude of “BBBBB”s mixed in.

The key here is the logic behind splice(). This system call is extremely useful for moving data quickly, as it asks the kernel to do the data copy between file descriptors without the need to move any bits into userspace. The wrinkle is what happens when one end of the splice is a pipe, a one-way communication tool. In the example code above, the shell’s | operator connects the splicer’s standard output to a pipe, and splice() moves data from the file on standard input into that pipe. Rather than copying anything, the kernel simply points the pipe’s buffer at the file’s pages in the page cache, and a per-buffer flag is supposed to control whether a later write() into the pipe may merge new data into that buffer. An old quirk relating to how that flag was handled was finally turned into a serious bug by a refactor in Linux 5.8. Suddenly, a write() into the pipe can land directly in the page cache, changing the file’s contents when it shouldn’t.
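For a feel of what splice() is normally for, here’s a minimal sketch of my own, not taken from [Max Kellermann]’s write-up, that copies a file to standard output by way of a pipe. At least one side of a splice() must be a pipe, which is why one sits in the middle here.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
  if (argc != 2) {
    fprintf(stderr, "usage: %s file\n", argv[0]);
    return 1;
  }
  int fd = open(argv[1], O_RDONLY);
  if (fd < 0) { perror("open"); return 1; }

  int p[2];
  if (pipe(p) < 0) { perror("pipe"); return 1; }

  for (;;) {
    // file -> pipe: the kernel hands the pipe references to the file's
    // page cache pages instead of copying the bytes
    ssize_t n = splice(fd, NULL, p[1], NULL, 65536, 0);
    if (n <= 0) break;  // 0 means end of file, negative means error
    // pipe -> stdout: drain what was just queued
    while (n > 0) {
      ssize_t m = splice(p[0], NULL, STDOUT_FILENO, NULL, n, 0);
      if (m <= 0) { perror("splice"); return 1; }
      n -= m;
    }
  }
  return 0;
}
// ./splice_cat foo >copy

That page-cache handoff is the efficiency trick that Dirty Pipe turns into a write primitive.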

This bug allows any user to write data to any file they have read access to. That’s bad. There are a couple of caveats. First, the attacker needs read access to both the file and the path leading to it; on my system, that means /root/.ssh/authorized_keys can’t be tampered with. Next, this vulnerability can’t change the size of a file. It can only overwrite existing data, and that overwrite can’t include the first byte of the file. At first glance this blunts the severity quite a bit, but there’s another approach that a slightly more sophisticated actor could use: modifying setuid binaries. If you can change binaries, you can introduce your own vulnerability. In short? This vulnerability gives escalation to root to anyone with an interactive shell.

Edit: Thanks to Gravis for pointing out the missing information here. DirtyPipe was introduced as part of a refactor in kernel 5.8, and fixed in 5.16.11 and 5.15.25. LTS kernel 5.10.102 has also received the backported fix.

More Lapsus$ News

This story keeps giving, and there are a bunch of developments to cover. First, part of the Nvidia leak that has been released is an old signing key. Rather than being a useless proof of the hack, this has proven to be a potent tool for attackers. Why? Windows honors drivers signed with expired signatures. And already, the leaked key has been used in attacks.

The next turn in this story is that Samsung has been breached by the same group, and nearly 200 GB of that data has been released. Keys and source code are part of the release, so watch out for more developments as researchers and malware authors work through that trove.

Bits and Bytes

There’s a rather hilarious occurrence, one that also manages to be extremely annoying, when a Google ad manages to trigger Google Assistant on the very device that plays the ad. Leave it to security researchers to find a way to weaponize this quirk of modern devices. Approach #1 is a malicious radio station. It’s just like listening to your favorite tech podcast when the host accidentally says “Alexa”. Some of the details have been patched, but smart devices can still be vulnerable in some cases.

Google has announced their acquisition of Mandiant, one of the larger cybersecurity firms. This is the group that led the investigation of both the SolarWinds and Colonial Pipeline attacks. Google’s plan is to put this new subsidiary in their cloud division, in order to bulk up their security offerings. Read more at Google’s announcement.

There’s a series of bugs found in the VoipMonitor platform, the worst of which is an RCE chain. VoipMonitor is used to monitor VoIP traffic — it’s right there in the name — and keep track of jitter and call quality. The fixes have landed in 24.97, so if your organization uses this software, make sure it’s up-to-date!

A pair of Firefox vulnerabilities have been patched, both use-after-free problems. Present in both Firefox and Thunderbird, these flaws have been reported to be in use in the wild. The fixes landed in Firefox 97.0.2 and Thunderbird 91.6.2.

Adafruit has announced a breach of customer data. A data set built from real customer data was being used for employee training, and when the employee using it left the company, a GitHub repository containing that data was unintentionally made public. The data exposed was limited, and affected users are being notified via email.

And finally, [Troy Hunt] is back with a devilish idea — the kind of mischief we can get behind. Tired of spammers, he is creating a password purgatory to send these malcontents to. To accomplish their spammy ends, a spammer just needs to create an account. Except, of course, this is purgatory. No matter what password is chosen, it violates some odd password rule. You want to use P@ssw0rd!337? Sorry, this password must not contain two “s” characters in a row. P@5sw0rd!337? This password must include at least two capital letters. Ad nauseam. Will it dissuade any spammers from their annoying ways? Probably not, but we can look forward to a write-up on the fallout after the project has run for a while.
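If you want to picture the logic, here’s a tiny sketch of the idea. The rules and the complain() helper are my own invention for illustration, not [Troy Hunt]’s actual implementation.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

// Always find something wrong with the submitted password.
static const char *complain(const char *pw) {
  int caps = 0;
  for (const char *p = pw; *p; p++)
    if (isupper((unsigned char)*p)) caps++;

  if (strstr(pw, "ss"))
    return "Password must not contain two 's' characters in a row.";
  if (caps < 2)
    return "Password must include at least two capital letters.";
  if (strlen(pw) % 7 == 0)
    return "Password length must not be divisible by seven.";
  // Out of prewritten rules? Purgatory invents one on the spot.
  return "Password must not reuse any character from a previous attempt.";
}

int main(void) {
  const char *attempts[] = { "P@ssw0rd!337", "P@5sw0rd!337", "P@5Sw0RD!337" };
  for (size_t i = 0; i < sizeof(attempts) / sizeof(attempts[0]); i++)
    printf("%s -> %s\n", attempts[i], complain(attempts[i]));
  return 0;
}

Every attempt earns a rejection, even the one that satisfies all of the written rules.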

26 thoughts on “This Week In Security: DDoS Techniques, Dirty Pipe, And Lapsus$ Continued”

  1. “This bug allows any user to write data to any file they have read access on. … the user must have write access to the file to be tampered with.”

    Perhaps you meant to reiterate that they need *read* access, rather than contradicting yourself?

  2. If the intrusion prevention systems do not track the TCP connections, isn’t it trivial to bypass their protection? Just write your TCP / HTTP requests one byte at a time. Or do they block any connection doing that as well, such as SSH and telnet?

  3. It should be noted that the Linux flaw only affects versions starting at 5.8 and was fixed in 5.16.11, 5.15.25 and 5.10.102. RHEL 8 uses kernel 4.18.0 (with backported security patches), which means businesses are largely unaffected.

  4. I have for a long time wondered why there seem to be so many services that return large, nuanced error messages to incoming requests from new connections that don’t even introduce themselves correctly. This seems like a waste of both processing power and network bandwidth.

    Secondly, can’t ISPs unspoof incorrect packets?
    After all, the ISP will have a more or less direct/dedicated connection to their customer, they know the customer is on X IP address, and every packet header contains that info, else the ISP can’t route the packet to where it should go. If the customer suddenly sends a packet stating that it is Y IP address in its header, then this is obviously incorrect, since only X IP address exists on that connection.

    Connections with more than 1 IP address would need to match a list of correct IPs for the connection, and connections with an arbitrary number of IPs behind them (like a connection between ISPs) wouldn’t do this anti-spoofing check, since the ISP can’t know. But this anti-spoofing would likely make a lot of botnets a lot less effective.

    Proxy servers wouldn’t be affected, nor would VPNs, or anything else. Since when does one need to send a packet claiming an IP other than the one one actually has? (I can see this as useful internally on a network for testing, but when is it applicable when leaving the local network?)

    Though, I can see how an ISP might argue that unspoofing packets would require a bit of processing power and add a tiny delay. However, one can counter that argument by just asking what percentage of their traffic is just DDoS in transit, and how much of that has been amplified.

    After all, the code needed to unspoof a packet wouldn’t be that big.
    All one has to do is compare the sender IP to the known correct IP of the connection: if it matches, one sends it along; if it doesn’t, one corrects it (or drops it if the connection has more than 1 IP associated with it).

    1. They don’t really “unspoof” source addresses; they just drop traffic when the source address isn’t known to be valid, but:

      – Every residential ISP that cares at all about this issue enables “DHCP binding,” which restricts the source addresses that the customer connection is allowed to use (and, via ARP traffic, claim for delivery) to only those that have been assigned in a DHCP lease.
      – Commercial providers that assign multiple static addresses, if they care about this problem, will configure some filtering rules to accomplish the same thing in the absence of DHCP. They might even segment away customer links from each other, just to make sure customers don’t hijack other customers.
      – Big carriers, if they care, enable reverse-path filtering, which checks that if a packet with source address A arrives on interface X, then there must be some route to address A involving interface X, or else the packet is dropped. This is a little more limited at the core of the internet though: the internet has so many redundant paths that virtually all X can reach A (unless A is not valid at all).

      The weakness in all of these methods lies in the phrase “if they care about this,” and in all cases where you see a spoofed packet in the wild, that packet came from a host on an ISP that doesn’t care. If that ISP then does deploy the source filtering that they should have had all along, then the spoofers will just switch to a new ISP.

      1. The “if they care about this” should probably be tested…

        Like if person (1) knows the IP of a friend (2) and sets up port forwarding so that they can see the incoming data if any is delivered. If 1 then sends a spoofed packet to some random website or the like, then 2 should get the data. If they don’t, the ISP or higher dropped the packet. If 1 however receives the data, then the ISP unspoofed it.

        However, doing a test like this in practice would likely require a lot of network nerds working together for what is effectively a somewhat pointless test. Unless it comes with some interesting data as a result.

        But as suspected, there are already ways to actually handle spoofed packets.

        I wonder how many DDoS attacks would be hampered by detecting spoofed packets. It would make some amplification attacks untenable.

        1. This kind of test doesn’t even need reflection to do: get a server with a public address, run a packet capture on its interface, send packets (e.g. pings) to that server from several locations on the ‘net using a variety of source addresses, and see which ones make it through to show up on the packet capture. That can even give some hint as to where the filters are deployed, by studying how much the source address has to be changed before the filter kicks in.

          DDoS attacks are indeed hampered by RPF, though. Plenty of spoofed packets get eaten by the filters. The problem is just that some attack botnets are so vast that they, merely by sheer number, find enough paths where spoofing is possible that they can overwhelm their target even with that handicap.

          I do wish there was something more to be done about ISPs that don’t deploy filters, though. Something like a “wall of shame” might be fun, but of course internet miscreants would love such a list.

          1. That was the idea of doing the test: to see if one’s ISP is lazy or not.
            And if they are, simply send them a message saying that they should filter.

            If all ISPs filtered out spoofed packets, then that route couldn’t be used for DDoS amplification. And since spoofing the origin of a packet has no practical purpose, it really shouldn’t affect anyone except (D)DoS attackers.

            For a better internet ISPs should filter.

            Although, some botnets are sufficiently large that their raw bandwidth without amplification is sufficiently disruptive.

            One could also look at the services that allow for amplification of a request. As I stated in the first post, no service should return a packet larger than what it initially received until it has finished the handshake with whoever is requesting contact.

            If a spoofed packet comes in saying hello, then one’s service will say hello back to whoever supposedly “sent” the packet. But since it was spoofed, that other system will get a random response for something it never initiated itself, and this would then be ignored. And since the handshake is never completed, the service won’t send any large amount of data, effectively making the amplification factor 1 or below. This would then make the service undesirable for botnets to use.

            But I have yet to see a tutorial on making TCP/UDP connections really talk much about this, or about how to avoid aiding botnets in general. So I don’t find it that surprising that a lot of services allow for quite generous amplification at times.

          2. Also for phone companies. This tech exists for IP addresses. Apply it to phones and spammers will die out. In the US there is a monetary fine for spamming. Extend the fine to phone companies who refuse to implement it, or refuse to blacklist foreign providers who allow spam. And allow US phone companies to blacklist foreign phone providers if they have proof of spamming.

            Wall of shame is a good idea for the phones. People care about spam calls.

    1. To be fair, I think a lot of people have accidentally written one instead of the other a few times.

      Just like your and you’re get mixed up.
      Or then and than.
      Then there are though and through, which are also easy to mix up.

      English has plenty of these subtle traps.

  5. I stumbled across a case of Alexa activating herself after just having read about it last week, I think.

    In this case, I asked Alexa to do something with the lights, but she must have misheard and instead asked about activating some sort of home assistant (can’t remember the name). She proceeded to describe its features, which included the line “for example you can say Alexa, …..”, at which point it glitched and went back to the beginning of its speech. Only to glitch again the next time it got to “Alexa…”. The first couple of times I thought, err, Alexa’s broken; then the third time I paid more attention and realised she was triggering herself ad infinitum. So I went to record her seemingly recurring blunder, when the 4th time she managed to ignore herself and finish whatever she was saying :)

  6. I suppose I’m the weirdo who doesn’t let ads or Assistant run on his devices?

    Although you can feel the noose tighten: when putting in my Samsung Buds, an auto-install of ‘Samsung Wear’ proceeds, which then activates notifications about “Google Assistant”.

    The worst part is that Apple showed these actions aren’t punished, Google follows suit, and Microsoft Windows P R O has auto-reinstalling TikTok.

    Then FBmeta releases the HMD900: “release the head straps please” . . . “I Can’t do that Dave”


