This Week In Security: TunnelVision, Scarecrows, And Poutine

There’s a clever “new” attack against VPNs, called TunnelVision, done by researchers at Leviathan Security. To explain why we put “new” in quotation marks, I’ll just share my note-to-self on this one written before reading the write-up: “Doesn’t using a more specific DHCP route do this already?” And indeed, that’s the secret here: in routing, the more specific route wins. I could not have told you that DHCP option 121 is used to set extra static routes, so that part was new to me. So let’s break this down a bit, for those that haven’t spent the last 20 years thinking about DHCP, networking, and VPNs.

So up first, a route is a collection of values that instruct your computer how to reach a given IP address, and the set of routes on a computer is the routing table. On one of my machines, the (slightly simplified) routing table looks like:

# ip route
default via 10.0.1.1 dev eth0
10.0.1.0/24 dev eth0

The first line there is the default route, where “default” is a short-hand for 0.0.0.0/0. That indicates a network using the Classless Inter-Domain Routing (CIDR) notation. When the Internet was first developed, it was segmented into networks using network classes A, B, and C. The problem there was that the world was limited to just over 2.1 million networks on the Internet, which has since proven to be not nearly enough. CIDR came along, eliminated the classes, and gave us subnets instead.

In CIDR notation, the value after the slash is commonly called the netmask, and indicates how many bits are dedicated to the network identifier, with the remaining bits identifying individual addresses on that network. Put more simply, the bigger the number after the slash, the fewer usable IP addresses on the network. In the context of a route, the IP address here is going to refer to a network identifier, and the whole CIDR string identifies that network and its size.

Back to my routing table, the two routes are a bit different. The first one uses the “via” term to indicate we use a gateway to reach the indicated network. That doesn’t make any sense on its own, as the only route that matches the 10.0.1.1 gateway address is that very same default route. The second route saves the day, indicating that the 10.0.1.0/24 network is directly reachable out the eth0 device. This works because the more specific route, the one with the bigger netmask value, takes precedence.
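
To make “most specific wins” concrete, here’s a minimal sketch in Python of the selection logic (not how the kernel actually implements it): among all routes whose network contains the destination, the one with the longest prefix is used.

import ipaddress

# A toy routing table mirroring the one above.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "default via 10.0.1.1 dev eth0"),
    (ipaddress.ip_network("10.0.1.0/24"), "dev eth0 (directly connected)"),
]

def pick_route(destination):
    """Return the most specific route containing the destination address."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, desc) for net, desc in routes if dest in net]
    # The longest prefix (the biggest number after the slash) wins.
    return max(matches, key=lambda route: route[0].prefixlen)

print(pick_route("10.0.1.57"))  # matches both routes, the /24 wins
print(pick_route("8.8.8.8"))    # only the default route matches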

The next piece to understand is DHCP, the Dynamic Host Configuration Protocol. That’s the way most machines get an IP address from the local network. DHCP not only assigns IP addresses, but it also sets additional information via numeric options. Option 1 is the subnet mask, option 6 advertises DNS servers, and option 3 sets the local router IP. That router is then generally used to construct the default route on the connecting machine — 0.0.0.0/0 via router_IP.

Remember the problem with the gateway IP address belonging to the default network? There’s a similar issue with VPNs. If you want all traffic to flow over the VPN device, tun0, how does the VPN traffic get routed across the Internet to the VPN server? And how does the VPN deal with the existence of the default route set by DHCP? By leaving those routes in place, and adding more specific routes. That’s usually 0.0.0.0/1 and 128.0.0.0/1, neatly slicing the entire Internet into two networks, and routing both through the VPN. Those two routes are more specific than the default route, while the router-provided routes stay in place to keep the VPN’s own connection to its server online.
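
A quick sanity check of that trick, using Python’s ipaddress module: the two /1 networks together cover exactly the same address space as the default route, and each has a longer prefix, so they always win.

import ipaddress

default = ipaddress.ip_network("0.0.0.0/0")
vpn_halves = [ipaddress.ip_network("0.0.0.0/1"),
              ipaddress.ip_network("128.0.0.0/1")]

# Together the two halves cover every IPv4 address...
assert sum(half.num_addresses for half in vpn_halves) == default.num_addresses
# ...and each is more specific than the default route, so they take precedence.
assert all(half.prefixlen > default.prefixlen for half in vpn_halves)
print("every destination matches a /1 route, which outranks the /0 default")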

And now enter TunnelVision. The key here is DHCP option 121, which sets additional CIDR notation routes. The very same trick a VPN uses to override the network’s default route can be used against it. Yep, DHCP can simply inform a client that networks 0.0.0.0/2, 64.0.0.0/2, 128.0.0.0/2, and 192.0.0.0/2 are routed through malicious_IP. You’d see it if you actually checked your routing table, but how often does anybody do that, when not working a problem?
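
For the curious, RFC 3442 packs each option 121 route as a prefix length, just the significant octets of the destination, and then a four-octet router address. Here’s a rough sketch of the payload a rogue DHCP server would send for the four /2 routes, with a placeholder standing in for the attacker’s router IP:

import ipaddress

def encode_option_121(routes):
    """Encode (destination_cidr, router_ip) pairs per RFC 3442."""
    out = bytearray()
    for destination, router in routes:
        net = ipaddress.ip_network(destination)
        significant_octets = (net.prefixlen + 7) // 8
        out.append(net.prefixlen)                               # prefix length
        out += net.network_address.packed[:significant_octets]  # destination
        out += ipaddress.ip_address(router).packed              # router to use
    return bytes(out)

malicious_ip = "192.0.2.66"  # placeholder attacker address (TEST-NET-1)
payload = encode_option_121([
    ("0.0.0.0/2", malicious_ip),
    ("64.0.0.0/2", malicious_ip),
    ("128.0.0.0/2", malicious_ip),
    ("192.0.0.0/2", malicious_ip),
])
print(payload.hex())  # 24 bytes: four routes of (1 + 1 + 4) bytes each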

There is a CVE assigned, CVE-2024-3661, but there’s an interesting question raised: Is this a vulnerability, and in which component? And what’s the right solution? To the first question, everything is basically working the way it is supposed to. The flaw is that some VPNs make the assumption that a /1 route is a bulletproof way to override the default route. The solution is a bit trickier. Continue reading “This Week In Security: TunnelVision, Scarecrows, And Poutine”

This Week In Security: Default Passwords, Lock Slapping, And Mastodown

The UK has the answer to all our IoT problems: banning bad default passwords. Additionally, the new UK law requires device makers to provide contact info for vulnerability disclosures, as well as a requirement to advertise vulnerability fix schedules. Is this going to help the security of routers, cameras, and other devices? Maybe a bit.

I would argue that default passwords are in themselves the problem, and complexity requirements only nominally help security. Why? Because a good default password becomes worthless once the password, or the algorithm that generates it, leaks. Let’s lay out some scenarios here. First is the static default password. Manufacturer X makes device Y, and sets the devices to username/password admin/new_Complex_P@ssword1!. Those credentials make it onto a default password list, and any extra security is lost.

What about those devices that have a different, random-looking password for each device? Those use an algorithm to derive that password from the MAC address and/or serial number. That may help the situation, but the algorithm can be retrieved from the firmware, and most serial numbers are predictable in one way or another. This approach is better, but not a silver bullet.
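
As a purely hypothetical illustration (no real vendor’s scheme here), imagine a password derived by hashing the MAC address. The sticker on the bottom of the router looks random, but once the scheme leaks, the attacker’s search space is tiny:

import hashlib

def derive_password(mac: str) -> str:
    """Hypothetical vendor scheme: first ten hex chars of SHA-256 of the MAC."""
    return hashlib.sha256(mac.encode()).hexdigest()[:10]

# Looks unique and random on the sticker...
print(derive_password("AA:BB:CC:12:34:56"))

# ...but the OUI (here AA:BB:CC) is public per vendor, and the last three
# octets are often close to sequential, so once the algorithm leaks the
# whole space is at most 2**24 (about 16.7 million) cheap hash guesses.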

So what would a real solution to the password problem look like? How about no default password at all, and no device functionality until the new password passes a cracklib complexity and uniqueness check? I have seen a few devices that do exactly this. The requirement for a disclosure address is a great idea, which we’ve talked about before regarding the similar EU legislation.
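
Here’s a minimal sketch of what that first-boot flow could look like, assuming the cracklib Python bindings are installed (package names vary by distro); the surrounding function names are hypothetical:

import cracklib

def set_initial_password(candidate: str) -> bool:
    """Hypothetical first-boot setup: refuse to enable the device until a
    user-chosen password passes cracklib's dictionary and complexity checks."""
    try:
        # FascistCheck raises ValueError with a reason for weak passwords.
        cracklib.FascistCheck(candidate)
    except ValueError as reason:
        print(f"Rejected: {reason}")
        return False
    # Only now would the device store the hash and bring its services up.
    print("Password accepted, enabling device functionality.")
    return True

set_initial_password("password1")                   # rejected, dictionary-based
set_initial_password("plaid-otter-vexes-9-banjos")  # should pass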

Continue reading “This Week In Security: Default Passwords, Lock Slapping, And Mastodown”

This Week In Security: Cisco, Mitel, And AI False Flags

There’s a trend recently, of big-name security appliances getting used in state-sponsored attacks. It looks like Cisco is the latest victim, based on a report by their own Talos Intelligence.

This particular attack has a couple of components, and abuses a couple of vulnerabilities, though the odd thing about this one is that the initial access vector is still unknown. The first part of the infection is Line Dancer, a memory-only element that disables the system log, leaks the system config, captures packets, and more. A couple of the more devious steps are taken, like replacing the crash dump process with a reboot, to keep the in-memory malware secret. And finally, the resident malware installs a backdoor in the VPN service.

There is a second element, Line Runner, that uses a vulnerability to run arbitrary code from disk on startup, and then installs itself onto the device. That one is a long-term command and control element, and seems to only get installed on targeted devices. The Talos blog makes a rather vague mention of a 32-byte token that gets pattern-matched to determine an extra infection step. It may be that Line Runner only gets permanently installed on certain units, or that some other particularly fun action is taken.

Fixes for the vulnerabilities that allowed for persistence are available, but again, the initial vector is still unknown. There’s a vulnerability that just got fixed which could have served that role. CVE-2024-20295 allows an authenticated user with read-only privileges to perform a command injection as root. Proof of Concept code is out in the wild for this one, but so far there’s no evidence it was used in any attacks, including the one above. Continue reading “This Week In Security: Cisco, Mitel, And AI False Flags”
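
Cisco hasn’t published the vulnerable code, so as a generic illustration only (not the actual ASA implementation), command injection bugs in management interfaces usually boil down to user-supplied text being pasted into a shell command that runs as root:

import subprocess

def ping_from_webui_vulnerable(host: str) -> None:
    # Vulnerable pattern: the string goes straight to a root shell, so
    # host = "8.8.8.8; cat /etc/shadow" runs the attacker's command too.
    subprocess.run(f"ping -c 1 {host}", shell=True)

def ping_from_webui_safer(host: str) -> None:
    # Passing an argument list avoids the shell, so metacharacters in the
    # input are just handed to ping as a (bogus) hostname.
    subprocess.run(["ping", "-c", "1", host])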

This Week In Security: Putty Keys, Libarchive, And Palo Alto

It may be time to rotate some keys. The venerable PuTTY was updated to 0.81 this week, and the major fix was a change to how ecdsa-sha2-nistp521 signatures are generated. The problem was reported on the oss-security mailing list, and it’s quite serious, though thankfully with somewhat narrow coverage.

The PuTTY page on the vulnerability has the full details. To understand what’s going on, we need to briefly cover ECDSA, nonces, and elliptic curve crypto. All cryptography depends on one-way functions. In the case of RSA, it’s multiplying large primes together. The multiplication is easy, but given just the final result, it’s extremely difficult to find the two factors. DSA uses a similar problem, the discrete logarithm problem: raising a number to a given exponent modulo a large prime. Computing the result is easy, but recovering the exponent from it is not.
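
To get a feel for “easy one way, hard the other,” here’s a toy sketch with numbers far too small for real cryptography: the forward direction is a single pow() call, while going backwards means something like a brute-force search that becomes hopeless as the modulus grows.

# Toy discrete logarithm demo, purely illustrative.
p = 2**61 - 1            # a Mersenne prime, far too small for real use
g = 5                    # base
secret_exponent = 123_456_789

# The easy direction: modular exponentiation.
public_value = pow(g, secret_exponent, p)

# The hard direction: recover the exponent from the result.
def brute_force_dlog(target, base, modulus, give_up_after=1_000_000):
    acc = 1
    for exponent in range(give_up_after):
        if acc == target:
            return exponent
        acc = (acc * base) % modulus
    return None  # gave up; the real exponent is about 123 times further out

print(brute_force_dlog(public_value, g, p))  # None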

Yet another cryptography primitive is the elliptic curve, which uses point multiplication as the one-way function. I’ve described it as a mathematical pinball, bouncing around inside the curve. It’s reasonably easy to compute the final point, but essentially impossible to trace the path back to the origin. Formally this is the Elliptic Curve Discrete Logarithm Problem, and it’s not considered to be quantum-resistant, either.

One of the complete schemes is ECDSA, which combines the DSA scheme with elliptic curves. Part of this calculation uses a nonce, denoted “k”, a number that is only used once. In ECDSA, k must be kept secret, and signing two different messages with the same nonce can lead to rapid exposure of the secret key.
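
The algebra behind that warning is worth seeing. Given two signatures (r, s1) and (r, s2) made with the same k over message hashes h1 and h2, anyone can solve for k and then for the private key. Here’s a sketch of just the modular arithmetic, with made-up values and a stand-in prime rather than a real curve:

# ECDSA nonce-reuse key recovery: the arithmetic only, no actual curve math.
n = 2**127 - 1            # a Mersenne prime standing in for the group order
d = 0x1CEB00DA            # the "secret" private key we'll pretend not to know
k = 0xBADC0FFEE           # the nonce, wrongly reused for two messages
h1, h2 = 0x11111111, 0x22222222  # hashes of two different messages
r = 0x1234567890ABCDEF    # stand-in for the x-coordinate of k*G

# The two signatures, as the signer computes them.
s1 = pow(k, -1, n) * (h1 + r * d) % n
s2 = pow(k, -1, n) * (h2 + r * d) % n

# The attacker's side: r repeats, so subtract the two signature equations.
k_recovered = (h1 - h2) * pow(s1 - s2, -1, n) % n
d_recovered = (s1 * k_recovered - h1) * pow(r, -1, n) % n

assert (k_recovered, d_recovered) == (k, d)
print(hex(d_recovered))   # the private key falls right out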

And now we get to PuTTY, which was written for Windows back before that OS had any good cryptographic randomness routines. As we’ve already mentioned, re-use of k, the nonce, is disastrous for DSA. So, PuTTY did something clever, and took the private key and the contents of the message to be signed, hashed those values together using SHA-512, then used modular reduction to cut the result down to the bit-length needed for the curve in use. The problem is 521-bit ECDSA, which needs a 521-bit k. The SHA-512 output is shorter than that, so the reduction does nothing, and the resulting k value always started with nine 0 bits. Continue reading “This Week In Security: Putty Keys, Libarchive, And Palo Alto”
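
The arithmetic of the bug is easy to check: SHA-512 can never produce more than 512 bits, so when its output is dropped straight into a 521-bit nonce, the top nine bits are always zero.

import hashlib

NISTP521_K_BITS = 521

digest = hashlib.sha512(b"private key and message material go here").digest()
k = int.from_bytes(digest, "big")

print(k.bit_length())                    # at most 512
print(NISTP521_K_BITS - k.bit_length())  # so at least 9 leading zero bits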

This Week In Security: BatBadBut, DLink, And Your TV Too

So first up, we have BatBadBut, a pun based on the vulnerability being “about batch files and bad, but not the worst.” It’s a weird interaction between how Windows uses cmd.exe to execute batch files and how argument splitting and character escaping normally works, plus what is apparently a documentation flaw in the Windows API.

When starting a process, even on Windows, the new executable is handed a set of arguments to parse. In Linux and friends, that is a pre-split list of arguments, the argv array. On Windows, it’s a single string, left up to the program to handle. The convention is to follow the same behavior as Linux, but the cmd.exe binary is a bit different. It uses the caret ^ symbol instead of the backslash \ to escape special symbols, among other differences. The Rust devs took a look and decided that there are some cases where a given string just can’t be made safe for cmd.exe, and opted to just throw an error when a string meets that criterion.

And that brings us to the big questions. Whose fault is it, and how bad is it? I think there’s some shared blame here. The Microsoft documentation on CreateProcess() strongly suggests that it won’t execute a batch file without cmd.exe being explicitly called. On the other hand, this is established behavior, and scripting languages on Windows have to play the game by Microsoft’s rules. And the possible problem space is fairly narrow: calling a batch file with untrusted arguments.
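
The risky pattern is easy to show, though this is a hypothetical illustration of the shape of the problem rather than any specific runtime’s bypass string (those are in the BatBadBut write-up). The helper below simply refuses arguments that cmd.exe could reinterpret:

import subprocess

CMD_METACHARACTERS = set('&|<>^%!"')

def run_batch_carefully(script: str, *args: str) -> None:
    """Run a .bat file, refusing arguments cmd.exe could reinterpret.

    Even when arguments are passed as a list, Windows ultimately hands a
    batch file to cmd.exe, whose quoting rules (caret escapes, %VAR%
    expansion) differ from normal argv splitting, so crafted arguments can
    smuggle in extra commands. Rejecting metacharacters is the blunt fix."""
    for arg in args:
        if CMD_METACHARACTERS & set(arg):
            raise ValueError(f"refusing to pass {arg!r} to {script}")
    subprocess.run([script, *args], check=True)

# A stand-in attacker-controlled value; real bypass strings vary per runtime.
run_batch_carefully("deploy.bat", "&whoami")  # raises instead of running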

Almost all of the languages with this quirk have either released patches or documentation updates about the issue. There is a notable outlier, as the Java language will not receive a fix, with the maintainers not deeming it a vulnerability. It’s rather ironic, given that Java is probably the language where this problem is most likely to show up in the wild. Continue reading “This Week In Security: BatBadBut, DLink, And Your TV Too”

This Week In Security: XZ, ATT, And Letters Of Marque

The xz backdoor is naturally still the top story of the week. If you need a refresher, see our previous coverage. As expected, some very talented reverse engineers have gone to work on the code, and we have a much better idea of what the injected payload does.

One of the first findings to note is that the backdoor doesn’t allow a user to log in over SSH. Instead, when an SSH request is signed with the right authentication key, one of the certificate fields is decoded and executed via a system() call. And this makes perfect sense. An SSH login leaves an audit trail, while this backdoor is obviously intended to be silent and secret.

It’s interesting to note that this code made use of both autotools macros and GNU ifunc, or Indirect FUNCtions. That’s the nifty feature where a binary can include different versions of a function, each optimized for a different processor instruction set. The right version of the function gets called at runtime. Or in this case, the malicious version of that function gets hooked into execution by a malicious library. Continue reading “This Week In Security: XZ, ATT, And Letters Of Marque”

This Week In Security: Peering Through The Wall, Apple’s GoFetch, And SHA-256

The Linux command wall is a hold-over from the way Unix machines used to be used. It’s an abbreviation of Write to ALL, and it was first included in AT&T Unix, way back in 1975. wall is a tool that a sysadmin can use to send a message to the terminal session of all logged-in users. So far nothing too exciting from a security perspective. Where things get a bit more interesting is the consideration of ANSI escape codes. Those are the control codes that move the cursor around on the screen, also inherited from the olden days of terminals.

The modern wall binary is actually part of util-linux, rather than being a continuation of the old Unix codebase. On many systems, wall runs as a setgid binary, so the behavior of the system binary really matters. It’s accepted that wall shouldn’t be able to send control codes, and when processing a message specified via standard input, those control codes get rejected by the fputs_careful() function. But when a message is passed in on the command line, as an argument, that function call is skipped.

This allows any user that can send wall messages to also send ANSI control codes. Is that really a security problem? There are two scenarios where it could be. The first is that some terminals support writing to the system clipboard via command codes. The other, more creative issue, is that the output from running a binary could be overwritten with arbitrary text. Text like:
Sorry, try again.
[sudo] password for jbennett:
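
For a feel of how that overwrite works, here’s a harmless local sketch: plain ANSI codes that move the cursor up and clear a line are all it takes to repaint earlier output (in the actual attack those codes would arrive via wall rather than being printed locally).

import time

CURSOR_UP = "\x1b[1A"      # move the cursor up one line
CLEAR_LINE = "\x1b[2K\r"   # erase that line and return to column 0

print("some perfectly normal command output")
time.sleep(1)
# Repaint the line above with something else entirely.
print(CURSOR_UP + CLEAR_LINE + "[sudo] password for jbennett: ", end="", flush=True)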

You may have questions. Like, how would an attacker know when such a command would be appropriate? And how would this attacker capture a password that has been entered this way? The simple answer is by watching the list of running processes and the system log. Many systems have a command-not-found function, which will print the failing command to the system log. If that failing command is actually a password, then it’s right there for the taking. Now, you may think this is a very narrow attack surface that’s not going to be terribly useful in real-world usage. And that’s probably pretty accurate. It is a really fascinating idea to think through, and definitely worth getting fixed. Continue reading “This Week In Security: Peering Through The Wall, Apple’s GoFetch, And SHA-256”