NAME:WRECK is a collection of vulnerabilities in DNS implementations, discovered by Forescout and JSOF Research. This body of research can be seen as a continuation of Ripple20 and AMNESIA:33, as it builds on a class of vulnerability discovered in other network stacks: problems with DNS message compression.
Their PDF whitepaper contains a brief primer on the DNS message format, which is useful for understanding the class of problem. In such a message, a DNS name is encoded as a series of length-prefixed labels, with the full name ending in a null byte. So in a DNS request, Hackaday.com would be represented as [0x08]Hackaday[0x03]com[0x00]. The dots are replaced by these length values, and it makes for an easily parsed format.
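That length-prefixed format is simple enough to sketch in a few lines of Python (the helper name here is my own, for illustration):

```python
def encode_dns_name(name: str) -> bytes:
    """Encode a dotted hostname as DNS length-prefixed labels."""
    out = b""
    for label in name.split("."):
        if not 1 <= len(label) <= 63:  # each label is 1 to 63 bytes
            raise ValueError(f"bad label: {label!r}")
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"  # the full name ends in a null byte

print(encode_dns_name("Hackaday.com"))  # b'\x08Hackaday\x03com\x00'
```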
Very early on, it was decided that continually repeating the same host names in a DNS message was a waste of space, so a compression scheme was devised. DNS compression takes advantage of the maximum label length of 63 characters. Since 63 fits in six bits, the top two bits of a valid length byte are always zero. Because that combination can never appear in a real length, a length byte starting with binary “11” is instead used to point to a previously occurring domain name. The 14 bits that follow this two-bit flag are known as a compression pointer, and represent a byte offset from the beginning of the message. The DNS message parser pulls the intended value from that location, and then continues parsing.
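The two-byte pointer format can be illustrated with a little bit-twiddling in Python (the function name is mine; offset 12 in the example is simply where the first name lands, right after the fixed 12-byte DNS header):

```python
def decode_pointer(b1: int, b2: int) -> int:
    """Extract the 14-bit byte offset from a two-byte compression pointer."""
    if b1 & 0xC0 != 0xC0:  # top two bits must be binary 11 to mark a pointer
        raise ValueError("not a compression pointer")
    return ((b1 & 0x3F) << 8) | b2  # low 6 bits of b1, then all 8 bits of b2

# 0xC0 0x0C points back to byte offset 12, the first name after the DNS header
print(decode_pointer(0xC0, 0x0C))  # 12
```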
The problems found were generally based around improper validation. For example, the NetX stack doesn’t check whether a compression pointer points at itself. That scenario leads to a tight infinite loop, a classic DoS attack. Other systems don’t properly validate the location being referenced, copying data past the end of the allocated buffer, which can lead to remote code execution (RCE). FreeBSD has this issue, but because it’s tied to DHCP packets, the vulnerability can only be exploited by a device on the local network. While looking for message compression issues, the researchers also found a handful of vulnerabilities in DNS response parsing that aren’t directly related to compression, most notably an RCE in Siemens’ Nucleus Net stack.
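The fixes all come down to validating before following a pointer. A hardened name parser, sketched here in Python (the function shape, jump limit, and error messages are my own, not taken from any of the patched stacks), checks bounds, requires pointers to reference earlier bytes, and caps the number of jumps:

```python
def read_name(msg: bytes, offset: int, max_jumps: int = 16) -> tuple[str, int]:
    """Parse a possibly-compressed DNS name, rejecting malicious pointers."""
    labels = []
    end = None   # where parsing resumes after the first pointer
    jumps = 0
    while True:
        if offset >= len(msg):
            raise ValueError("name runs past end of message")
        length = msg[offset]
        if length & 0xC0 == 0xC0:  # compression pointer
            if offset + 1 >= len(msg):
                raise ValueError("truncated compression pointer")
            target = ((length & 0x3F) << 8) | msg[offset + 1]
            if target >= offset:   # catches self-pointing loops (the NetX bug)
                raise ValueError("pointer must reference earlier data")
            jumps += 1
            if jumps > max_jumps:  # bounds total work even for pointer chains
                raise ValueError("too many compression jumps")
            if end is None:
                end = offset + 2
            offset = target
        elif length == 0:
            return ".".join(labels), (end if end is not None else offset + 1)
        else:
            if length > 63:
                raise ValueError("reserved label type")
            if offset + 1 + length > len(msg):  # the buffer-overread case
                raise ValueError("label runs past end of message")
            labels.append(msg[offset + 1:offset + 1 + length].decode("ascii"))
            offset += 1 + length
```

Feeding this a message whose second name is just `\xc0` pointing at itself raises an error instead of looping forever, which is exactly the check NetX was missing.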
Another round of browser updates is upon us, and there are a few interesting notes. In addition to the normal security fixes, Firefox has opted to remove support for opening FTP links in the browser. The reasoning seems twofold. First, one less protocol to support means one less attack surface to worry about. The second stated reason is that this move allows Firefox to drop support for an unencrypted protocol.
Google Chrome has had an interesting few weeks, with a pair of bugs being announced on Twitter. In each case, the bugs were called 0-days, but that’s not precisely true. A 0-day is a bug that is being exploited in the wild, or publicly disclosed, before the software vendor is aware of it.
What seems to have happened here is that researchers discovered the bugs and privately reported them to Google. Google pushed the fixes to their V8 engine and, displaying some otherwise very good security practice, wrote a test case for each and pushed it to their test suite. Their public test suite. Yes, Google themselves leaked these vulnerabilities before fixing them in Chrome, by writing and publishing a PoC. Ouch.
WireGuard, FreeBSD, and pfSense
There has been a slow-moving trainwreck that concluded about a month ago, but I was reminded of it this week, and it’s worth covering. Ars has done a great job covering the story. To start with, pfSense is a popular FreeBSD-based router distro. It’s sponsored and supported by Netgate, who maintain a great community edition as well as selling hardware with the commercial version. Netgate also employs at least one of the FreeBSD developers. To be clear, Netgate seems like a great outfit, but as we’ll see, a couple of bad calls have landed them in hot water.
It was decided that WireGuard would be a great addition to pfSense, and that the best way to accomplish that goal was to contract a developer to add a WireGuard driver to the FreeBSD kernel. The effort went sideways almost as soon as [Matthew Macy] got started, when he turned down assistance from [Jason Donenfeld], who just happens to be the author of the WireGuard protocol and lead developer of the official implementation. It got worse when the FreeBSD port was completed and the code checked in without proper review. This got Donenfeld’s attention, and after taking a look at the implementation, he started a 10-day code sprint along with a couple of other developers to try to fix the worst of the problems before FreeBSD’s 13.0 final release. While the extra attention certainly improved the code, the kerfuffle attracted the attention of the FreeBSD security team, who made the call to pull the code before release, giving plenty of time to fully fix it.
We talked to Donenfeld this week on FLOSS Weekly, and this story came up. Check the link and jump to 37:40 to catch his comments on the matter. The situation seems to have turned into something of a turf war, and while the right thing was done in the FreeBSD kernel itself, it seems that pfSense shipped a release with the broken and vulnerable code. If nothing else, this is a lesson in how the best of intentions can go very wrong without sufficient review and oversight. Be sure to check out the official Netgate response to hear the other side of the story.
Signal Hacking the Hacker
There are a few personalities whose names, when mentioned, tell you it’s going to be a good story. Alongside people like Cliff Stoll and Kevin Mitnick, I’d suggest Moxie Marlinspike should make that list. He’s been involved in security for years, and is most recently known for his work on Signal. Signal, of course, is an end-to-end encrypted messaging application that has gained a big following over the years. It’s used by journalists, researchers, political dissidents, and criminals.
With a userbase that interesting, you can imagine how many people are interested in trying to break Signal’s security. One of those interested parties is Cellebrite, a company that specializes in offensive and forensic tools for government and law enforcement. While Cellebrite can’t read Signal messages over the network, they do produce a forensic kit that can pull messages off a phone, given physical access to it. Through presumably devious methods, Moxie procured one of their kits, and did a full analysis of the device. I don’t know exactly how he got his hands on it, but the least credible explanation (and one that is not meant to be taken seriously) is that it “fell off a truck”.
By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me. As I got closer, the dull enterprise typeface slowly came into focus: Cellebrite.
Moxie found some particularly interesting things in the included software, like libraries that were multiple years out of date, not to mention Apple libraries that were probably being illegally distributed. Those out-of-date libraries contain quite the collection of vulnerabilities, and included in Moxie’s post is a demo of the Cellebrite software getting exploited when trying to read a file off a phone. The post ends with a very tongue-in-cheek, thinly veiled warning about “uninteresting” files that will be stored by Signal, and rotated over time. The unspoken promise is that these files are traps, and will launch exploits on any machine running the Cellebrite software that tries to pull Signal data off a phone.
This threat isn’t simply to annoy Cellebrite, it’s to render useless any Signal data recovered via Cellebrite tools. If Cellebrite is known to be vulnerable to this sort of compromise, any data it recovers would be automatically suspect, and potentially inadmissible in a trial. It’s a clever strategy, and time will tell if it bears the fruit intended.
Project Zero Rule Changes
Google’s Project Zero has made a name for themselves primarily for two things: first, they do some amazing security research; second, they tell you when they will release vulnerability details to the public, and they stand by that no matter what.
That’s why, when they announce a change to that policy, it’s something of a newsworthy surprise. To be fair, it wasn’t a big change. The old policy was 90 days, with a possible 14-day grace period if a fix was forthcoming. Vulnerability details were released at 90 days, unless the grace period was in play, in which case details became public upon release of the fix. The new policy adds a 30-day adoption period: so long as a fix is released within the 90-day (or 104-day, grace period included) window, the vulnerability release now happens 30 days after the fix, for a potential maximum of 134 days after private disclosure.