There was something of a mystery this week, with the c.root-servers.net root DNS server falling out of sync with its 12 siblings. That’s odd in itself, as these are the 13 servers that keep DNS working for the whole Internet. And yes, that’s a bit of a simplification: none of the 13 entities actually runs a single server, and the C “server” is in reality 12 different machines. The intent is for all those hundreds of servers around the world to serve the same DNS information, but over several days this week, the “C” servers just stopped pulling updates.
The most amusing/worrying part of this story is how long it took for the problem to be discovered and addressed. One researcher cracked a ha-ha-only-serious sort of joke that he had reported the problem to Cogent, the owners of the “C” servers, but they didn’t “seem to understand that they manage a root server”. The problem first started on Saturday, and wasn’t noticed until Tuesday, when the servers were three days behind. Updates started trickling in late Tuesday or early Wednesday, and by the end of Wednesday, the servers were back in sync.
Cogent gave a statement that an “unrelated routing policy change” affected both the zone updates and the system that should have alerted them to the problem. It seems there might be room for an independent organization monitoring some of this critical Internet infrastructure.
ANSI Injection One
On to vulnerabilities, there were a pair of interesting ANSI escape sequence injection flaws discovered this week. ANSI escape codes are strings sent to the terminal that don’t get directly written to the screen, but instead instruct the terminal how to write to the screen.
Just for example, to get green text on the terminal, you can run:
printf 'Hello \033[32mTHIS IS GREEN\033[0m\007'
The first vulnerability was in WinRAR, in the handling of the comments field of a RAR file. You may already see where this is going, but the problem is that ANSI escape sequences were blindly passed through as part of a comment, when doing something like listing the contents of an archive. This would be particularly useful for overwriting the displayed name of the file to be extracted, hiding an executable or even a path traversal attack. It’s worth noting that the rar and unrar utilities had similar problems, and have since patched them.
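To get a feel for why that matters, here’s a minimal sketch in plain C (not WinRAR’s code, and not the actual exploit) of how an escape sequence buried in attacker-controlled text can rewrite what the user sees:

#include <stdio.h>

int main(void) {
    /* Attacker-controlled string: after the real name is printed, a
     * carriage return plus the "erase line" escape (ESC[2K) wipes the
     * line, and a harmless-looking name is printed in its place. */
    const char *name = "evil.exe\r\033[2KExtracting: photo.jpg";
    printf("Extracting: %s\n", name); /* terminal shows "Extracting: photo.jpg" */
    return 0;
}

The same trick works anywhere untrusted text is echoed to a terminal without filtering, which is exactly what the comment handling got wrong.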
ANSI Injection Two
The second ANSI injection is a bit trickier. On the Mac, terminals like iTerm2 can register as the default handler for URIs, like x-man-page://. The issue here is that some of those URIs aren’t necessarily safe, like the man link above, which supports the -P pager option. That flag specifies which paging utility to use to show multiple pages of text, like less, more, etc. Opening such a link from a browser will at least show a warning before launch. ANSI codes let an attacker be sneakier, hiding the full text inside an in-terminal clickable link. The terminal won’t warn the user about what they’re about to do, so it’s instant execution on click. Clever.
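The clickable-link part relies on the OSC 8 hyperlink escape that iTerm2 and many other terminals support: the text shown on screen and the URI it points at are completely independent. A harmless sketch in C (the URI here is just the benign x-man-page example from above, without any dangerous flags):

#include <stdio.h>

int main(void) {
    /* OSC 8 hyperlink: ESC ] 8 ; ; URI BEL  visible-text  ESC ] 8 ; ; BEL
     * The user only sees "view the ls manual", but clicking hands the
     * underlying URI to whatever app is registered for that scheme. */
    printf("\033]8;;x-man-page://ls\007view the ls manual\033]8;;\007\n");
    return 0;
}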
QNAPping At The Wheel
QNAP has had its share of problems over the years. The fine folks at Watchtowr decided to pitch in and try to find a few more, and then do a responsible disclosure to try to fix them the right way. And they didn’t disappoint. The unofficial audit found fifteen issues, but this write-up focuses on CVE-2024-27130, an unauthenticated overflow leading to Remote Code Execution (RCE).
Given the history of vulnerabilities, this shouldn’t be a big surprise, but the source of QNAP OS is a mess. The underpinnings are a Linux system, but the web interface on top of that is a tangle of a custom web server written in C, CGI scripts also written in C, strange leftover code bits in languages like PHP, and at least one code snippet that looks suspiciously like a backdoor.
And that’s all before we get to the real vulnerability. The cgi-bin/filemanager/share.cgi endpoint segfaults when provided with a valid “ssid” and an overlong file name. Inside the vulnerable code, it’s a simple strcpy() call that copies an arbitrary, user-provided string into a fixed-length buffer. Write past the end of it, and you overwrite local variables, and then the return address, too. And because of how returns work, you also get to set some registers, like r0, the traditional first argument register. So… what if you just set the return address to the system() function, and put a pointer to your command string in r0? It’s pretty much that easy, except a real exploit would also need to overcome Address Space Layout Randomization (ASLR). Watchtowr researchers opted to leave that step out, to hopefully give QNAP users a few extra days before attacks happen in the wild.
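The bug class itself is as old as C. A hypothetical sketch of the pattern (made-up names, not QNAP’s actual code), along with the bounds-checked version that avoids it:

#include <string.h>

/* Hypothetical sketch of the bug class, not QNAP's actual code. */
void handle_name(const char *user_supplied)
{
    char name[128];              /* fixed-size stack buffer */
    strcpy(name, user_supplied); /* no length check: anything longer than 127
                                    bytes spills over the saved registers and
                                    return address on the stack */
}

/* The boring fix: copy at most what the buffer can hold. */
void handle_name_safe(const char *user_supplied)
{
    char name[128];
    strncpy(name, user_supplied, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0';
}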
Boost Got Audited, Too
And in a win for the Open Source way, the Boost C++ library came through an audit with mostly flying colors. The most severe finding was a CRLF injection in HTTP headers, and even that only ranked as medium severity. There were four low-severity flaws, and two more that rank only as informational. For the breadth of code that Boost covers, that seems pretty impressive. The entire report is available.
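For anyone unfamiliar with the class, CRLF injection happens when attacker-controlled text lands in an HTTP header with its “\r\n” line terminators intact, letting the attacker append headers of their own. A minimal illustration in C (a conceptual sketch, not Boost’s actual code):

#include <stdio.h>

int main(void) {
    /* If "value" comes from an attacker and the "\r\n" isn't stripped,
     * one intended header becomes two, with the second header chosen
     * by the attacker (here, an injected Set-Cookie). */
    const char *value = "abc123\r\nSet-Cookie: session=attacker";
    printf("X-Request-Id: %s\r\n", value);
    return 0;
}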
Where’d that come from?
The Justice AV Solution Viewer is an interesting new target for malware. It was discovered that the official javs.com website was hosting a backdoored installer for this software. The installer was signed with another valid signing key, and included an fffmpeg.exe binary that gets up to no good on install.
The malware then proceeds to steal authentication cookies and passwords. As this software is primarily used in courtrooms, it’s unclear what the exact motivation is. One possibility is that the viewer software is used by lawyers outside the courtroom, and a law office could be a very interesting target. For any infected computers, the recommendation is to re-image, and then do a mass password rotation to invalidate any stolen credentials.
Phishing Fire Drills
[Matt Linton], a “Chaos Specialist” at Google, has some thoughts about phishing, specifically the style of phishing tests that routinely get aimed at users at larger companies. The TL;DR here is that phishing tests are a bad idea, and we should collectively stop running them. A powerful argument he makes is that the federally mandated phishing tests require existing anti-phishing protections to be disabled, so a real attack is guaranteed not to look like the tests. And the data bears this out: phishing tests are measurably counterproductive.
His suggestion is to stop doing phishing tests, and start doing phishing drills. Just an email to remind users that phishing is a thing, with links to more information, and instructions on what to do when the real thing comes along. And just for fun, take a look at Google’s slick phishing quiz, and see how you score. Let us know in the comments!
Bits and Bytes
It’s time again to update your GitLab installs. There’s a handful of medium-severity bugs, as well as one high-severity issue, fixed with this round of updates. That last one is a weakness in the GitLab VS Code editor that can enable Cross-Site Scripting (XSS) attacks. It’s unclear whether that results in information exfiltration, full account compromise, or whether the leaked information could eventually lead to compromise. Regardless, it’s worth pulling out your console and running the update.
LastPass has finally fixed one of its longstanding weak points, and is now encrypting URLs in your secure vault. When the service first launched, URLs were deemed too computationally expensive to encrypt. In the handful of security breaches at LastPass since then, it’s become very clear that leaving URLs unencrypted was a terrible choice, as it gave away that much more information about users. Good for LastPass for continuing to work to right the ship.
And finally, you should go check out the FLOSS Weekly interview from earlier this week! We interviewed François Proulx, and talked about Poutine, a project from Boost Security that scans code bases for vulnerable CI pipelines. If you work with GitHub Actions or GitLab pipelines, it’s worth checking out!
Nice try HAD, I ain’t clicking the google phishing quiz link without verification
While I agree that passing typical phishing tests is not a good metric for determining whether someone will not fall for a real phishing attack, someone who routinely falls for those phishing tests will absolutely fall for a real one. If you have users falling for the crappy phishing tests regularly, they absolutely should be trained and then eventually fired if they keep failing, and in that sense these tests are useful.
At my company, clicking on a link in a phishing test initiates a mandatory HR training.
“Just an email to remind users that phishing is a thing, with links to more information, and instructions on what to do when the real thing comes along”
I’m not a cyber security expert (not even an IT worker or hobbyist). On the other hand, if you work in an office where you receive 10 or more emails per day and each requires you to perform some action, the mail from IT will never be a priority, as it is not productive. My wife works in a company where she is the only one reading the weekly bulletins from IT (that is one page written with a lot of whitespace). No one even opens them. You might think that in the health industry those “medical people” would show more interest. At my work the electronics team does the same. Even more: the company prepared a special training app and you need to pass a test to receive a certificate. In the end people mostly just run through the material and guess the answers. One of them, two days after the training, installed a suspicious VPN app on a company computer to watch YouTube.
” start doing phishing drills. Just an email to remind users that phishing is a thing”
This is not a drill. According to the Cambridge Dictionary, a drill is: “an activity that practises a particular skill and often involves repeating the same thing several times”. Nobody sends memos once in a while that fire is a thing and calls that a drill.
I don’t like the quiz. The protocol should be the same whether it’s a phishing or a real email: go to the website on your own and verify the information. I refuse to follow a link in a legit email. If my account has a problem I will go there on my own the usual way and check it out.
Matter of fact, links in the email client should not even work for most users. They should get their own secure app for accessing files.
GMail doesn’t care about phishing. I have a gmail account, and the main source of phishing scams I see is other gmail addresses. If Google can’t filter or flag obvious misuse of their own email addresses, they sure as hell can’t be trusted to educate users on the finer points of phishing detection.