This Week In Security: DNS Oops, Novel C2s, And The Scam Becomes Real

Something rather significant happened on the Internet back in May, and it seems that someone only noticed it on September 3rd. [Youfu Zhang] dropped a note on one of the Mozilla security mailing lists, pointing out that there was a certificate issued by Fina for 1.1.1.1. That IP address may sound familiar, and you may have questions.

First off, yes, TLS certificates can be issued for IP addresses, and you can even get one for your own IP address via Let’s Encrypt. And second, 1.1.1.1 sounds familiar because it’s Cloudflare’s public DNS resolver. On that address, Cloudflare notably makes use of DoH, a charming abbreviation for DNS over HTTPS. The last important detail is that Cloudflare didn’t request or authorize the certificate. Significant indeed.
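
Why is a rogue certificate for this particular address such a big deal? A DoH lookup is just an HTTPS request, and the TLS certificate is the only thing assuring a client that the machine answering as 1.1.1.1 is actually Cloudflare. Here’s a minimal sketch of a DoH query using the JSON flavor of the API, assuming Python and the requests library:

```python
# A minimal DoH lookup against Cloudflare's resolver, using the JSON
# API flavor. Endpoint and parameters follow Cloudflare's public docs.
import requests

resp = requests.get(
    "https://1.1.1.1/dns-query",
    params={"name": "hackaday.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```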

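It’s also easy to spot-check which CA issued the certificate your own connection to 1.1.1.1 presents, as in the standard-library-only sketch below. Keep in mind that a spot check only sees one vantage point; proper coverage means monitoring Certificate Transparency logs.

```python
# A minimal sketch: open a TLS connection to 1.1.1.1 and print the
# issuer of the certificate it presents. Standard library only.
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("1.1.1.1", 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname="1.1.1.1") as tls:
        cert = tls.getpeercert()

issuer = dict(pair[0] for pair in cert["issuer"])
print("Issuer:", issuer.get("organizationName"), issuer.get("commonName"))
```
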
This is a high-profile example of the major weakness of the TLS certificate system. There are over 300 trusted certificate authorities in the Microsoft Root Certificate Program, Financijska agencija (Fina) being one of them, and all it takes to compromise that system is for any one of those trusted roots to issue a bad certificate. That it took four months for anyone to discover and point out the problem isn’t great. Continue reading “This Week In Security: DNS Oops, Novel C2s, And The Scam Becomes Real”

This Week In Security: DEF CON Nonsense, Vibepwned, And 0-days

DEF CON happened just a few weeks ago, and it’s time to cover some of the interesting talks. This year, two talks in particular were notable for being controversial. Coincidentally, both came from Track 3. The first was Passkeys Pwned, a talk by SquareX about how the passkey process can be hijacked by malware.

[Dan Goodin] lays out both the details of Passkeys and why the work from SquareX isn’t the major vulnerability that they claim it is. First, what is a Passkey? Technically it’s a public/private keypair that is stored by the user’s browser. A unique keypair is generated for each new website, and the site stores the public key. To authenticate with the Passkey, the site generates a random challenge, the browser signs it with the private key, and the site verifies the signature against the stored public key. I stand by my earlier opinion that Passkeys are effectively just passwords, but with all the best practices mandated.
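
To make that flow concrete, here’s a toy sketch of the challenge/response core, assuming Python’s cryptography library and an Ed25519 keypair. Real WebAuthn layers origin binding, signature counters, attestation, and user verification on top, so treat this as the skeleton only:

```python
# A toy sketch of the keypair dance behind Passkeys; not the WebAuthn
# protocol itself, just the core idea of a per-site keypair plus a
# signed challenge.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the browser generates a per-site keypair, and the site
# stores only the public half.
private_key = Ed25519PrivateKey.generate()  # stays with the user
public_key = private_key.public_key()       # stored by the website

# Login: the site sends a random challenge, the browser signs it, and
# the site verifies the signature against the stored public key.
challenge = os.urandom(32)
signature = private_key.sign(challenge)

try:
    public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```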

So what is the claim presented at DEF CON? Malicious code running in the context of the browser tab can hijack the passkey process. In the demonstrated attack flow, a browser extension caused the Passkey login to fail and prompted the user to generate a new Passkey. This is an interesting observation and a clever attack against Passkeys, but it’s not a vulnerability in the Passkey spec. Or more accurately, it’s an accepted limitation of Passkeys: they cannot guarantee security in the presence of a compromised browser. Continue reading “This Week In Security: DEF CON Nonsense, Vibepwned, And 0-days”

This Week In Security: Anime Catgirls, Illegal AdBlock, And Disputed Research

You may have noticed the Anime Catgirls when trying to get to the Linux Kernel’s mailing list, or one of any number of other sites associated with Open Source projects. [Tavis Ormandy] had this question, too, and even wrote about it. So, what’s the deal with the catgirls?

The project is Anubis, a “Web AI Firewall Utility”. The intent is to block AI scrapers, as Anubis “weighs the soul” of incoming connections and blocks the bots you don’t want. Anubis uses the user agent string and other indicators to determine what an incoming connection is, but the most obvious check is the in-browser hashing. Anubis puts a challenge string in the HTTP response header, and JavaScript running in the browser calculates a second string to append to this challenge. The goal is to find an appended string such that the first few bytes of the SHA-256 hash of the combined string are zero.
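
In other words, this is a proof-of-work scheme in the Hashcash mold. A solver takes only a handful of lines in any language; here’s a rough Python sketch, with the difficulty and string encoding assumed for illustration rather than taken from Anubis itself:

```python
# A rough sketch of an Anubis-style proof-of-work solver: append a
# nonce to the server's challenge until the SHA-256 hash starts with
# enough zeros. Difficulty and encoding here are assumptions.
import hashlib
from itertools import count

def solve(challenge: str, difficulty: int = 4) -> int:
    target = "0" * difficulty  # required leading zero hex digits
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

print("nonce:", solve("example-challenge-string"))
```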

[Tavis] makes a compelling case that this hashing is security theatre: it makes things appear more secure without actually improving the situation. It’s only fair to point out that his observation comes from annoyance, as his preferred method of accessing the Linux kernel git repository and mailing list is now blocked by Anubis. But the economics of compute costs clearly demonstrate that this SHA-256 hashing approach will only be effective so long as AI companies don’t bother to add the 25 lines of C it took him to solve the challenge. The Anubis hashing challenge is literally security by obscurity.

Continue reading “This Week In Security: Anime Catgirls, Illegal AdBlock, And Disputed Research”

This Week In Security: The AI Hacker, FortMajeure, And Project Zero

One of the current hot topics is using LLMs for security research. Poor-quality reports written by LLMs have become the bane of vulnerability disclosure programs, but there is an equally interesting effort underway to put LLMs to work doing actually useful research. One such story comes from [Romy Haik] at ULTRARED, who set out to build an AI Hacker. This isn’t an over-eager newbie naively asking an AI to find vulnerabilities; [Romy] knows what he’s doing. We know this because he tells us plainly that the LLM-driven hacker failed spectacularly.

The plan was to build a multi-LLM orchestra, with a single AI sitting at the top that maintains state through the entire process. Multiple LLMs sit below that one, deciding what to do next, exactly how to approach the problem, which tools to use, and actually generating the commands for those tools. Then yet another AI takes the output and figures out whether the attack was successful. The tooling was assembled, and [Romy] set it loose on a few intentionally vulnerable VMs.
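
The write-up doesn’t include code, but the loop described looks roughly like the sketch below. Every name here is hypothetical rather than ULTRARED’s actual implementation, and the llm() stub returns canned strings so the sketch runs at all:

```python
# A heavily simplified sketch of a multi-LLM "orchestra" loop. A real
# system would call actual model APIs and sandbox command execution
# instead of using shell=True.
import subprocess

def llm(role: str, prompt: str) -> str:
    # Placeholder model call, keyed by which orchestra member is asking.
    canned = {
        "planner": "probe the login form for injection",
        "operator": "echo simulated probe of /login",
        "judge": "no exploitable behavior observed",
    }
    return canned[role]

def attack_loop(target: str, max_steps: int = 5) -> list:
    history = []  # state maintained by the top-level orchestrator
    for _ in range(max_steps):
        plan = llm("planner", f"target={target} history={history}")
        command = llm("operator", f"emit one shell command for: {plan}")
        output = subprocess.run(command, shell=True, capture_output=True,
                                text=True).stdout
        verdict = llm("judge", f"attack output:\n{output}")
        history.append((plan, command, verdict))
        if "success" in verdict.lower():
            break  # the judge believes the attack landed
    return history

for step in attack_loop("10.0.0.5"):
    print(step)
```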

As we hinted above, the results were fascinating but dismal. The LLM stack successfully found one Remote Code Execution (RCE) flaw, one SQL injection, and three Cross-Site Scripting (XSS) flaws. The whole post is, sneakily, sort of an advertisement for ULTRARED’s actual automated scanner, which uses more conventional methods to scan for vulnerabilities. But that makes for a useful comparison: the conventional scanner found nearly 100 vulnerabilities among the same collection of targets.

The AI did what you’d expect, finding plenty of false positives. Ask an AI to describe a vulnerability, and it will gladly do so, no real vulnerability required. But the real problem was the multitude of times that the AI stack did demonstrate a problem and failed to realize it. [Romy] has thoughts on why this attempt failed, and two points stand out. The first is that while the LLM can be creative in crafting attacks, it’s really terrible at accurately analyzing the results. The second is one of the most important things to keep in mind about today’s AIs: the model doesn’t actually want to find a vulnerability. One of the marks of a security researcher is the near obsession with chasing down a great score. Continue reading “This Week In Security: The AI Hacker, FortMajeure, And Project Zero”

This Week In Security: Perplexity V Cloudflare, GreedyBear, And HashiCorp

The Internet is fighting over whether robots.txt applies to AI agents. It all started when Cloudflare published a blog post detailing what the company was seeing from Perplexity crawlers. Automated web crawling is, of course, part of how the modern Internet works, and almost immediately after the first web crawlers were written, one of them managed to DoS (Denial of Service) a web site back in 1994. That incident is why the robots.txt file was first designed.

Make no mistake, robots.txt on its own is nothing more than a polite request for someone else on the Internet not to index your site. The more aggressive approach is to add rules to a Web Application Firewall (WAF) that detect and block a web crawler based on its user-agent string and source IP address. Cloudflare makes the case that Perplexity is not only intentionally ignoring robots.txt, but also actively disguising its web-crawling traffic by using IP addresses outside its normal range for these requests.
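
For reference, the polite request itself is about as simple as config files get. A hypothetical robots.txt like the following, using the PerplexityBot user agent that Perplexity documents for its crawler, asks the bot to skip an entire site:

```
# Hypothetical example; nothing technically enforces this request.
User-agent: PerplexityBot
Disallow: /
```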

This isn’t the first time Perplexity has landed in hot water over its web-scraping and AI-training endeavors. But Perplexity has published a blog post, explaining that this is different!

And there’s genuinely an interesting argument to be made: robots.txt is aimed at indexing and AI-training traffic, and agentic AI requests are a different category entirely. Put simply, Perplexity’s bots ignore robots.txt when a live user asks them to fetch a page. Is that bad behavior, or exactly what we should expect? This question will have to be settled as AI agents become more common.

Continue reading “This Week In Security: Perplexity V Cloudflare, GreedyBear, And HashiCorp”

This Week In Security: Spilling Tea, Rooting AIs, And Accusing Of Backdoors

The Tea app has had a rough week. It’s not an unfamiliar story: Unsecured Firebase databases were left exposed to the Internet without any authentication. What makes this story particularly troubling is the nature of the app, and the resulting data that was spilled.

Tea is a “dating safety” application strictly for women. To enforce this, creating an account requires an ID verification process in which prospective users share their government-issued photo IDs with the platform. And that brings us to the first Firebase leak: 59 GB of photo IDs and other images belonging to a large subset of users. And this was not the only problem.

A second database was discovered, and this one contained private messages between users. As one might imagine, given the subject matter of the app, many of these DMs contain sensitive details. This may not have been an unsecured Firebase database, but rather a separate problem, where any valid API key could access any DM from any user.

This is the sort of security failing that is difficult for a company to recover from. And while it should be a lesson to users not to trust their sensitive messages to closed-source apps with questionable security guarantees, history suggests that few will learn it, and we’ll be covering yet another train wreck of similar magnitude in a few months.

Continue reading “This Week In Security: Spilling Tea, Rooting AIs, And Accusing Of Backdoors”

This Week In Security: Sharepoint, Initramfs, And More

There was a disturbance in the enterprise security world, and it started at Pwn2Own Berlin. [Khoa Dinh] and the team at Viettel Cyber Security discovered a pair of vulnerabilities in Microsoft’s SharePoint. They were demonstrated at the Berlin competition in May, and patched by Microsoft in this month’s Patch Tuesday.

The original exploit chain is interesting in itself. It lives in the SharePoint endpoint /_layouts/15/ToolPane.aspx. The code backing this endpoint has a complex authentication and validation check. Namely, if the incoming request isn’t authenticated, the code checks for a flag that is set true when the Referer header points to a sign-out page, and that header can be set arbitrarily by the requester. The DisplayMode value needs to be set to Edit, but that’s accessible via a simple URL parameter. The pagePath value, derived from the URL used in the call, needs to start with /_layouts/ and end with /ToolPane.aspx. That particular check seems like a slam dunk, given that we’re working with the ToolPane.aspx endpoint. But setting DisplayMode means appending a parameter to the URL, and hilariously, the pagePath string includes those parameters, so the ends-with check fails. The simple work-around is to append yet another parameter, foo=/ToolPane.aspx, so the URL once again ends with the right string.

Putting it together, this means a POST to /_layouts/15/ToolPane.aspx?DisplayMode=Edit&foo=/ToolPane.aspx with the Referer header set to /_layouts/SignOut.aspx. This approach bypasses authentication and allows a form parameter, MSOTlPn_DWP, to be specified. This must name a valid file on the target’s filesystem, in the _controltemplates/ directory, ending with .ascx. But it grants access to all of the internal controls on the SafeControls list.
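
To make the shape of that request concrete, here’s a minimal sketch assuming Python’s requests library. The host is hypothetical and the MSOTlPn_DWP value is deliberately elided, so this illustrates the published bypass structure rather than a working exploit; point tools like this only at systems you’re authorized to test.

```python
# A sketch of the bypass request shape described above, assembled from
# the published details. Illustrative only: hypothetical host, payload
# elided, and the underlying flaw has long since been patched.
import requests

target = "https://sharepoint.example.com"  # hypothetical lab host

resp = requests.post(
    f"{target}/_layouts/15/ToolPane.aspx",
    params={"DisplayMode": "Edit", "foo": "/ToolPane.aspx"},
    headers={"Referer": "/_layouts/SignOut.aspx"},
    data={"MSOTlPn_DWP": "..."},  # must name a control under _controltemplates/
    timeout=10,
)
print(resp.status_code)
```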

There’s an entire second half to [Khoa Dinh]’s write-up, detailing the discovery of a deserialization bug in one of those controls, which also uses a clever type-confusion style of attack. The end result was remote code execution on the SharePoint target with a single, rather simple request. Microsoft rolled out patches to fix the exploit chain. The problem is that Microsoft often opts to fix vulnerabilities with minimal code changes, without addressing the underlying code flaws. That appears to be what happened here, as the authentication bypass fix could be defeated simply by adding yet another parameter to the URL.

These bypasses were found in the wild on July 19th, and Microsoft quickly confirmed them. The next day, the 20th, Microsoft issued an emergency patch to address the bypasses. The live exploitation appears to be coming from a set of Chinese threat actors, with a post-exploitation emphasis on stealing data and maintaining access. There seem to be more than 400 compromised systems worldwide, some of them rather high profile.

Continue reading “This Week In Security: Sharepoint, Initramfs, And More”