This Week In Security: The Shai-Hulud Worm, ShadowLeak, And Inside The Great Firewall

Hardly a week goes by that there isn’t a story to cover about malware getting published to a repository. Last week it was millions of downloads on NPM, but this week it’s something much more concerning. Malware published on NPM is now looking for NPM tokens, and propagating to other NPM packages when found. Yes, it’s a worm, jumping from one NPM package to another, via installs on developer machines.

It does other things too, like grabbing all the secrets it can find when installed on a machine. If the compromised machine has access to a Github account, a new repo is created named Shai-Hulud, borrowed from the name of the sandworms from Dune. The collected secrets and machine info gets uploaded here, and a workflow also uploads any available GitHub secrets to the webhook.site domain.

How many packages are we talking about? At least 187, with some reports of over 500 packages compromised. The immediate attack has been contained, as NPM has worked to remove the compromised packages, and apparently has added filtering code that blocks the upload of compromised packages.

So far there hasn’t been an official statement on the worm from NPM or its parent companies, GitHub or Microsoft. Malicious packages uploaded to NPM is definitely nothing new. But this is the first time we’ve seen a worm that specializes in NPM packages. It’s not a good step for the trustworthiness of NPM or the direct package distribution model.

Continue reading “This Week In Security: The Shai-Hulud Worm, ShadowLeak, And Inside The Great Firewall”

Hackaday Links Column Banner

Hackaday Links: September 22, 2024

Thanks a lot, Elon. Or maybe not, depending on how this report that China used Starlink signals to detect low-observable targets pans out. There aren’t a lot of details, and we couldn’t find anything approximating a primary source, but it seems like the idea is based on forward scatter, which is when waves striking an object are deflected only a little bit. The test setup for this experiment was a ground-based receiver listening to the downlink signal from a Starlink satellite while a DJI Phantom 4 Pro drone was flown into the signal path. The drone was chosen because nobody had a spare F-22 or F-35 lying around, and its radar cross-section is about that of one of these stealth fighters. They claim that this passive detection method was able to make out details about the drone, but as with most reporting these days, this needs to be taken with an ample pinch of salt. Still, it’s an interesting development that may change things up in the stealth superiority field.

Continue reading “Hackaday Links: September 22, 2024”

Prompt Injection: An AI-Targeted Attack

For a brief window of time in the mid-2010s, a fairly common joke was to send voice commands to Alexa or other assistant devices over video. Late-night hosts and others would purposefully attempt to activate voice assistants like these en masse and get them to do ridiculous things. This isn’t quite as common of a gag anymore and was relatively harmless unless the voice assistant was set up to do something like automatically place Amazon orders, but now that much more powerful AI tools are coming online we’re seeing that joke taken to its logical conclusion: prompt-injection attacks. Continue reading “Prompt Injection: An AI-Targeted Attack”

What’s Old Is New Again: GPT-3 Prompt Injection Attack Affects AI

What do SQL injection attacks have in common with the nuances of GPT-3 prompting? More than one might think, it turns out.

Many security exploits hinge on getting user-supplied data incorrectly treated as instruction. With that in mind, read on to see [Simon Willison] explain how GPT-3 — a natural-language AI —  can be made to act incorrectly via what he’s calling prompt injection attacks.

This all started with a fascinating tweet from [Riley Goodside] demonstrating the ability to exploit GPT-3 prompts with malicious instructions that order the model to behave differently than one would expect.

Continue reading “What’s Old Is New Again: GPT-3 Prompt Injection Attack Affects AI”