Thanks a lot, Elon. Or maybe not, depending on how this report that China used Starlink signals to detect low-observable targets pans out. There aren’t a lot of details, and we couldn’t find anything approximating a primary source, but it seems like the idea is based on forward scatter, which is when waves striking an object are deflected only a little bit. The test setup for this experiment was a ground-based receiver listening to the downlink signal from a Starlink satellite while a DJI Phantom 4 Pro drone was flown into the signal path. The drone was chosen because nobody had a spare F-22 or F-35 lying around, and its radar cross-section is about that of one of these stealth fighters. They claim that this passive detection method was able to make out details about the drone, but as with most reporting these days, this needs to be taken with an ample pinch of salt. Still, it’s an interesting development that may change things up in the stealth superiority field.
prompt injection3 Articles
Prompt Injection: An AI-Targeted Attack
For a brief window of time in the mid-2010s, a fairly common joke was to send voice commands to Alexa or other assistant devices over video. Late-night hosts and others would purposefully attempt to activate voice assistants like these en masse and get them to do ridiculous things. This isn’t quite as common of a gag anymore and was relatively harmless unless the voice assistant was set up to do something like automatically place Amazon orders, but now that much more powerful AI tools are coming online we’re seeing that joke taken to its logical conclusion: prompt-injection attacks. Continue reading “Prompt Injection: An AI-Targeted Attack”
What’s Old Is New Again: GPT-3 Prompt Injection Attack Affects AI
What do SQL injection attacks have in common with the nuances of GPT-3 prompting? More than one might think, it turns out.
Many security exploits hinge on getting user-supplied data incorrectly treated as instruction. With that in mind, read on to see [Simon Willison] explain how GPT-3 — a natural-language AI — can be made to act incorrectly via what he’s calling prompt injection attacks.
This all started with a fascinating tweet from [Riley Goodside] demonstrating the ability to exploit GPT-3 prompts with malicious instructions that order the model to behave differently than one would expect.
Continue reading “What’s Old Is New Again: GPT-3 Prompt Injection Attack Affects AI”