It looks like we won’t have Cruise to kick around in this space anymore, with the news that General Motors is pulling the plug on its woe-beset robotaxi project. Cruise, which GM acquired in 2016, fielded autonomous vehicles in various test markets, but the fleet racked up enough high-profile mishaps (first item) for California regulators to shut down test programs in the state last year. The inevitable layoffs ensued, and GM is now killing off its efforts to build robotaxis to concentrate on incorporating the Cruise technology into its “Super Cruise” suite of driver-assistance features for its full line of cars and trucks. We feel like this might be a tacit admission that fully autonomous driving is just too hard a nut to crack profitably with current technology; after all, Super Cruise uses eye-tracking cameras to make sure the driver is paying attention to the road ahead when automation features are engaged. Basically, GM is admitting there still needs to be meat in the seat, at least for now.
An Animated Walkthrough Of How Large Language Models Work
If you wonder how Large Language Models (LLMs) work and aren’t afraid of getting a bit technical, don’t miss [Brendan Bycroft]’s LLM Visualization. It is an interactively animated, step-by-step walkthrough of a GPT large language model, complete with an animated and interactive 3D block diagram of everything going on under the hood. Check it out!
The demonstration walks through a simple task and shows every step. The task is this: using the nano-gpt model, take a sequence of six letters and put them into alphabetical order.
A GPT model is a highly complex prediction engine, so the whole process begins with tokenizing the input (breaking up words and assigning numerical values to the chunks) and ends with choosing an appropriate output from a list of probabilities. There are, of course, many more steps in between, and different ways to adjust the model’s behavior. All of these are made quite clear by [Brendan]’s process breakdown.
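To make that pipeline concrete, here is a minimal toy sketch of the tokenize-predict-choose loop. To be clear, this is not [Brendan]’s code: the six-letter vocabulary matches the sorting task above, but the “model” is reduced to a random stub standing in for the transformer layers.

```python
import numpy as np

# Toy sketch of the GPT pipeline: tokenize, get raw scores from the
# model, turn them into probabilities, pick an output token.

vocab = list("ABCDEF")                        # the six letters nano-gpt sorts
stoi = {ch: i for i, ch in enumerate(vocab)}  # tokenizer: chunk -> numerical value
itos = {i: ch for ch, i in stoi.items()}

def tokenize(text):
    """Break the input into chunks and assign each a numerical value."""
    return [stoi[ch] for ch in text]

def fake_model(tokens):
    """Stand-in for the transformer: one logit (raw score) per vocab entry."""
    rng = np.random.default_rng(seed=len(tokens))
    return rng.normal(size=len(vocab))

def softmax(logits):
    """Turn raw scores into the list of probabilities the output is chosen from."""
    exps = np.exp(logits - logits.max())      # subtract the max for numerical stability
    return exps / exps.sum()

tokens = tokenize("CBABFD")                   # step 1: tokenize the input
probs = softmax(fake_model(tokens))           # steps 2-3: predict, then normalize
print(itos[int(np.argmax(probs))], probs.round(3))  # step 4: pick the likeliest token
```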
We’ve previously covered how LLMs work, explained without math, which eschews the gritty technical details in favor of focusing on functionality, but it’s also nice to see an approach like this one, which embraces the technical elements of exactly what is going on.
We’ve also seen a much higher-level peek at how a modern AI model like Anthropic’s Claude works when it processes requests, extracting human-understandable concepts that illustrate what’s going on under the hood.
Playing Chess Against LLMs And The Mystery Of Instruct Models
At first glance, trying to play chess against a large language model (LLM) seems like a daft idea, as its weighted nodes have, at most, been trained on some chess-adjacent texts. It has no concept of board state or stratagems, nor even of what a ‘rook’ or ‘knight’ piece is. This daftness is indeed demonstrated by [Dynomight] in a recent blog post (Substack version), where the Stockfish chess engine is pitted against a range of LLMs, from a small Llama model to GPT-3.5. Although the outcomes (see featured image) are largely as you’d expect, there is one surprise: the gpt-3.5-turbo-instruct model, which proves quite capable of giving Stockfish a run for its money, albeit on Stockfish’s lower settings.
Each model was given the same query, telling it to be a chess grandmaster, to use standard notation, and to choose its next move. The stark difference between the instruct model and the others calls for investigation. OpenAI describes the instruct model as an ‘InstructGPT 3.5 class model’, which leads us to this page on OpenAI’s site and an associated 2022 paper describing how InstructGPT is effectively the standard GPT LLM, heavily fine-tuned using human feedback.
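The shape of such an experiment is easy to sketch. The snippet below is our own hypothetical reconstruction, not [Dynomight]’s actual harness: ask_llm() stands in for a call to gpt-3.5-turbo-instruct, while the python-chess library does the bookkeeping and rejects illegal or garbled moves.

```python
import chess  # pip install python-chess

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a gpt-3.5-turbo-instruct completion call."""
    return "e4"  # a real harness would return the model's suggested move

board = chess.Board()
prompt = (
    "You are a chess grandmaster. Using standard algebraic notation, "
    f"the position is: {board.fen()}. Give only your next move."
)
move_text = ask_llm(prompt)

try:
    board.push_san(move_text)   # parses standard notation; raises on illegal moves
    print("Model played:", move_text)
except ValueError:
    print("Illegal or unparseable move:", move_text)
```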
Continue reading “Playing Chess Against LLMs And The Mystery Of Instruct Models”
This Week In Security: Linux VMs, Real AI CVEs, And Backscatter TOR DoS
Steve Ballmer famously called Linux “viral”, with some not entirely coherent complaints about the OS. In a hilarious instance of life imitating art, Windows machines are now getting attacked through malicious Linux VM images distributed through phishing emails.
This approach seems intended to fool any anti-malware software that may be running. The VM includes the chisel tool, described as “a fast TCP/UDP tunnel, transported over HTTP, secured via SSH”. Now that’s an interesting protocol stack. Having a Linux VM right on a target network is an obvious advantage for an attacker. Since this sort of attack does require hardware virtualization support, it might be worth disabling the virtualization extensions in the BIOS on machines that don’t need them.
AI Finds Real CVE
We’ve talked about some rather unfortunate uses of AI, where aspiring security researchers asked an LLM to find vulnerabilities in a project like curl, and then completely wasted a maintainer’s time with the resulting bogus reports. We happened to interview Daniel Stenberg on FLOSS Weekly this week, and after he recounted this story, we mused that there might be a real opportunity to use LLMs to find vulnerabilities: as a way to direct fuzzing, combined with a good test suite.
And now, we have Google Project Zero bringing news of their Big Sleep LLM project finding a real-world vulnerability in SQLite. This tool was previously called Project Naptime, and while it’s not strictly a fuzzer, it does share some similarities. The main one is that both tools take their educated guesses and run that data through the real program code, to positively verify that there is a problem. With this proof of concept demonstrated, it’s sure to be replicated. It seems inevitable that someone will next try to get an LLM to not only find the vulnerability, but also propose an appropriate fix.
Continue reading “This Week In Security: Linux VMs, Real AI CVEs, And Backscatter TOR DoS”
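To illustrate the guess-and-verify idea, here is a strictly hypothetical sketch (our own, not Project Zero’s code): propose_inputs() stands in for the LLM, and ./target is a placeholder for the instrumented program under test.

```python
import subprocess

def propose_inputs(source_snippet: str) -> list[bytes]:
    """Hypothetical stand-in for an LLM asked to suggest bug-triggering inputs."""
    return [b"A" * 4096, b"\x00" * 64, b"%n%n%n%n"]

def triggers_crash(data: bytes) -> bool:
    """Run the candidate input through the real program; trust only real failures."""
    result = subprocess.run(
        ["./target"],             # placeholder for the program under test
        input=data,
        capture_output=True,
        timeout=5,
    )
    return result.returncode < 0  # negative means killed by a signal, e.g. SIGSEGV

for candidate in propose_inputs("...source code under review..."):
    if triggers_crash(candidate):
        print("Verified crasher:", candidate[:16])
```

The key design point, shared by fuzzers and Big Sleep alike, is that the model’s guesses are never trusted on their own; only inputs that demonstrably break the real code get reported.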
This Week In Security: Playing Tag, Hacking Cameras, And More
Wired has a fascinating story this week about the lengths Sophos has gone to over the last five years to track down a group of malicious but clever security researchers who were continually discovering vulnerabilities and then using those findings to attack real-world targets. Sophos believes this adversary to be a set of overlapping Chinese groups known as APT31, APT41, and Volt Typhoon.
The story is actually refreshing in its honesty, with Sophos freely admitting that their products, and security products from multiple other vendors, have been caught in the crosshairs of these attacks. And indeed, we’ve covered stories about these vulnerabilities over the past weeks and months right here in this column. The sneaky truth is that many of these security products actually have pretty severe security problems.
The issues at Sophos started with an infection of an informational computer at a subsidiary office. Sophos believes this was an information-gathering exercise that was a precursor to the wider campaign. That campaign used multiple 0-days to crack “tens of thousands of firewalls around the world”. Sophos rolled out fixes for those 0-days, and included just a bit of extra logging as an undocumented feature. That logging paid off, as Sophos’ team of researchers soon identified an early signal among the telemetry. This wasn’t merely the first device to be attacked, but was actually a test device used to develop the attack. The game was on.
Continue reading “This Week In Security: Playing Tag, Hacking Cameras, And More”
All System Prompts For Anthropic’s Claude, Revealed
For as long as AI Large Language Models have been around (well, for as long as modern ones have been accessible online, anyway), people have tried to coax the models into revealing their system prompts. The system prompt is essentially the model’s fundamental directives on what it should do and how it should act. Such healthy curiosity is rarely welcomed, however, and creative efforts at making a model cough up its instructions are frequently met with a figurative glare and a stern tapping of the Terms & Conditions sign.
Anthropic has bucked this trend by making the system prompts public for the web and mobile interfaces of all three incarnations of Claude. The prompt for Claude Opus (their flagship model) is well over 1,500 words long, with different sections specifically for handling text and images. The prompt does things like help ensure Claude communicates in a useful way, taking into account the current date and an awareness of its knowledge cut-off, or the date after which Claude has no knowledge of events. There’s some stylistic stuff in there as well, such as Claude being specifically told to avoid obsequious-sounding filler affirmations, like starting a response with any form of the word “Certainly.”
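If you use Claude through the API rather than the web interface, the published prompts make a handy reference, because there the system prompt is yours to set. Here’s a minimal sketch using Anthropic’s Python SDK; the system text below is our own illustrative stand-in, not the published prompt, and the model name may have changed by the time you read this.

```python
from anthropic import Anthropic  # pip install anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",  # check the docs for current model names
    max_tokens=256,
    # Our own illustrative system prompt, loosely in the spirit of the
    # published ones; this is NOT Anthropic's actual prompt text.
    system=(
        "You are a helpful assistant. Note your knowledge cut-off when it "
        "is relevant, and avoid filler affirmations such as 'Certainly'."
    ),
    messages=[{"role": "user", "content": "What does a system prompt do?"}],
)
print(response.content[0].text)
```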
Continue reading “All System Prompts For Anthropic’s Claude, Revealed”
Large Language Models On Small Computers
As technology progresses, we generally expect processing capabilities to scale up. Every year, we get more processor power, faster speeds, greater memory, and lower cost. However, we can also use improvements in software to get things running on what might otherwise be considered inadequate hardware. Taking this to the extreme, while large language models (LLMs) like GPT are running out of data to train on and having difficulty scaling up, [DaveBben] is experimenting with scaling down instead, running an LLM on the smallest computer that could reasonably run one.
Of course, some concessions have to be made to get an LLM running on underpowered hardware. In this case, the computer of choice is an ESP32, so the model was shrunk from the trillions of parameters of something like GPT-4, or even the hundreds of billions of GPT-3, down to only 260,000. The weights come from the tinyllamas checkpoint, and llama2.c is the implementation that [DaveBben] chose for this setup, as it can be streamlined to run a bit better on something like the ESP32. The specific chip is the ESP32-S3FH4R2, which was chosen for its large amount of RAM compared to other versions, since even this small model needs a minimum of 1 MB to run. It also has two cores, which will both work as hard as possible under (relatively) heavy loads like these, and the clock speed of the CPU can be maxed out at around 240 MHz.
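A quick back-of-the-envelope check (our own arithmetic, not [DaveBben]’s figures) shows why even this tiny model demands the RAM-heavy chip variant:

```python
# Why a 260K-parameter model still strains a microcontroller's memory.
params = 260_000
bytes_per_weight = 4                     # llama2.c stores weights as 32-bit floats

weights_mb = params * bytes_per_weight / 1024**2
print(f"Weights alone: {weights_mb:.2f} MB")   # about 0.99 MB, before activations

# The ESP32-S3 has 512 KB of on-chip SRAM; the S3FH4R2 variant adds 2 MB
# of PSRAM, which is what leaves room for the model plus working buffers.
```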
Admittedly, [DaveBben] is mostly doing this just to see if it can be done, since even the most powerful of ESP32 processors won’t be able to do much useful work with a large language model. It does turn out to be possible, and is somewhat impressive considering that, to put things in perspective, the ESP32 has about as much processing capability as a 486 or maybe an early Pentium chip. If you’re willing to devote a few more resources to an LLM, though, you can self-host it and use it in much the same way as an online model such as ChatGPT.