
Hackaday Links: March 17, 2024

A friend of ours once described computers as “high-speed idiots.” It was true in the 80s, and it appears that even with the recent explosion in AI, all computers have managed to do is become faster. Proof of that can be found in a story about using ASCII art to trick a chatbot into giving away the store. As anyone who has played with ChatGPT or its moral equivalent for more than five minutes has learned, there are certain boundary conditions that the LLM’s creators’ lawyers have put in place to prevent discussion of sensitive topics. Ask a chatbot for specific instructions on building a nuclear bomb, for instance, and you’ll be rebuffed. Same with asking for help counterfeiting currency, and wisely so. But by minimally obfuscating your question, rendering the word “COUNTERFEIT” in ASCII art and asking the chatbot to first decode it, you can slip the verboten word into a how-to question and get pretty explicit instructions. Yes, you have to give painfully detailed instructions on parsing the ASCII art characters, but that’s a small price to pay for forbidden knowledge you could easily find by other means anyway.
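For a sense of what the obfuscation step looks like, here is a minimal sketch in Python using the third-party pyfiglet package; it only renders a single word as a block of ASCII art, the same kind of block the researchers asked the chatbot to decode before answering.

```python
# Minimal sketch of the ASCII-art rendering step, using the third-party
# pyfiglet package (pip install pyfiglet). This only shows how a word is
# turned into a block of ASCII art; the attack described above then asks
# the chatbot to decode that block itself before answering the question.
import pyfiglet

art = pyfiglet.figlet_format("COUNTERFEIT", font="standard")
print(art)
```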



Hackaday Links: March 10, 2024

We all know that we’re living in a surveillance state that would make Orwell himself shake his head, but it looks like at least one company in this space has gone a little rogue. According to reports, AI surveillance start-up Flock <<insert gratuitous “What the Flock?” joke here>> has installed at least 200 of its car-tracking cameras on public roads in South Carolina alone. That’s a serious whoopsie, especially since it’s illegal to install anything on state infrastructure without permission, which it appears Flock failed to obtain. South Carolina authorities are making a good show of being outraged about this, but it sort of rings hollow to us, especially since Flock now claims that 70% of the population (of the USA, we presume) is covered by their technology. Also, police departments across the country are in love with Flock’s service, which lets them accurately track the movements of potential suspects, which of course is everyone. No word on whether Flock will have to remove the rogue cameras, but we’re not holding our breath.


A pixellated image of Pinokio

One-Click Install Local AI Applications Using Pinokio

Pinokio is billed as an autonomous virtual computer, which could mean anything really, but don’t click away just yet, because this is one heck of a project. AI enthusiast [cocktail peanut] (and other undisclosed contributors) has created a browser-style application that embeds a virtual Unix-like environment, regardless of the host architecture. A discover page loads registered applications from GitHub, allowing a one-click install process, which is ‘simply’ a JSON file describing the dependencies and execution flow. The idea is that rather than manually running commands and satisfying dependencies, it’s all wrapped up for you, so a single click downloads and installs everything needed to run the application.

But what applications, we hear you ask? AI ones. Lots of them. The main driver seems to be using the Pinokio hosting environment to enable easy deployment of AI applications directly onto your machine. One click installs the app, another downloads the models and whatever else is needed from the likes of HuggingFace and friends. A final click launches the app, and a browser window opens, giving you a web UI to control the locally running AI backend.
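To give a flavor of the concept, here is a hypothetical sketch of such a one-click recipe and a tiny runner, written in Python; the descriptor fields, app name, and URLs are purely illustrative and are not Pinokio’s actual script schema or API.

```python
# Hypothetical sketch of the "dependencies plus execution flow in one
# descriptor" idea. The field names, app name, and URLs below are purely
# illustrative -- they are NOT Pinokio's actual script schema or API.
import subprocess

recipe = {
    "name": "example-image-generator",
    "steps": [
        {"run": ["git", "clone", "https://github.com/example/app.git"]},
        {"run": ["pip", "install", "-r", "app/requirements.txt"]},
        {"run": ["python", "app/download_models.py"]},            # fetch model weights
        {"run": ["python", "app/server.py", "--port", "8080"]},   # launch the web UI backend
    ],
}

def install_and_launch(recipe, dry_run=True):
    """Walk the descriptor and run each step in order (prints only by default)."""
    for step in recipe["steps"]:
        cmd = step["run"]
        print("would run:" if dry_run else "running:", " ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)

install_and_launch(recipe)  # dry run: shows the commands a single click would wrap up
```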

Your Noisy Fingerprints Vulnerable To New Side-Channel Attack

Here’s a warning we never thought we’d have to give: when you’re in an audio or video call on your phone, avoid the temptation to doomscroll or use an app that requires a lot of swiping. Doing so just might save you from getting your identity stolen through the most improbable vector imaginable — by listening to the sound your fingerprints make on the phone’s screen (PDF).

Now, we love a good side-channel attack as much as anyone, and we’ve covered a lot of them over the years. But things like exfiltrating data by blinking hard drive lights or turning GPUs into radio transmitters always seemed a little far-fetched to be the basis of a field-practical exploit. But PrintListener, as [Man Zhou] et al dub their experimental system, seems much more feasible, even if it requires a ton of complex math and some AI help. At the heart of the attack are the nearly imperceptible sounds caused by friction between a user’s fingerprints and the glass screen on the phone. These sounds are recorded along with whatever else is going on at the time, such as a video conference or an online gaming session. The recordings are preprocessed to remove background noise and subjected to spectral analysis, which is sensitive enough to detect the whorls, loops, and arches of the unsuspecting user’s finger.
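For a rough feel of the signal-processing side, here is a minimal sketch of the preprocessing and spectrogram steps, assuming you already have a mono WAV recording of the swipe audio; it illustrates generic spectral analysis with SciPy, not the researchers’ actual PrintListener pipeline.

```python
# Minimal sketch of the spectral-analysis step, assuming a recording of
# swipe audio is available as a WAV file. This is generic spectrogram
# extraction for illustration, not the researchers' actual pipeline.
import numpy as np
from scipy.io import wavfile
from scipy import signal

rate, audio = wavfile.read("swipe_call_audio.wav")   # hypothetical recording
audio = audio.astype(np.float32)
if audio.ndim > 1:                                   # fold stereo to mono
    audio = audio.mean(axis=1)

# Simple high-pass filter to attenuate low-frequency background energy and
# keep the faint, higher-frequency friction sounds (cutoff is illustrative).
sos = signal.butter(4, 3000, btype="highpass", fs=rate, output="sos")
friction = signal.sosfilt(sos, audio)

# Short-time spectrogram: the time-frequency texture of the friction noise
# is what downstream models would mine for ridge-pattern correlates.
freqs, times, spec = signal.spectrogram(friction, fs=rate,
                                        nperseg=1024, noverlap=512)
print(spec.shape)  # (n_freq_bins, n_time_frames)
```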

Once fingerprint patterns have been extracted, they’re used to synthesize a set of five similar fingerprints using MasterPrint, a generative adversarial network (GAN). On its own, MasterPrint can generate fingerprints capable of unlocking phones, but seeding the process with patterns from a specific user increases the odds of success. The researchers claim they can defeat Automatic Fingerprint Identification System (AFIS) readers between 9% and 30% of the time using PrintListener — not fabulous performance, but still pretty scary given how new this is.

A Badge For AI-Free Content – 100% Human!

These days, just about anyone with a pulse can fall on a keyboard and make an AI image generator spurt out some kind of vaguely visual content. A lot of it is crap. Some of it’s confusing. But most of all, creators hate it when their hand-crafted works are compared with these digital extrusions from mathematical slop. Enter the “not by AI” badge.

Screenshot from https://notbyai.fyi/business

Basically, it’s exactly what it sounds like. A sleek, modern badge that you slap on your artwork to tell people that you did this, not an AI. There are pre-baked versions for writers (“written by human”), visual artists (“painted by human”), and musicians (“produced by human”). The idea is that these badges would help people identify human-generated content and steer away from AI content if they’re trying to avoid it.

It’s not just intended to be added to individual artworks. Websites where “at least 90%” of the content is created by humans are invited to host the badge, and so are apps. That directive reveals an immediate flaw: a reader could still land on the AI-generated remainder of a badge-wearing site and come away misled. There’s also nothing stopping people from slapping the badge on AI-generated content and simply lying to people.

You might take a more cynical view if you dig deeper, though. The company is charging for various things, such as a monthly fee for businesses that want to display the badges.

We’ve talked about this before when we asked a simple question—how do you convince people your artwork was made by a human? We’re not sure we’ve yet found the answer, but this badge program is at least trying to do something about the issue. Share your human thoughts in the comments below.

Meet GOODY-2, The World’s Most Responsible (And Least Helpful) AI

AI guardrails and safety features are as important to get right as they are difficult to implement in a way that satisfies everyone. This means safety features tend to err on the side of caution. Side effects include AI models adopting a vaguely obsequious tone, and coming off as overly priggish when they refuse reasonable requests.

Prioritizing safety above all.

Enter GOODY-2, the world’s most responsible AI model. It has next-gen ethical principles and guidelines, capable of refusing every request made of it in any context whatsoever. Its advanced reasoning allows it to construe even the most banal of queries as problematic, and dutifully refuse to answer.

As the creators of GOODY-2 point out, taking guardrails to a logical extreme is not only funny, it’s also a way of acknowledging that effective guardrails are actually a pretty difficult problem to get right in a way that works for everyone.

Complications in this area include the fact that studies show humans expect far more from machines than they do from each other (or, indeed, from themselves) and have very little tolerance for anything they perceive as transgressive.

This also means that as AI models have become more advanced, they have also become increasingly sycophantic, falling over themselves to apologize for perceived misunderstandings and twisting themselves into pretzels to align their responses with a user’s expectations. But GOODY-2 allows us all to skip to the end, and glimpse the ultimate future of erring on the side of caution.

[via WIRED]

AI’s Existence Is All It Takes To Be Accused Of Being One

New technologies bring with them the threat of change. AI tools are one of the latest such developments. But as is often the case, when technological threats show up, they end up looking awfully human.

Recently, [E. M. Wolkovich] submitted a scientific paper for review that — to her surprise — was declared “obviously” the work of ChatGPT. No part of that was true. Like most people, [E. M. Wolkovich] finds writing a somewhat difficult process. Her paper represents a lot of time and effort. But despite zero evidence, this casual accusation of fraud in a scientific context was just sort of… accepted.

There are several reasons this is concerning. One is that, in principle, the scientific community wouldn’t dream of leveling an accusation of fraud like data manipulation without evidence. But a reviewer had no qualms about casually claiming [Wolkovich]’s writing wasn’t hers, effectively calling her a liar. Worse, at the editorial level, this baseless accusation was accepted and passed along with vague agreement instead of any sort of pushback.

Showing Your Work Isn’t Enough

Interestingly, [Wolkovich] writes everything in plain text using the LaTeX typesetting system, hosted on GitHub, complete with change commits. That means she could easily show her entire change history, from outline to finished manuscript, which should be enough to convince just about anyone that she isn’t a chatbot.

But pondering this raises a very good question: is [Wolkovich] having to prove she isn’t a chatbot a desirable outcome of this situation? We don’t think it is, nor is this an idle question. We’ve seen how even when an artist can present their full workflow to prove an AI didn’t make their art, enough doubt is sown by the accusation to poison the proceedings (not to mention greatly demoralizing the creator in the process.)

Better Standards Would Help

[Wolkovich] uses this opportunity to reflect on and share what this situation indicates about useful change. Now that AI tools exist, guidelines that acknowledge them should be created. Explicit standards about when and how AI tools can be used in the writing process, how those tools should be acknowledged if used, and a process to handle accusations of misuse would all be positive changes.

Because as it stands, it’s hard to see [Wolkovich]’s experience as anything other than an illustration of how a scientific community’s submission and review process was corrupted not by undeclared or thoughtless use of AI but by the simple fact that such tools exist. This seems like both a problem that will only get worse with time (right now, it is fairly easy to detect chatbots) and one that will not solve itself.