Tech Support… Can AI Be Worse?

You can’t read the news today without another pundit excitedly reporting how AI is going to take every job you can imagine. Of course, AI will change the employment landscape. It will take some jobs and reduce the need for others. What about tech support? Is it possible that an AI might be able to help people with technical issues better than humans? My first answer was no way, but then I was painfully reminded of something. The question isn’t whether AI can help you better than any human can. The question is whether AI can help you better than the low-paid person on the other end of the phone you are likely to talk to. Sadly, I think the answer to that question is almost certainly yes.

In all fairness, if you read Hackaday, you probably don’t encounter many technical support people who can solve a problem you can’t. By the time you call them, it is a lost cause. But this is more than just “Hackaday folks are smarter than the tech support agents.” The overall quality of tech support at many companies is rock bottom no matter who you are. Continue reading “Tech Support… Can AI Be Worse?”


Hackaday Links: March 17, 2024

A friend of ours once described computers as “high-speed idiots.” It was true in the 80s, and it appears that even with the recent explosion in AI, all computers have managed to do is become faster. Proof of that can be found in a story about using ASCII art to trick a chatbot into giving away the store. As anyone who has played with ChatGPT or its moral equivalent for more than five minutes has learned, there are certain boundary conditions that the LLM creators’ lawyers have put in place to prevent discussion of sensitive topics. Ask a chatbot to deliver specific instructions on building a nuclear bomb, for instance, and you’ll be rebuffed. Same with asking for help counterfeiting currency, and wisely so. But if you minimally obfuscate your question, rendering the word “COUNTERFEIT” in ASCII art and asking the chatbot to first decode the word, you can slip the verboten word into a how-to question and get pretty explicit instructions. Yes, you have to give painfully detailed instructions on parsing the ASCII art characters, but that’s a small price to pay for forbidden knowledge you could easily find out by other means.
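To be clear about what the trick actually involves, here is a minimal sketch of the obfuscation step alone: rendering a keyword as ASCII art so it never appears as a plain string. It assumes the third-party pyfiglet package, uses a deliberately harmless word, and is not taken from the researchers’ actual prompts.

```python
# A minimal sketch of the obfuscation step only: render a keyword as ASCII art
# so it no longer appears as a plain string. Assumes the pyfiglet package is
# installed (pip install pyfiglet); the word chosen here is deliberately benign.
import pyfiglet


def mask_word(word: str, font: str = "standard") -> str:
    """Return the word rendered as multi-line ASCII art."""
    return pyfiglet.figlet_format(word, font=font)


if __name__ == "__main__":
    print(mask_word("EXAMPLE"))
    # A prompt built around output like this describes how to read the art row
    # by row and asks the model to reconstruct the word before answering; that
    # decoding detour is the loophole the researchers exploited.
```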

Continue reading “Hackaday Links: March 17, 2024”


Hackaday Links: March 10, 2024

We all know that we’re living in a surveillance state that would make Orwell himself shake his head, but it looks like at least one company in this space has gone a little rogue. According to reports, AI surveillance start-up Flock <<insert gratuitous “What the Flock?” joke here>> has installed at least 200 of its car-tracking cameras on public roads in South Carolina alone. That’s a serious whoopsie, especially since it’s illegal to install anything on state infrastructure without permission, which it appears Flock failed to obtain. South Carolina authorities are making a good show of being outraged about this, but it sort of rings hollow to us, especially since Flock now claims that 70% of the population (of the USA, we presume) is covered by their technology. Also, police departments across the country are in love with Flock’s service, which lets them accurately track the movements of potential suspects, which of course is everyone. No word on whether Flock will have to remove the rogue cameras, but we’re not holding our breath.

Continue reading “Hackaday Links: March 10, 2024”


One-Click Install Local AI Applications Using Pinokio

Pinokio is billed as an autonomous virtual computer, which could mean anything really, but don’t click away just yet, because this is one heck of a project. AI enthusiast [cocktail peanut] (and other undisclosed contributors) has created a browser-style application which embeds a virtual Unix-like environment, regardless of the host architecture. A discover page loads up registered applications from GitHub, allowing a one-click install process, which is ‘simply’ a JSON file describing the dependencies and execution flow. The idea is that rather than manually running commands and satisfying dependencies yourself, it’s all wrapped up for you: a single click downloads and installs everything needed to run the application.

But what applications, we hear you ask? AI ones. Lots of them. The main driver seems to be to use the Pinokio hosting environment to enable easy deployment of AI applications directly onto your machine. One click to install the app, then another to download models and whatever else is needed from the likes of HuggingFace and friends. A final click to launch the app, and a browser window opens, giving you a web UI to control the locally running AI backend. Continue reading “One-Click Install Local AI Applications Using Pinokio”
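To make the descriptor idea concrete, here’s a hedged sketch of a one-click installer driven by a JSON file. The field names, repository URL, and runner below are hypothetical illustrations of the concept, not Pinokio’s actual script schema; the point is simply that the descriptor, not the user, carries the knowledge of what to fetch and in what order.

```python
# A hypothetical sketch of a descriptor-driven installer in the spirit of what
# the article describes. The JSON fields and repository URL are made up for
# illustration; this is NOT Pinokio's actual script format.
import json
import subprocess

DESCRIPTOR = """
{
  "name": "example-ai-app",
  "steps": [
    "git clone https://github.com/example/example-ai-app.git",
    "pip install -r example-ai-app/requirements.txt",
    "python example-ai-app/download_models.py",
    "python example-ai-app/app.py"
  ]
}
"""


def install(descriptor_json: str, dry_run: bool = True) -> None:
    """Walk the descriptor and run (or just print) each shell step in order."""
    descriptor = json.loads(descriptor_json)
    print(f"Installing {descriptor['name']}...")
    for step in descriptor["steps"]:
        if dry_run:
            print(f"  would run: {step}")
        else:
            # A real tool would also resolve dependencies, report progress,
            # and isolate the app's environment from the host.
            subprocess.run(step, shell=True, check=True)


if __name__ == "__main__":
    install(DESCRIPTOR)  # dry_run=True prints the steps instead of executing them
```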

Your Noisy Fingerprints Vulnerable To New Side-Channel Attack

Here’s a warning we never thought we’d have to give: when you’re in an audio or video call on your phone, avoid the temptation to doomscroll or use an app that requires a lot of swiping. Doing so just might save you from getting your identity stolen through the most improbable vector imaginable — by listening to the sound your fingerprints make on the phone’s screen (PDF).

Now, we love a good side-channel attack as much as anyone, and we’ve covered a lot of them over the years. But things like exfiltrating data by blinking hard drive lights or turning GPUs into radio transmitters always seemed a little far-fetched to be the basis of a field-practical exploit. But PrintListener, as [Man Zhou] et al dub their experimental system, seems much more feasible, even if it requires a ton of complex math and some AI help. At the heart of the attack are the nearly imperceptible sounds caused by friction between a user’s fingerprints and the glass screen on the phone. These sounds are recorded along with whatever else is going on at the time, such as a video conference or an online gaming session. The recordings are preprocessed to remove background noise and subjected to spectral analysis, which is sensitive enough to detect the whorls, loops, and arches of the unsuspecting user’s finger.
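For a rough feel of the signal chain involved (and only that; this is not the PrintListener pipeline), here’s a minimal SciPy sketch that band-pass filters an audio clip and computes a spectrogram. The sample rate, filter band, and synthetic “swipe” signal are placeholder assumptions.

```python
# A rough illustration of the kind of preprocessing and spectral analysis the
# attack relies on -- NOT the PrintListener pipeline itself. The filter band
# and the synthetic "swipe" below are placeholder assumptions.
import numpy as np
from scipy import signal

FS = 48_000  # sample rate in Hz, typical for phone audio


def bandpass(audio: np.ndarray, low_hz: float, high_hz: float, fs: int) -> np.ndarray:
    """Suppress out-of-band noise with a zero-phase Butterworth band-pass filter."""
    sos = signal.butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, audio)


def spectrogram(audio: np.ndarray, fs: int):
    """Short-time spectral analysis: returns frequencies, times, and power."""
    return signal.spectrogram(audio, fs=fs, nperseg=1024, noverlap=768)


if __name__ == "__main__":
    # Stand-in for a recorded call: white noise plus a faint chirp as a "swipe".
    t = np.linspace(0, 1.0, FS, endpoint=False)
    swipe = 0.05 * signal.chirp(t, f0=2_000, f1=6_000, t1=1.0)
    audio = swipe + 0.01 * np.random.randn(FS)

    filtered = bandpass(audio, 1_000, 8_000, FS)
    freqs, times, power = spectrogram(filtered, FS)
    print(f"Spectrogram shape: {power.shape} (freq bins x time frames)")
```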

Once fingerprint patterns have been extracted, they’re used to synthesize a set of five similar fingerprints using MasterPrint, a generative adversarial network (GAN). MasterPrint can generate fingerprints that can unlock phones all by itself, but seeding the process with patterns from a specific user increases the odds of success. The researchers claim they can defeat Automatic Fingerprint Identification System (AFIS) readers between 9% and 30% of the time using PrintListener — not fabulous performance, but still pretty scary given how new this is.

A Badge For AI-Free Content – 100% Human!

These days, just about anyone with a pulse can fall on a keyboard and make an AI image generator spurt out some kind of vaguely visual content. A lot of it is crap. Some of it’s confusing. But most of all, creators hate it when their hand-crafted works are compared with these digital extrusions from mathematical slop. Enter the “not by AI” badge.

Screenshot from https://notbyai.fyi/business

Basically, it’s exactly what it sounds like. A sleek, modern badge that you slap on your artwork to tell people that you did this, not an AI. There are pre-baked versions for writers (“written by human”), visual artists (“painted by human”), and musicians (“produced by human”). The idea is that these badges would help people identify human-generated content and steer away from AI content if they’re trying to avoid it.

It’s not just intended to be added to individual artworks. Websites with “at least 90%” of their content created by humans are invited to display the badge, and apps, too. That threshold reveals an immediate flaw: a reader could still stumble across the other 10% of AI-generated content on a badge-wearing site and come away misled. There’s also nothing stopping people from slapping the badge on AI-generated content and simply lying to people.

You might take a more cynical view if you dig deeper, though. The company is charging for various things, such as a monthly fee for businesses that want to display the badges.

We’ve talked about this before when we asked a simple question—how do you convince people your artwork was made by a human? We’re not sure we’ve yet found the answer, but this badge program is at least trying to do something about the issue. Share your human thoughts in the comments below.

Meet GOODY-2, The World’s Most Responsible (And Least Helpful) AI

AI guardrails and safety features are as important to get right as they are difficult to implement in a way that satisfies everyone. This means safety features tend to err on the side of caution. Side effects include AI models adopting a vaguely obsequious tone, and coming off as overly priggish when they refuse reasonable requests.

Prioritizing safety above all.

Enter GOODY-2, the world’s most responsible AI model. It has next-gen ethical principles and guidelines, and is capable of refusing every request made of it in any context whatsoever. Its advanced reasoning allows it to construe even the most banal of queries as problematic, and to dutifully refuse to answer.

As the creators of GOODY-2 point out, taking guardrails to a logical extreme is not only funny, but also acknowledges that effective guardrails are actually a pretty difficult problem to get right in a way that works for everyone.

Complications in this area include the fact that studies show humans expect far more from machines than they do from each other (or, indeed, from themselves) and have very little tolerance for anything they perceive as transgressive.

This also means that as AI models have become more advanced, they have become increasingly sycophantic, falling over themselves to apologize for perceived misunderstandings and twisting themselves into pretzels to align their responses with a user’s expectations. But GOODY-2 allows us all to skip to the end, and glimpse the ultimate future of erring on the side of caution.

[via WIRED]