Your Noisy Fingerprints Vulnerable To New Side-Channel Attack

Here’s a warning we never thought we’d have to give: when you’re in an audio or video call on your phone, avoid the temptation to doomscroll or use an app that requires a lot of swiping. Doing so just might save you from getting your identity stolen through the most improbable vector imaginable — by listening to the sound your fingerprints make on the phone’s screen (PDF).

Now, we love a good side-channel attack as much as anyone, and we've covered a lot of them over the years. But things like exfiltrating data by blinking hard drive lights or turning GPUs into radio transmitters always seemed a little far-fetched to be the basis of a field-practical exploit. PrintListener, as [Man Zhou] et al. dub their experimental system, seems much more feasible, even if it requires a ton of complex math and some AI help. At the heart of the attack are the nearly imperceptible sounds caused by friction between a user's fingerprints and the glass screen of the phone. These sounds are recorded along with whatever else is going on at the time, such as a video conference or an online gaming session. The recordings are preprocessed to remove background noise and then subjected to spectral analysis, which is sensitive enough to pick out the whorls, loops, and arches of the unsuspecting user's fingerprints.
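
The paper's actual signal chain is considerably more involved, but the flavor of the spectral analysis step can be sketched in a few lines of Python. Everything here, the filename, filter cutoffs, and FFT sizes, is a placeholder we made up for illustration, not a parameter from the PrintListener paper.

```python
# Illustrative sketch only: band-pass filter a swipe recording and compute
# its spectrogram. Filename and filter cutoffs are arbitrary placeholders,
# not values from the PrintListener paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, spectrogram

fs, audio = wavfile.read("swipe_recording.wav")   # hypothetical call audio
audio = audio.astype(np.float64)
if audio.ndim > 1:                                # mix down to mono
    audio = audio.mean(axis=1)

# Friction sounds are faint and broadband; strip out low-frequency room noise.
high = min(8000, 0.45 * fs)
sos = butter(4, [1000, high], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, audio)

# Short-time spectral analysis: ridges dragging across the glass show up as
# subtle time-frequency structure in the spectrogram.
freqs, times, sxx = spectrogram(filtered, fs=fs, nperseg=1024, noverlap=768)
print(f"{len(times)} time slices x {len(freqs)} frequency bins")
```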

Once fingerprint patterns have been extracted, they're used to synthesize a set of five similar fingerprints with MasterPrint, a generative adversarial network (GAN). MasterPrint can generate phone-unlocking fingerprints all on its own, but seeding the process with patterns from a specific user increases the odds of success. The researchers claim they can defeat Automatic Fingerprint Identification System (AFIS) readers between 9% and 30% of the time using PrintListener — not fabulous performance, but still pretty scary given how new this is.

A Badge For AI-Free Content – 100% Human!

These days, just about anyone with a pulse can fall on a keyboard and make an AI image generator spurt out some kind of vaguely visual content. A lot of it is crap. Some of it’s confusing. But most of all, creators hate it when their hand-crafted works are compared with these digital extrusions from mathematical slop. Enter the “not by AI” badge.

Screenshot from https://notbyai.fyi/business

Basically, it’s exactly what it sounds like. A sleek, modern badge that you slap on your artwork to tell people that you did this, not an AI. There are pre-baked versions for writers (“written by human”), visual artists (“painted by human”), and musicians (“produced by human”). The idea is that these badges would help people identify human-generated content and steer away from AI content if they’re trying to avoid it.

It's not just intended for individual artworks, either. Websites where "at least 90%" of the content is created by humans are invited to display the badge, as are apps. That threshold reveals an immediate flaw: a reader who stumbles onto the AI-generated 10% of a badge-wearing site could easily be misled. There's also nothing stopping people from slapping the badge on AI-generated content and simply lying.

You might take a more cynical view if you dig deeper, though. The company is charging for various things, such as a monthly fee for businesses that want to display the badges.

We’ve talked about this before when we asked a simple question—how do you convince people your artwork was made by a human? We’re not sure we’ve yet found the answer, but this badge program is at least trying to do something about the issue. Share your human thoughts in the comments below.

Meet GOODY-2, The World’s Most Responsible (And Least Helpful) AI

AI guardrails and safety features are as important to get right as they are difficult to implement in a way that satisfies everyone. This means safety features tend to err on the side of caution. Side effects include AI models adopting a vaguely obsequious tone, and coming off as overly priggish when they refuse reasonable requests.

Prioritizing safety above all.

Enter GOODY-2, the world's most responsible AI model. Built on next-gen ethical principles and guidelines, it is capable of refusing every request made of it in any context whatsoever. Its advanced reasoning allows it to construe even the most banal of queries as problematic and dutifully refuse to answer.

As the creators of GOODY-2 point out, taking guardrails to a logical extreme is not only funny, but also acknowledges that effective guardrails are actually a pretty difficult problem to get right in a way that works for everyone.

Complications in this area include the fact that studies show humans expect far more from machines than they do from each other (or, indeed, from themselves) and have very little tolerance for anything they perceive as transgressive.

This also means that as AI models have become more advanced, they have also grown increasingly sycophantic, falling over themselves to apologize for perceived misunderstandings and twisting themselves into pretzels to align their responses with a user's expectations. But GOODY-2 lets us all skip to the end and glimpse the ultimate future of erring on the side of caution.

[via WIRED]

AI’s Existence Is All It Takes To Be Accused Of Being One

New technologies bring with them the threat of change. AI tools are one of the latest such developments. But as is often the case, when technological threats show up, they end up looking awfully human.

Recently, [E. M. Wolkovich] submitted a scientific paper for review that — to her surprise — was declared “obviously” the work of ChatGPT. No part of that was true. Like most people, [E. M. Wolkovich] finds writing a somewhat difficult process. Her paper represents a lot of time and effort. But despite zero evidence, this casual accusation of fraud in a scientific context was just sort of… accepted.

There are several reasons this is concerning. One is that, in principle, the scientific community wouldn’t dream of leveling an accusation of fraud like data manipulation without evidence. But a reviewer had no qualms about casually claiming [Wolkovich]’s writing wasn’t hers, effectively calling her a liar. Worse, at the editorial level, this baseless accusation was accepted and passed along with vague agreement instead of any sort of pushback.

Showing Your Work Isn’t Enough

Interestingly, [Wolkovich] writes everything in plain text using the LaTeX typesetting system, hosted on GitHub, complete with change commits. That means she could easily show her entire change history, from outline to finished manuscript, which should be enough to convince just about anyone that she isn’t a chatbot.

But pondering this raises a very good question: is [Wolkovich] having to prove she isn't a chatbot a desirable outcome of this situation? We don't think it is, nor is this an idle question. We've seen how even when an artist can present their full workflow to prove an AI didn't make their art, enough doubt is sown by the accusation to poison the proceedings (not to mention greatly demoralizing the creator in the process).

Better Standards Would Help

[Wolkovich] uses this opportunity to reflect on and share what this situation indicates about useful change. Now that AI tools exist, guidelines that acknowledge them should be created. Explicit standards about when and how AI tools can be used in the writing process, how those tools should be acknowledged if used, and a process to handle accusations of misuse would all be positive changes.

Because as it stands, it’s hard to see [Wolkovich]’s experience as anything other than an illustration of how a scientific community’s submission and review process was corrupted not by undeclared or thoughtless use of AI but by the simple fact that such tools exist. This seems like both a problem that will only get worse with time (right now, it is fairly easy to detect chatbots) and one that will not solve itself.

Wearable Robot Makes Mountain Climbing A Breeze For Seniors

You know, it’s just not fair. It seems that even if we stay active, age will eventually get the better of our muscles, robbing them of strength and our bodies of mobility. Canes and walkers do not provide additional strength, just support and reassurance in a treacherous landscape. What people could really benefit from are wearable robots that are able to compensate for a lack of muscle strength.

[Dr. Lee Jongwon] of the Korea Institute of Science and Technology has developed this very thing. MOONWALK-Omni is designed to "actively support leg strength in any direction" and make one feel like they are walking on the moon. To test the wearable robot, [Dr. Lee] invited senior citizens to climb Korea's Mount Yeongbong, which stands some 604 meters (1,980 feet) above sea level.

The robot weighs just 2 kg (about 4.5 lbs) and can be donned independently by the average adult in under ten seconds. Four high-powered but ultra-lightweight actuators on either side of the pelvis aid balance and boost leg strength by up to 30%, all designed to increase propulsion.

An AI system analyzes the wearer's gait in real time to provide up-to-the-second muscle support across many different environments. One wearer, a formerly active mountain climber, reported feeling 10 to 20 years younger upon reaching the top of Mount Yeongbong.
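
KIST hasn't published the internals of that gait model, but the basic idea of timing assistance to gait events can be illustrated with ordinary peak detection on accelerometer data. The sample rate, thresholds, and synthetic signal below are all our own assumptions, not anything from MOONWALK-Omni.

```python
# Toy illustration of gait-event detection: find heel strikes as peaks in
# vertical acceleration, then estimate cadence. Sample rate, thresholds, and
# the synthetic signal are assumptions, not MOONWALK-Omni internals.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                  # Hz, assumed IMU sample rate
t = np.arange(0, 10, 1 / fs)              # ten seconds of walking
rng = np.random.default_rng(0)

# Synthetic vertical acceleration: roughly 1.8 steps per second plus sensor noise.
accel_z = 9.81 + 2.0 * np.sin(2 * np.pi * 1.8 * t) + 0.2 * rng.standard_normal(t.size)

# Heel strikes show up as acceleration peaks; enforce a minimum spacing so
# one step isn't counted twice.
peaks, _ = find_peaks(accel_z, height=10.5, distance=int(0.4 * fs))
step_times = t[peaks]
cadence = 60 * (len(peaks) - 1) / (step_times[-1] - step_times[0])
print(f"Detected {len(peaks)} steps, cadence about {cadence:.0f} steps/min")
```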

It’s quite interesting to see mobility robots outside of the simplicity of the rehabilitation setting. We have to wonder about the battery life. Will everyone over 65 be wearing these someday? We can only hope they become so affordable. In the meantime, here’s a wearable robot that travels all over your person for better telemetry.


AI On The Hunt For Better Batteries

While certain dystopian visions of the future have humans powering the grid for AIs, Microsoft and Pacific Northwest National Laboratory (PNNL) set a machine learning system on the path to better solid-state batteries instead.

Solid-state batteries are the current darlings of battery research, promising a step change in packaging size and safety, among other advantages. While they have been working in the lab for some time now, we've yet to see the kind of large-scale commercialization that could shake up the consumer electronics and electric vehicle spaces.

With a starting set of 32 million potential inorganic materials, the machine learning algorithm was able to select the 150 most promising candidates for further development in the lab. This smaller subset was then fed through a high-performance computing (HPC) algorithm to winnow the list down to 23. Eliminating previously explored compounds, the scientists were able to develop a promising Li/Na-ion solid-state battery electrolyte that could reduce the needed Li in a battery by up to 70%.
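
The real pipeline ran on Azure with purpose-built models and serious HPC time, but the funnel shape of the workflow is easy to sketch: a cheap learned scorer looks at everything, and only the survivors earn an expensive physics-based calculation. The scoring functions, thresholds, and pool size below are invented for illustration, not anything PNNL or Microsoft actually ran.

```python
# Toy version of an ML-then-HPC screening funnel. The "ML score" and the
# "expensive simulation" are stand-in functions; only the shape of the
# workflow is the point.
import random

random.seed(0)
candidates = [f"material-{i:08d}" for i in range(1_000_000)]  # stand-in pool

def ml_score(name: str) -> float:
    """Cheap surrogate model: instant, approximate stability score."""
    return random.random()

def expensive_simulation(name: str) -> float:
    """Stand-in for a DFT/molecular-dynamics run that takes hours per entry."""
    return random.random()

# Stage 1: the fast model looks at everything and keeps only a tiny fraction.
shortlist = sorted(candidates, key=ml_score, reverse=True)[:150]

# Stage 2: only the shortlist gets the costly physics-based treatment.
finalists = [m for m in shortlist if expensive_simulation(m) > 0.85]
print(f"{len(candidates)} candidates -> {len(shortlist)} shortlisted -> {len(finalists)} finalists")
```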

For those of us who remember when energy materials research often consisted of digging through dusty old journal papers to find inorganic compounds of interest, this is a particularly exciting advancement. Here are a couple more places technology can lend a hand in the sciences: robots doing the work in the lab, or at the surgery table.


Creators Can Fight Back Against AI With Nightshade

If an artist makes use of a piece of intellectual property owned by a large tech company, they risk facing legal action. Yet many creators are unhappy that those same tech companies are using their IP on a grand scale in the form of training material for generative AI. Can they fight back?

Perhaps now they can, with Nightshade, from a team at the University of Chicago. It's a piece of software for Windows and macOS that poisons an image with imperceptible shading to make an AI classify it in an entirely different way than it appears.

The idea is that creators use it on their artwork and leave it for unsuspecting AIs to assimilate. Their example is that a picture of a cow might be poisoned so the AI sees it as a handbag; if enough creators use the software, the model is forever poisoned into returning a picture of a handbag when asked for one of a cow. If enough of these poisoned images are put online, the risk of an AI training on any online image becomes too high, and the hope is that AI companies would then be forced to take the IP of their source material seriously.
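
Nightshade's actual method is the team's own research and isn't reproduced here, but the family of techniques it belongs to, small pixel-level perturbations that push an image toward a wrong label, can be sketched with a classic targeted FGSM step against an off-the-shelf classifier. The model, target class index, and step size below are arbitrary choices for illustration, not anything from the Nightshade paper.

```python
# Illustrative targeted adversarial perturbation (one FGSM step), not
# Nightshade itself: nudge an image so a classifier leans toward a chosen
# wrong class while the change stays visually tiny.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# Stand-in "cow" photo (ImageNet preprocessing/normalization omitted for brevity).
image = torch.rand(1, 3, 224, 224, requires_grad=True)
target_class = torch.tensor([414])  # an arbitrary wrong ImageNet class index

# Gradient of the loss toward the *target* label, with respect to the pixels.
loss = F.cross_entropy(model(image), target_class)
loss.backward()

# Step down the loss toward the target class, clipped to stay a valid image.
epsilon = 0.01
poisoned = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    before = model(image).argmax(dim=1).item()
    after = model(poisoned).argmax(dim=1).item()
print(f"predicted class before: {before}, after: {after}")
```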

For this to work, enough creators have to take up and use the software, but we're guessing that the inevitable result will be an arms race between AIs and image poisoners. One thing is certain, though: as the AI hype has fueled such a growth in generative AI systems, creators, whether they be major publishers, your favourite human-generated tech news website, or someone drawing a cartoon strip in their bedroom, deserve not to have their work stolen in this way.