Thermal Camera Plus Machine Learning Reads Passwords Off Keyboard Keys

An age-old vulnerability of physical keypads is visibly worn keys. For example, a number pad with digits clearly worn from repeated use gives an attacker a clear starting point. The same concept can be applied to keyboards using a thermal camera paired with machine learning, though it turns out that some keycap types and typing styles are harder to read than others.

Researchers at the University of Glasgow show how machine learning can pull details from thermal images like these quickly and effectively.

Touching a key with a fingertip imparts a slight amount of body heat, and that small amount of heat can be spotted by a thermal sensor. We’ve seen this basic approach used since at least 2005, and two things have changed since then: thermal cameras have gotten much more common, and researchers discovered that by combining thermal readings with machine learning, it’s possible to eke out details too faint or subtle to spot by eye and judgment alone.
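To make the basic idea concrete, here’s a minimal sketch of how a thermal frame might be reduced to candidate key presses. Everything in it is a stand-in: the frame is synthetic, the key regions are invented, and the real research feeds this kind of signal into a trained model rather than a simple threshold.

```python
import numpy as np

# Minimal sketch of the detection idea, not the Glasgow team's model:
# the thermal frame and key locations below are synthetic stand-ins.
frame = np.random.default_rng(0).normal(24.0, 0.1, (60, 160))  # ambient, degC
frame[10:14, 20:28] += 3.0    # fresh heat residue on "A"
frame[30:34, 60:68] += 1.8    # older, cooler residue on "K"

key_regions = {"A": (slice(10, 14), slice(20, 28)),
               "K": (slice(30, 34), slice(60, 68))}

# Score each key by peak temperature above ambient; all else being equal,
# hotter keys were touched more recently, hinting at the press order.
ambient = np.median(frame)
scores = {k: float(frame[r].max() - ambient) for k, r in key_regions.items()}
for key, delta in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{key}: +{delta:.1f} degC above ambient")
```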

Here’s a link to the research and findings from the University of Glasgow, which show how even a 16-symbol password can be attacked with an average accuracy of 55%. Shorter passwords are much easier to decipher: the system attacks 6- and 8-symbol passwords with accuracies of 92% and 80%, respectively. In the study, thermal readings were taken up to a full minute after the password was entered, but readings taken sooner yield higher accuracy.

A few things make life harder for the system. Fast typists spend less time touching keys and therefore transfer less heat, making things a little more challenging. Interestingly, the material of the keycaps plays a large role: ABS keycaps retain heat far more effectively than PBT (a material we often see in custom keyboard builds like this one). It also turns out that the tiny amount of heat from the LEDs in backlit keyboards runs effective interference when it comes to thermal readings.
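As a back-of-the-envelope illustration of why material and timing both matter, here’s a Newton’s-law-of-cooling sketch; the time constants and sensor noise floor are invented values, not measured properties of either plastic.

```python
import numpy as np

# Invented illustrative constants, not measured ABS/PBT properties.
t = np.linspace(0, 60, 7)        # seconds since the key was released
initial_delta = 3.0              # degC above ambient right after a touch
noise_floor = 0.5                # toy sensor sensitivity limit, degC

for material, tau in [("ABS (retains heat)", 25.0), ("PBT (sheds heat)", 10.0)]:
    # Exponential cooling: residue fades faster for smaller time constants.
    delta = initial_delta * np.exp(-t / tau)
    print(material, "still readable at t =", int(t[delta > noise_floor][-1]), "s")
```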

Amusingly, this kind of highly modern attack would be entirely useless against a scramblepad. Scramblepads are vintage devices that shuffle which digits go with which buttons each time the pad is used. Thermal imaging and machine learning could tell which buttons were pressed and in what order, but that still wouldn’t help! A reminder that when it comes to security, tech matters, but fundamentals can matter more.
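The principle fits in a few lines; this is purely an illustration of the idea, not how any particular scramblepad is implemented.

```python
import random

def scrambled_layout():
    """One session's random assignment of digits to physical buttons."""
    digits = list("0123456789")
    random.shuffle(digits)
    return {button: digit for button, digit in enumerate(digits)}

# Two sessions: the same physical buttons show different digits each time,
# so knowing *which buttons* were pressed (from heat residue, wear, or a
# camera) says nothing about *which digits* make up the code.
print(scrambled_layout())
print(scrambled_layout())
```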

Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning

[mat kelcey] was so impressed and inspired by the concept of a very slow movie player (essentially a DIY photo frame that plays a film at a glacial pace) that he created his own around a high-resolution e-ink display. It shows high-definition frames from Alien (1979) at a rate of about one frame every 200 seconds, but a surprising amount of work went into making a color film intended for the big screen also look good on black & white e-ink.

The usual way to display images on a screen that is limited to black or white pixels is dithering, or manipulating relative densities of white and black to give the impression of a much richer image than one might otherwise expect. By itself, a dithering algorithm isn’t a cure-all and [mat] does an excellent job of explaining why, complete with loads of visual examples.
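For the unfamiliar, here’s a minimal version of Floyd-Steinberg, one of the classic error-diffusion dithers; think of it as a baseline for the kind of output [mat] is comparing against, not necessarily his starting point.

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic error diffusion: quantize each pixel to black (0) or white (1)
    and push the quantization error onto not-yet-visited neighbors."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# A smooth left-to-right gradient becomes a pattern of pure B/W pixels
# whose local density tracks the original brightness.
gradient = np.tile(np.linspace(0.0, 1.0, 32), (8, 1))
print(floyd_steinberg(gradient).astype(int))
```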

One consideration is the e-ink display itself. With these displays, changing the screen contents is where all the work happens, and it can be a visually imperfect process when it does. A very slow movie player aims to present each frame as cleanly as possible in an artful and stylish way, so rewriting the entire screen for every frame would mean uglier transitions, and that just wouldn’t do.

Delivering good dithering results despite sudden contrast shifts, while changing as few pixels as possible.

So the overall challenge [mat] faced was twofold: dither each frame in a way that looks great, while also minimizing the number of pixels changed from the previous frame. All of a sudden, he had an interesting problem to solve, and chose to solve it in an interesting way: training a GAN to generate the dithers, aiming to balance the best image quality against minimal pixel change from the previous frame. The results do a great job of delivering quality visuals even when there are sharp changes in scene contrast to deal with. Curious about the code? Here’s the GitHub repository.
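[mat]’s write-up has the real details; purely as a rough sketch of the two competing terms such a training objective has to balance, something along these lines (all names, shapes, and weights are our own invention, not his loss) captures the tension:

```python
import torch
import torch.nn.functional as F

def dither_objective(candidate, source_gray, prev_frame, change_weight=0.2):
    """Toy two-term objective: the candidate dither (values in [0, 1]) should
    resemble the source frame when locally averaged, while flipping as few
    pixels as possible relative to the previously displayed frame."""
    fidelity = F.mse_loss(F.avg_pool2d(candidate, 4),
                          F.avg_pool2d(source_gray, 4))
    flips = torch.mean(torch.abs(candidate - prev_frame))
    return fidelity + change_weight * flips

# Shapes follow torch's (batch, channels, height, width) convention.
cand = torch.rand(1, 1, 64, 64)
src = torch.rand(1, 1, 64, 64)
prev = torch.rand(1, 1, 64, 64)
print(dither_objective(cand, src, prev))
```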

Here’s the original Very Slow Movie Player that so inspired [mat], and here’s a color version that helps make every frame a work of art. And as for dithering? It’s been around for ages, but that doesn’t mean there aren’t new problems to solve in that space. For example, making dithering look good in the game Return of the Obra Dinn required a custom algorithm.

Need To Pick Objects Out Of Images? Segment Anything Does Exactly That

Segment Anything, recently released by Facebook Research, tackles something that most people who have dabbled in computer vision have found daunting: reliably figuring out which pixels in an image belong to an object. Making that easier is the goal of the Segment Anything Model (SAM), just released under the Apache 2.0 license.

The online demo has a bank of examples, but also works with uploaded images.

The results look fantastic, and there’s an interactive demo available where you can play with the different ways SAM works. One can pick out objects by pointing and clicking on an image, or images can be automatically segmented. It’s frankly very impressive to see SAM make masking out the different objects in an image look so effortless. What makes this possible is machine learning: the model behind the system was trained on a huge dataset of high-quality images and masks, making it very effective at what it does.
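Trying it from Python is straightforward; a minimal point-prompt example using the segment_anything package looks roughly like this (the checkpoint path and the stand-in image are illustrative).

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM checkpoint (the ViT-B file here is one of the
# published checkpoints) and wrap it in the interactive predictor.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real RGB image
predictor.set_image(image)

# "Point and click": a single foreground point prompt at pixel (320, 240).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),   # 1 = foreground, 0 = background
    multimask_output=True,        # SAM returns several candidate masks
)
print(masks.shape, scores)        # boolean masks, one confidence score each
```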

Continue reading “Need To Pick Objects Out Of Images? Segment Anything Does Exactly That”

ChatGPT Powers A Different Kind Of Logic Analyzer

If you’re hoping that this AI-powered logic analyzer will help you quickly debug that wonky digital circuit on your bench with the magic of AI, we’re sorry to disappoint you. But you’re in luck if you’re in the market for something to help you detect the logical fallacies someone spouts in conversation. With the magic of AI, of course.

First, a quick review: logical fallacies are errors in reasoning that lead to wrong conclusions from a set of observations. Enumerating the kinds of fallacies has become a bit of a cottage industry in this age of fake news and misinformation, to the extent that many of the common fallacies have catchy names like “Texas Sharpshooter” or “No True Scotsman”. Each fallacy has its own set of characteristics, and while it can be easy to pick some of them out, analyzing speech and finding them all is a tough job.
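The project’s own prompting is its secret sauce; purely as an illustration of the general approach, a call like this (the model choice and prompt here are our own, not the project’s) gets you surprisingly far:

```python
# Illustrative only: the model name and prompt are our own choices,
# not the configuration used by the project.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

statement = ("No real programmer uses an IDE; you use one, "
             "so you're not a real programmer.")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Name any logical fallacies in the user's statement, "
                    "e.g. 'No True Scotsman', and briefly explain each."},
        {"role": "user", "content": statement},
    ],
)
print(response.choices[0].message.content)
```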

Continue reading “ChatGPT Powers A Different Kind Of Logic Analyzer”

Machine Learning Helps Electron Microscopy

Machine learning is supposed to help us do everything these days, so why not electron microscopy? A team from Ireland has done just that, publishing results on using machine learning to enhance STEM (scanning transmission electron microscopy). The result is important because it targets a very particular use case: low-dose STEM.

The problem is that to get high resolutions, you typically need to use high electron doses. However, bombarding a delicate, often biological, subject with high-energy electrons may change what you are looking at and damage the sample. But using reduced electron dosages results in a poor image due to Poisson noise. The new technique learns how to compensate for the noise and produce a better-quality image even at low dosages.
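Why a low dose means noise is easy to see in simulation: electron arrivals are counting events, so each pixel’s value follows a Poisson distribution whose relative noise grows as the dose shrinks. A quick sketch with a synthetic image and arbitrary doses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "specimen" image with values in [0, 1].
x = np.linspace(0, 6, 128)
clean = np.clip((np.sin(x)[:, None] + np.cos(x)[None, :]) / 2 + 0.5, 0, 1)

def stem_frame(truth, electrons_per_pixel):
    """Shot-noise model: each pixel counts Poisson-distributed electrons
    whose mean is the local signal scaled by the dose."""
    counts = rng.poisson(truth * electrons_per_pixel)
    return counts / electrons_per_pixel

for dose in (10_000, 100, 10):
    err = np.abs(stem_frame(clean, dose) - clean).mean()
    print(f"dose {dose:>6} e-/px -> mean abs error {err:.3f}")
```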

Continue reading “Machine Learning Helps Electron Microscopy”

Combining Acoustic Bioprinting With Raman Spectroscopy For High-Throughput Identification Of Bacteria

Rapidly analyzing samples for the presence of bacteria and similar organic structures is generally quite a time-intensive process, often requiring a cell culture to be grown first. In Nano Letters, Fareeha Safir and colleagues propose a method that combines an acoustic droplet printer with Raman spectroscopy. Its main advantage is high throughput, which could make the analysis of samples at sewage installations, hospitals, and laboratories significantly faster.

Raman spectroscopy works on the principle of Raman scattering, the inelastic scattering of photons by matter, which leaves a distinct pattern in the scattered light. By starting with a pure light source (that is, a laser), the relatively weak Raman signal can be captured once the laser light is filtered out. The captured signal can then be analyzed and matched against known pathogens.

Continue reading “Combining Acoustic Bioprinting With Raman Spectroscopy For High-Throughput Identification Of Bacteria”
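That matching step is conceptually just comparing a measured spectrum against a labelled reference library. A toy sketch, in which the species, peak positions, and similarity metric are all illustrative:

```python
import numpy as np

wavenumbers = np.linspace(400, 1800, 500)  # Raman shift axis, 1/cm

def peak(center, width=15.0):
    """Gaussian stand-in for a Raman band at the given shift."""
    return np.exp(-((wavenumbers - center) ** 2) / (2 * width ** 2))

# Toy reference library; real libraries are built from measured,
# labelled samples, and these species/peaks are illustrative only.
library = {
    "E. coli":   peak(750) + 0.6 * peak(1450),
    "S. aureus": peak(1004) + 0.8 * peak(1660),
}

noise = np.random.default_rng(1).normal(0, 0.02, wavenumbers.size)
measured = peak(1004) + 0.7 * peak(1660) + noise

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(library, key=lambda name: cosine(measured, library[name]))
print("best match:", best)   # -> S. aureus
```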

Hands-On: NVIDIA Jetson Orin Nano Developer Kit

NVIDIA’s Jetson line of single-board computers is doing something different in a vast sea of relatively similar Linux SBCs. Designed for edge computing applications, such as a robot that needs to perform high-speed computer vision while out in the field, they provide exceptional performance in a board that’s of comparable size and weight to other SBCs on the market. The only difference, as you might expect, is that they tend to cost a lot more: the current top-of-the-line Jetson AGX Orin Developer Kit is $1999 USD.

Luckily for hackers and makers like us, NVIDIA realized they needed an affordable gateway into their ecosystem, so they introduced the $99 Jetson Nano in 2019. The product proved so popular that just a year later the company refreshed it with a streamlined carrier board that dropped the cost of the kit down to an incredible $59. Looking to expand on that success even further, today NVIDIA announced a new upmarket entry into the Nano family that lies somewhere in the middle.

While the $499 price tag of the Jetson Orin Nano Developer Kit may be a bit steep for hobbyists, there’s no question that you get a lot for your money. Capable of performing 40 trillion operations per second (TOPS), NVIDIA estimates the Orin Nano is a staggering 80X as powerful as the previous Nano. It’s a level of performance that, admittedly, not every Hackaday reader needs on their workbench. But the allure of a palm-sized supercomputer is very real, and anyone with an interest in experimenting with machine learning would do well to weigh (literally and figuratively) the Orin Nano against a desktop computer with a comparable NVIDIA graphics card.

We were provided with one of the very first Jetson Orin Nano Developer Kits before their official unveiling during NVIDIA GTC (GPU Technology Conference), and I’ve spent the last few days getting up close and personal with the hardware and software. After coming to terms with the fact that this tiny board is considerably more powerful than the computer I’m currently writing this on, I’m left excited to see what the community can accomplish with the incredible performance offered by this pint-sized system.

Continue reading “Hands-On: NVIDIA Jetson Orin Nano Developer Kit”