Self-Driving Library For Python

Fully autonomous vehicles seem to perennially be just a few years away, sort of like the automotive equivalent of fusion power. But just because robotic vehicles haven’t made much progress on our roadways doesn’t mean we can’t play with the technology at the hobbyist level. You can embark on your own experimentation right now with this open source self-driving Python library.

Granted, this is a library built for much smaller vehicles, but it’s still quite full-featured. Known as Donkey Car, it’s mostly intended for what would otherwise be remote-controlled cars or robotics platforms. The library is built to be as minimalist as possible, with modularity as a design principle, and includes the ability to self-drive with computer vision using machine-learning algorithms. It can also log sensor data and interface with various controllers, whether physical devices or something like a web browser.
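Under the hood, Donkey Car models the car as a “vehicle” made up of parts wired together by named data channels. Here’s a rough sketch of that loop with a deliberately trivial custom part; exact module paths and signatures vary by version, so treat it as illustrative rather than canonical:

```python
# Minimal sketch of Donkey Car's parts-based vehicle loop (version-dependent details hedged).
import donkeycar as dk


class ConstantThrottle:
    """A trivial custom 'part': any object with a run() method can join the loop."""

    def run(self):
        return 0.2  # fixed throttle value, purely illustrative


V = dk.vehicle.Vehicle()

# Each part declares which named channels it reads and writes; the loop wires them together.
V.add(ConstantThrottle(), outputs=['throttle'])

# Run the event loop at 20 Hz until interrupted.
V.start(rate_hz=20)
```

A real build would add camera, autopilot, and actuator parts the same way, each reading and writing its own channels.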

Building a complete platform costs around $250 in parts, but most of what’s needed for a Donkey Car-compatible build is easily sourced. It won’t be long before your own RC vehicle has more “full self-driving” capability than a Tesla, and potentially less risk of a major security vulnerability as well.

Hackaday Prize 2023: Finger Tracking Via Muscle Sensors

Whether you want to build a computer interface device, or control a prosthetic hand, having some idea of a user’s finger movements can be useful. The OpenMuscle finger tracking sensor can offer the data you need, and it’s a device you can readily build in your own workshop.

The device consists of a wrist cuff that mounts twelve pressure sensors, arranged radially around the forearm. The pressure sensors are a custom design, using magnets, Hall effect sensors, and springs to detect the motion of the muscles in the vicinity of the wrist.

We first looked at this project last year, and since then, it’s advanced in leaps and bounds. The basic data from the pressure sensors now feeds into a trained machine learning model, which then predicts the user’s actual finger movements. The long-term goal is to create a device that can control prosthetic hands based on muscle contractions in the forearm. Ideally, this would be super-intuitive to use, requiring a minimum of practice and training for the end user.
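To get a feel for the learning problem without the project’s actual firmware or model, here’s a toy sketch: twelve cuff readings in, a handful of finger-flexion estimates out. The sensor count matches the cuff, but the model, the random stand-in data, and the five-finger targets are all placeholders, not OpenMuscle’s own pipeline.

```python
# Illustrative only: the general shape of the OpenMuscle regression problem.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((1000, 12))   # 12 cuff pressure readings per sample (stand-in data)
y = rng.random((1000, 5))    # 5 finger-flexion targets per sample (stand-in data)

# A small multi-output regressor mapping sensor readings to finger positions.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500)
model.fit(X, y)

fingers = model.predict(X[:1])   # predicted flexion values for one new cuff reading
print(fingers)
```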

It’s great to see machine learning combined with innovative mechanical design to serve a real need. We can’t wait to see where the OpenMuscle project goes next.

Continue reading “Hackaday Prize 2023: Finger Tracking Via Muscle Sensors”


Hackaday Links: May 7, 2023

More fallout for SpaceX this week after their Starship launch attempt, but of the legal kind rather than concrete and rebar. A handful of environmental groups filed the suit, alleging that the launch generated “intense heat, noise, and light that adversely affects surrounding habitat areas and communities, which included designated critical habitat for federally protected species as well as National Wildlife Refuge and State Park lands,” in addition to “scatter[ing] debris and ash over a large area.”

Specifics of this energetic launch aside, we always wondered about the choice of Boca Chica for a launch facility. Yes, it has all the obvious advantages, like a large body of water directly to the east and a relatively low latitude. But the whole area is a wildlife sanctuary, and from what we understand there are still people living pretty close to the launch facility. Then again, you could pretty much say the same thing about the Cape Canaveral and Kennedy Space Center complex, which probably couldn’t be built today. Amazing how a Space Race will grease the wheels of progress.

Continue reading “Hackaday Links: May 7, 2023”

Thermal Camera Plus Machine Learning Reads Passwords Off Keyboard Keys

An age-old vulnerability of physical keypads is visibly worn keys. For example, a number pad with digits clearly worn from repeated use provides an attacker with a clear starting point. The same concept can be applied to keyboards using a thermal camera and a bit of machine learning, though it turns out that some types of keys and typing styles are harder to read than others.

Researchers at the University of Glasgow show how machine learning can quickly and effectively pull details like these from thermal images of keyboards.

Touching a key with a fingertip imparts a slight amount of body heat, and that small amount of heat can be spotted by a thermal sensor. We’ve seen this basic approach used since at least 2005, and two things have changed since then: thermal cameras have gotten much more common, and researchers have discovered that by combining thermal readings with machine learning, it’s possible to eke out details too slight or subtle to spot by human eye and judgement alone.
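The underlying intuition is easy to demonstrate with a toy example. This is a naive heuristic rather than the Glasgow team’s pipeline, and the temperatures here are made up: keys still measurably warmer than ambient were probably pressed, and the hottest ones were pressed most recently.

```python
# Toy illustration of the residual-heat heuristic (not the actual research code).
key_temps_c = {            # hypothetical per-keycap readings from one thermal frame
    'p': 27.1, 'a': 26.4, 's': 26.9, 'w': 26.2, 'd': 25.8,
}
ambient_c = 24.0

# Keep keys noticeably above ambient, ordered coolest (oldest press) to hottest (most recent).
pressed = [key for key, temp in sorted(key_temps_c.items(), key=lambda kv: kv[1])
           if temp > ambient_c + 0.5]

print(''.join(pressed))    # naive guess at the typed sequence
```

The real system replaces this crude ranking with a trained model, which is what pushes the accuracy to the figures below.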

Here’s a link to the research and findings from the University of Glasgow, which shows how even a 16-symbol password can be attacked with an average accuracy of 55%. Shorter passwords are much easier to decipher: the system cracks 6- and 8-symbol passwords with accuracies of 92% and 80%, respectively. In the study, thermal readings were taken up to a full minute after the password was entered, but earlier readings yield higher accuracy.

A few factors make the attack harder. Fast typists spend less time touching the keys and therefore transfer less heat when they do, which makes things a little more challenging. Interestingly, the keycap material plays a large role: ABS keycaps retain heat far more effectively than PBT (a material we often see in custom keyboard builds like this one). It also turns out that the tiny amount of heat from the LEDs in backlit keyboards runs effective interference when it comes to thermal readings.

Amusingly, this kind of highly modern attack would be entirely useless against a scramblepad. Scramblepads are vintage devices that mix up which numbers go with which buttons each time the pad is used. Thermal imaging and machine learning could still tell which buttons were pressed and in what order, but that wouldn’t help! It’s a reminder that when it comes to security, tech matters, but fundamentals can matter more.

Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning

[mat kelcey] was so impressed and inspired by the concept of a very slow movie player (essentially a DIY photo frame that plays a film at a glacial pace) that he created his own with a high-resolution e-ink display. It shows high-definition frames from Alien (1979) at a rate of about one frame every 200 seconds, but a surprising amount of work went into getting a color film intended to look good on a movie screen to also look good when displayed on black & white e-ink.

The usual way to display images on a screen that is limited to black or white pixels is dithering, or manipulating relative densities of white and black to give the impression of a much richer image than one might otherwise expect. By itself, a dithering algorithm isn’t a cure-all and [mat] does an excellent job of explaining why, complete with loads of visual examples.
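For the curious, here’s what one classic error-diffusion dither, Floyd-Steinberg, looks like in a few lines of Python. This is the textbook algorithm, not [mat]’s code:

```python
# Classic Floyd-Steinberg error-diffusion dithering for a 2-D grayscale array (0..255).
import numpy as np


def floyd_steinberg(gray):
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old > 127 else 0.0   # quantize to pure black or white
            out[y, x] = new
            err = old - new
            # Push the quantization error onto not-yet-visited neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)
```

Run naively frame after frame, an algorithm like this can scatter pixel changes everywhere, which is exactly the problem discussed next.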

One consideration is the e-ink display itself. With these displays, changing the screen contents is where all the work happens, and it can be a visually imperfect process when it does. A very slow movie player aims to present each frame as cleanly as possible in an artful and stylish way, so rewriting the entire screen for every frame would mean uglier transitions, and that just wouldn’t do.

Delivering good dithering results despite sudden contrast shifts, with the fewest possible changed pixels.

So the challenge [mat] faced was twofold: dither each frame so it looks great, while also minimizing the number of pixels changed from the previous frame. All of a sudden he had an interesting problem on his hands, and he chose an interesting way to solve it: training a GAN to generate the dithers, aiming to balance image quality against the number of pixels changed from the previous frame. The results do a great job of delivering quality visuals even when there are sharp shifts in scene contrast to deal with. Curious about the code? Here’s the GitHub repository.
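The actual training setup lives in the repository, but the tradeoff itself can be sketched as a simple cost function. Everything here (the names, the weight, the crude fidelity term) is illustrative only; a real quality term would be perceptual, comparing low-pass-filtered versions rather than raw pixels.

```python
# Hedged sketch of the kind of objective at play (not [mat]'s training code).
import numpy as np


def dither_cost(candidate, target_gray, previous, change_weight=0.3):
    """candidate/previous: binary (0/1) arrays; target_gray: floats in 0..1."""
    fidelity = np.mean((candidate - target_gray) ** 2)   # crude stand-in for image quality
    churn = np.mean(candidate != previous)               # fraction of pixels that flip on screen
    return fidelity + change_weight * churn
```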

Here’s the original Very Slow Movie Player that so inspired [mat], and here’s a color version that helps make every frame a work of art. And as for dithering? It’s been around for ages, but that doesn’t mean there aren’t new problems to solve in that space. For example, making dithering look good in the game Return of the Obra Dinn required a custom algorithm.

Need To Pick Objects Out Of Images? Segment Anything Does Exactly That

Segment Anything, recently released by Facebook Research, does something that most people who have dabbled in computer vision have found daunting: reliably figuring out which pixels in an image belong to an object. Making that easier is the goal of the Segment Anything Model (SAM), just released under the Apache 2.0 license.

The online demo has a bank of examples, but also works with uploaded images.

The results look fantastic, and there’s an interactive demo available where you can play with the different ways SAM works. One can pick out objects by pointing and clicking on an image, or images can be automatically segmented. It’s frankly very impressive to see SAM make masking out the different objects in an image look so effortless. What makes this possible is machine learning, and part of that is the fact that the model behind the system has been trained on a huge dataset of high-quality images and masks, making it very effective at what it does.
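Using the model from Python is refreshingly compact. Roughly following the project’s README (the checkpoint path, image, and click point below are placeholders), point-prompted segmentation looks something like this:

```python
# Point-prompted segmentation with SAM, roughly per the segment-anything README.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Build the model from a downloaded checkpoint (placeholder filename).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for an RGB image (H, W, 3)
predictor.set_image(image)

masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),   # click location (x, y)
    point_labels=np.array([1]),            # 1 = foreground point
)
print(masks.shape, scores)                 # one boolean mask per returned score
```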

Continue reading “Need To Pick Objects Out Of Images? Segment Anything Does Exactly That”

ChatGPT Powers A Different Kind Of Logic Analyzer

If you’re hoping that this AI-powered logic analyzer will help you quickly debug that wonky digital circuit on your bench with the magic of AI, we’re sorry to disappoint you. But you’re in luck if you’re in the market for something to help you detect the logical fallacies someone spouts in conversation. With the magic of AI, of course.

First, a quick review: logical fallacies are errors in reasoning that lead to the wrong conclusions from a set of observations. Enumerating the kinds of fallacies has become a bit of a cottage industry in this age of fake news and misinformation, to the extent that many of the common ones have catchy names like “Texas Sharpshooter” or “No True Scotsman”. Each fallacy has its own set of characteristics, and while it can be easy to pick some of them out, analyzing speech and finding them all is a tough job.
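That’s where the language model comes in. As a rough sketch of the general idea rather than the project’s actual prompt or code, asking a chat model to name the fallacy in a statement might look like this with the OpenAI Python client (the model name and prompt are placeholders):

```python
# Hedged sketch of fallacy detection via a chat model (not the project's implementation).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

statement = "Everyone I know uses this gadget, so it must be the best one."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Identify any logical fallacy in the user's statement and name it briefly."},
        {"role": "user", "content": statement},
    ],
)
print(response.choices[0].message.content)
```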

Continue reading “ChatGPT Powers A Different Kind Of Logic Analyzer”