Undocumented X86 Instructions Allow Microcode Access

For an old CPU, finding all the valid instructions wasn’t very hard: you simply tried them all. Sure, really old CPUs might make it hard to tell what an instruction did, but once CPUs got illegal instruction traps, you could quickly scan the possible opcodes and see which ones didn’t throw an exception. Modern processors, though, are quite another thing. For example, you might run a random instruction that locks up the machine, or miss an instruction that would have been valid had the CPU been in a different mode. [Can Bölük] has a novel solution: by speculatively executing the target instruction and then monitoring the microcode sequencer, he can determine whether the CPU is decoding an instruction even if it refuses to execute it.
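The classic trap-scan is easy enough to demonstrate. Here’s a minimal Python sketch of the old approach (not [Can Bölük]’s speculative technique, which needs performance-counter access): on Linux, and assuming the OS allows a writable-and-executable anonymous mapping, fork a throwaway child, jump into the candidate bytes, and classify the opcode by how the child exits.

```python
import ctypes, mmap, os, signal

def probe(insn: bytes) -> str:
    """Execute candidate instruction bytes in a throwaway child process."""
    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(insn + b"\xC3")                    # candidate bytes, then RET
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    run = ctypes.CFUNCTYPE(None)(addr)

    pid = os.fork()
    if pid == 0:                                 # child: run it, report survival
        run()
        os._exit(0)
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status):
        return signal.Signals(os.WTERMSIG(status)).name   # e.g. SIGILL
    return "executed"

# 0x90 is NOP and should survive; 0x06 (PUSH ES) is invalid in 64-bit mode.
for insn in (b"\x90", b"\x06"):
    print(insn.hex(), probe(insn))
```

Note that a clean SIGILL only tells you the instruction faulted in the CPU’s current mode, and a hostile opcode can hang the machine before the trap ever fires; that’s exactly the gap the speculative approach closes.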

Some unknown instructions may have power for good or evil, such as the recently announced undocumented instructions that can apparently rewrite the microcode. We expect to see a post soon on how to reprogram your Intel processor to run as a 6502 natively.

Continue reading “Undocumented X86 Instructions Allow Microcode Access”

Machine Learning Current Sensor Snoops On MCUs

Anyone who’s ever tried their hand at reverse engineering a piece of hardware has wished there was some kind of magic wand you could tap on a PCB to understand what it’s doing and why. We imagine that’s what put security researcher [Mark C] on the path to developing CurrentSense-TinyML, a fascinating proof of concept that uses machine learning and sensitive current measurements to try and determine what a microcontroller is up to.

Energy consumption as the LED blinks.

The idea is simple enough: just place an INA219 current sensor between the power supply and the microcontroller under observation, and record the resulting measurements as it goes about its business. Of course, in this case, [Mark] knew what the target Arduino Nano was doing because he wrote the code that blinks its onboard LED.
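The capture side can be as simple as polling the sensor in a loop. As a hedged sketch (not [Mark]’s actual pipeline), here’s how you might log an INA219 from a Linux host like a Raspberry Pi using Adafruit’s CircuitPython driver; the file name and sample interval are made up, and I2C overhead will cap how fast you can realistically sample.

```python
import time
import board
import busio
from adafruit_ina219 import INA219

i2c = busio.I2C(board.SCL, board.SDA)
sensor = INA219(i2c)                     # defaults to I2C address 0x40

with open("current_trace.csv", "w") as log:
    log.write("t_ms,current_mA\n")
    t0 = time.monotonic()
    while True:
        ma = sensor.current              # shunt current in milliamps
        log.write(f"{(time.monotonic() - t0) * 1000:.1f},{ma:.3f}\n")
        time.sleep(0.002)                # ~500 Hz target, minus bus overhead
```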

This allowed him to create training data for TensorFlow, which was ultimately optimized into a model that could fit onto the Arduino Nano 33 BLE Sense standing in for our magic wand. The end result is that the model can accurately predict when the Nano has fired up its LED based on the amount of power it’s using. [Mark] has done a fantastic job of documenting the whole process, which also doubles as a great intro for putting machine learning to work on a microcontroller.
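The training side of a pipeline like this is surprisingly compact. Below is a hedged sketch rather than [Mark]’s actual code: a tiny Keras classifier learns from fixed-length windows of current samples, then gets converted into a TensorFlow Lite flatbuffer small enough to embed on the Nano 33 BLE Sense. The window size, layer sizes, and file names are all illustrative.

```python
import numpy as np
import tensorflow as tf

WINDOW = 64                              # current samples per training example

# Assumed pre-built dataset: x is (N, WINDOW) readings, y is 1 when the LED
# was lit during the window and 0 otherwise.
x = np.load("windows.npy").astype("float32")
y = np.load("labels.npy").astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(WINDOW,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=20, validation_split=0.2)

# Quantize and convert; the resulting .tflite file can be dumped to a C array
# (e.g. with xxd -i) and compiled into the Arduino sketch.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("model.tflite", "wb").write(converter.convert())
```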

Now we already know what you’re thinking: obviously the current would go up when the LED was lit, so the machine learning aspect is completely unnecessary. That may be true in this limited context, but remember, this is just a proof of concept to base further work on. In the future, with more training data, this technique could potentially be used to identify a whole range of nuanced activities. You’d be able to see when the MCU was sitting idle, when it was writing to flash, or when it was reading from sensors. In fact, with a good enough model, it might even be possible to identify the individual sensors that are being polled.

These are early days, but we’re very interested in seeing where this research goes. It might not be magic, but if analyzing the current draw of a coffee maker can tell you how much everyone in the office is drinking, then maybe it can help us figure out what all these unlabeled ICs are doing.

A Crash Course On Sniffing Bluetooth Low Energy

Bluetooth Low Energy (BLE) is everywhere these days. If you fire up a scanner on your phone and walk around the neighborhood, we’d be willing to bet you’d pick up dozens if not hundreds of devices. By extension, from fitness bands to light bulbs, it’s equally likely that you’re going to want to talk to some of these BLE gadgets at some point. But how?

Well, watching this three-part video series from [Stuart Patterson] would be a good start. He covers how to get a cheap nRF52840 BLE dongle configured for sniffing, pulling the packets out of the air with Wireshark, and perhaps most crucially, how to duplicate the commands coming from a device’s companion application using an ESP32.

Testing out the sniffed commands.

The first video in the series is focused on getting a Windows box set up for BLE sniffing, so readers who aren’t currently living under Microsoft’s boot heel may want to skip ahead to the second installment. That’s where things really start heating up, as [Stuart] demonstrates how you can intercept commands being sent to the target device.

It’s worth noting that little attempt is made to actually decode what the commands mean. In this particular application, it’s enough to simply replay the commands using the ESP32’s BLE hardware, which is explained in the third video. Obviously this technique might not work on more advanced devices, but it should still give you a solid base to work from.
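If you’d like to prototype the replay from a desktop before writing ESP32 firmware, the same trick fits in a few lines of Python with the bleak library. To be clear, this is our sketch and not [Stuart]’s code; the address, characteristic UUID, and payload are placeholders you’d lift from your own Wireshark capture.

```python
import asyncio
from bleak import BleakClient

LAMP_ADDRESS = "AA:BB:CC:DD:EE:FF"                  # from a BLE scan
CHAR_UUID = "0000fff3-0000-1000-8000-00805f9b34fb"  # from the capture
LAMP_ON = bytes.fromhex("7e0004f00001ff00ef")       # the sniffed write payload

async def main():
    async with BleakClient(LAMP_ADDRESS) as client:
        # Replay the captured GATT write verbatim; no decoding required.
        await client.write_gatt_char(CHAR_UUID, LAMP_ON, response=False)

asyncio.run(main())
```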

In the end, [Stuart] takes an LED lamp that could only be controlled with a smartphone application and turns it into something he can talk to on his own terms. Once the ESP32 can send commands to the lamp, it only takes a bit more code to spin up a web interface or REST API so you can control the device from your computer or other gadget on the network. While naturally the finer points will differ, this same overall workflow should allow you to get control of whatever BLE gizmo you’ve got your eye on.
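As for that REST API, “a bit more code” really is all it takes. Here’s a rough Flask wrapper around the same bleak-based replay, again with placeholder endpoint, address, and payloads rather than anything from [Stuart]’s build:

```python
import asyncio
from bleak import BleakClient
from flask import Flask

LAMP_ADDRESS = "AA:BB:CC:DD:EE:FF"
CHAR_UUID = "0000fff3-0000-1000-8000-00805f9b34fb"
PAYLOADS = {                                # sniffed on/off writes (placeholders)
    "on":  bytes.fromhex("7e0004f00001ff00ef"),
    "off": bytes.fromhex("7e0004000000ff00ef"),
}

app = Flask(__name__)

@app.route("/lamp/<state>")
def lamp(state):
    if state not in PAYLOADS:
        return {"error": "unknown state"}, 400

    async def send():
        async with BleakClient(LAMP_ADDRESS) as client:
            await client.write_gatt_char(CHAR_UUID, PAYLOADS[state],
                                         response=False)

    asyncio.run(send())                     # e.g. curl http://host:8080/lamp/on
    return {"lamp": state}

app.run(port=8080)
```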

Continue reading “A Crash Course On Sniffing Bluetooth Low Energy”

An Entire Game Inside Of A Font

Where’s the last place you’d expect to be able to play a game on your computer? The word processing program? Image editor? How about your text editor? That’s right — you can fight your Fontemons in any program that makes use of fonts, because mad genius [Michael Mulet] has created a game that exists entirely within a single OpenType font file.

[Michael] has harnessed the power of ligatures to create a choose-your-own-adventure-style turn-based game that pokes fun at both Pokemon and various typeface names. You start by choosing between Papyromaniac, Verdanta, and Proggito and face off against enemies like Helvetikhan and Scourier.

This works because there are many ways to draw glyphs on a screen. [Michael] chose Type 2 charstrings, the vector glyph format Adobe created for its PostScript-flavored CFF fonts. Each “pixel” is drawn with a short series of move operations: up, across to the right, and back down, with an implicit final operator closing the path back to the starting point. [Michael] was even able to create two shades of gray by drawing smaller versions of the pixels, building the image from a mix of white, gray, and black cells.
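You can experiment with the format yourself using fontTools, which speaks Type 2 charstrings natively. The fragment below is a sketch of the drawing primitive rather than [Michael]’s code: a T2CharStringPen traces one square “pixel” with the up/right/down moves described above, and the coordinates are arbitrary.

```python
from fontTools.pens.t2CharStringPen import T2CharStringPen

pen = T2CharStringPen(width=100, glyphSet=None)
pen.moveTo((0, 0))       # bottom-left corner of the pixel
pen.lineTo((0, 100))     # draw up
pen.lineTo((100, 100))   # across to the right
pen.lineTo((100, 0))     # back down; closePath implicitly returns to the start
pen.closePath()

charstring = pen.getCharString()
print(charstring.program)   # the raw Type 2 operators (rmoveto, rlineto, ...)
```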

If you just want to play the game, you can either download the .otf file or just try it out in the browser. You’re supposed to use ‘a’ and ‘b’ to make choices that advance the game, but we soon discovered that spamming other keys like ‘v’ and Enter will lead to strange places. If you play it straight, it takes about 20 minutes, but there are enough secrets built in to make it last five times as long. [Michael] was kind enough to create a tutorial for making font-based games, but if you just want to get going, the game engine is open source.

What other fun things can you do on that locked-down work computer? If it has Excel, you can use it to do animation or just kick back and watch a movie.

Amazon Echo Gets Open Source Brain Transplant

There’s little debate that Amazon’s Alexa ecosystem makes it easy to add voice control to your smart home, but not everyone is thrilled with how it works. The fact that all of your commands are bounced off of Amazon’s servers instead of staying internal to the network is an absolute no-go for the more privacy-minded among us, and honestly, it’s hard to blame them. The whole thing is pretty creepy when you think about it.

Which is precisely why [André Hentschel] decided to look into replacing the firmware on his Amazon Echo with an open source alternative. The Linux-powered first-generation Echo had been rooted years before thanks to the diagnostic port on the bottom of the device, and there were even a few firmware images floating around out there that he could poke around in. In theory, all he had to do was remove anything that called back to the Amazon servers and replace the proprietary bits with comparable free software libraries and tools.

Tapping into the Echo’s debug port.

Of course, it ended up being a little trickier than that. The original Echo runs a 2.6.x-series Linux kernel which, even for a device released in 2014, is painfully outdated. With its similarly archaic version of glibc, newer Linux software would refuse to run. [André] found that building an up-to-date filesystem image for the Echo wasn’t a problem, but getting the niche device’s hardware working on a more modern kernel was another story.

He eventually got the microphone array working, but not the onboard digital signal processor (DSP). Without the DSP, the age of the Echo’s hardware really started to show, and it was clear the seven-year-old smart speaker would need some help to get the job done.

The solution [André] came up with is not unlike how the device worked originally: the Echo performs wake word detection locally, but then offloads the actual speech processing to a more powerful computer. Except in this case, the other computer is on the same network and not hidden away in Amazon’s cloud. The Porcupine project provides the wake word detection, speech samples are broken down into actionable intents with voice2json, and the responses are delivered by the venerable eSpeak speech synthesizer.
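Glued together, a pipeline like that is smaller than you might expect. Here’s a rough Python sketch of the same flow, not [André]’s actual implementation: Porcupine watches for the wake word, voice2json’s command-line tools (assuming an installed, trained profile) turn a short recording into an intent, and eSpeak talks back. The access key, recording length, and intent handling are all placeholders.

```python
import json
import subprocess
import pvporcupine
from pvrecorder import PvRecorder

porcupine = pvporcupine.create(access_key="YOUR_KEY",  # newer SDKs need a key
                               keywords=["porcupine"])
recorder = PvRecorder(frame_length=porcupine.frame_length)
recorder.start()

while True:
    if porcupine.process(recorder.read()) >= 0:        # wake word detected
        recorder.stop()
        # Grab a short command, then pipe it through voice2json's recognizer.
        subprocess.run(["arecord", "-d", "3", "-f", "S16_LE", "-r", "16000",
                        "/tmp/cmd.wav"], check=True)
        out = subprocess.run(
            "voice2json transcribe-wav < /tmp/cmd.wav"
            " | voice2json recognize-intent",
            shell=True, capture_output=True, text=True).stdout
        intent = json.loads(out).get("intent", {}).get("name", "nothing")
        subprocess.run(["espeak", f"I heard the intent {intent}"])
        recorder.start()
```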

As you can see in the video below, the overall experience is pretty similar to stock, complete with fancy LED ring action. In fact, since Porcupine allows for multiple wake words, you could even argue that the usability has been improved. While [André] says adding support for Mycroft would be a logical expansion, his immediate goal is to get everything documented and available on the project’s GitLab repository so others can start experimenting for themselves.

Continue reading “Amazon Echo Gets Open Source Brain Transplant”

WebSerial: Browser Based Development For Your Boards And Electronic Badges

For years, one of the most accessible and simplest-to-implement methods of talking to a dev board has been to give it a serial port. Almost everything has serial in some form, so all that’s needed is to fire up a terminal. But even with that simplicity, there are still moments when the end user might find a terminal interface a little daunting. Think of a board aimed at kids, for example, or an event badge which must be accessible to as many people as possible.

We’ve seen a very convenient solution to this problem in the form of WebUSB, but for devices without the appropriate USB hardware there’s WebSerial, an in-browser API for communicating with serial ports, including USB-to-serial chips. [Tom Clement] argues that this could serve as the way forward for event badges. Best of all, it can be retrofitted to enable in-browser development for older badges or dev boards with a serial port.

The boards on which he demonstrates the technique are the series of event badges running the badge.team firmware platform, including his own i-Pane from CampZone 2019 and going right back to the SHA 2017 badge, but there’s no reason the same technique can’t be extended to other boards.
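WebSerial itself is a JavaScript API, but the traffic it carries is ordinary serial data, and badge.team badges typically expose a MicroPython REPL over that port. For a feel of the conversation the browser is having, here’s the desktop-side equivalent with pyserial; the port name and baud rate are assumptions for a typical badge.

```python
import serial

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    port.write(b"\r\x03")                  # Ctrl-C: interrupt to get a REPL prompt
    port.write(b"print('hello badge')\r")  # run a line of MicroPython on the badge
    print(port.read(200).decode(errors="replace"))
```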

There’s a snag with all this, though: sadly, only browsers in the Chrome family support it at the time of writing, with no plans from Mozilla or Apple and silence from Microsoft, so things look likely to stay that way. It is, however, inevitable that in time there will be commercial products taking advantage of it via cheap USB-to-serial chips, so perhaps the case for incorporating it will make itself.

Header: Mobius, Public domain.

An Algorithm For Art: Thread Portraits

We’ve all been there — through the magic of the internet, you see someone else’s stunning project and you just have to replicate it. For [Jenny Ma], that project was computer-generated string art, as in the computer figures out the best nail order to replicate a given image, and you lay out the thread yourself.

So, how does it work? Although a few algorithms are out there already, [Jenny] wanted to make her own using Python. Essentially, it crops the image into a circle and then lays out evenly spaced software nails around the circumference. The algorithm starts from a random nail and picks the next one by drawing a candidate line to every other nail and choosing whichever passes over the darkest stretch of the image. It repeats this one chord at a time, subtracting each drawn thread from the original image until every pixel has been accounted for, and then spits out an ordered list of nail numbers.
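In condensed form, that greedy loop looks something like the sketch below. This is our own take rather than [Jenny]’s script: it assumes a square grayscale image already cropped to the circle, and the nail count, chord count, and per-thread darkness subtracted are placeholder values.

```python
import numpy as np
from PIL import Image
from skimage.draw import line as chord_pixels   # pixel coords along a segment

N_NAILS, N_CHORDS, INK = 298, 3000, 40
# Invert so darker image areas become larger values ("remaining darkness").
img = 255 - np.asarray(Image.open("portrait.png").convert("L"), dtype=np.float64)
h, w = img.shape
cx, cy, r = w / 2, h / 2, min(w, h) / 2 - 1

theta = np.linspace(0, 2 * np.pi, N_NAILS, endpoint=False)
nails = np.stack([cy + r * np.sin(theta), cx + r * np.cos(theta)]).T.astype(int)

order = [0]
for _ in range(N_CHORDS):
    a, best, best_score, best_px = order[-1], None, -1.0, None
    for b in range(N_NAILS):
        if b == a:
            continue
        rr, cc = chord_pixels(*nails[a], *nails[b])
        score = img[rr, cc].mean()          # darkest chord = highest mean
        if score > best_score:
            best, best_score, best_px = b, score, (rr, cc)
    img[best_px] = np.maximum(img[best_px] - INK, 0)   # subtract the thread
    order.append(best)

print(order[:20], "...")                    # the nail sequence to string
```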

Once the software was ready, [Jenny] made a wood canvas that’s 80 cm (31.5″) in diameter and started laying out the nail hole locations. There wasn’t quite enough room for 300 nails, so instead of starting over, [Jenny] changed the algorithm to use 298 nails and re-ran it.

[Jenny] does a great job of discussing the many variables at play in this hardware representation of software-created art. The most obvious, of course, is that more nails means higher resolution, but she determined that 300 is the sweet spot; beyond that, the resolution doesn’t really improve. We have to wonder if 360 nails would make things any easier. Check out the build video after the break.

Want to cut out most of the manual labor altogether? Build yourself a string art machine.

Continue reading “An Algorithm For Art: Thread Portraits”