In the maker world, it’s the Arduino and ESP32 lines that get the lion’s share of attention. However, you can do fantastic things with PIC chips, too, if you put the dev time in—it’s just perhaps less likely another maker has done so before you. A great example is this VGA output project from [grecotron].
A PIC18F47K42 is perhaps not the first part you would reach for to pursue any sort of video-based project. However, with the right techniques, you can get the 8-bit microcontroller pumping out pixels surprisingly well. [grecotron] was able to get the chip outputting to a VGA monitor at a resolution of 360 x 480 with up to 16 colors. It took some careful coding to ensure the chip could reliably meet the timing requirements of the standard and to get HSYNC, VSYNC, and the color signals all dancing in harmony. Helping matters, the chip is clocked from a 14.3182 MHz crystal, which makes it easy to divide the internal timers down to the frequencies needed. Supporting hardware is light, too—primarily consisting of a VGA connector, a couple of multiplexers, and resistor ladder DACs for the color signals. Files are on GitHub for those interested in deeper detail on the work.
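As a rough sanity check of why that crystal choice works out, here's a quick back-of-the-envelope calculation. It assumes the standard VGA 640 x 480 @ 60 Hz line rate (which a 480-line mode shares); the project's actual divider values aren't specified here:

```python
# Why a 14.3182 MHz clock divides neatly into VGA timing.
# Standard 640x480@60 VGA: 25.175 MHz pixel clock, 800 pixel
# clocks per scanline, giving the ~31.469 kHz HSYNC rate.
PIXEL_CLOCK_HZ = 25_175_000
CLOCKS_PER_LINE = 800
LINE_RATE_HZ = PIXEL_CLOCK_HZ / CLOCKS_PER_LINE  # ~31.469 kHz

CRYSTAL_HZ = 14_318_180  # 4x the NTSC colorburst frequency

cycles_per_line = CRYSTAL_HZ / LINE_RATE_HZ
print(f"{cycles_per_line:.2f} crystal cycles per scanline")  # ~455
```

At almost exactly 455 crystal cycles per scanline, a timer can count out each line with essentially no accumulated drift, which is what makes meeting HSYNC timing on a modest 8-bit part tractable.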
VGA output is possible to implement on all kinds of microcontrollers—and even with a bunch of raw logic if you know what you’re doing. If you’re pursuing your own video output wizardry, be sure to let us know on the tips line.
Digital clock projects have been with us since the 1970s, when affordable LEDs and integrated circuits became available. In 2026 most of them use a microcontroller, but for the AliExpress fans there’s one that goes straight back to the ’70s with a pile of logic chips. You can make it on the supplied PCBs, but that wasn’t for [ALTco]. Instead, he made the circuit in free form, using six metres of brass wire.
The construction is anchored together by a set of busbars that carry sockets for a set of seven-segment and driver modules. The circuit is typical for the day, with a crystal oscillator and divider chain feeding the counters for the displays. There are a few chip-count-reducing tricks that older engineers might recognize, though in this case the savings are negated by an extra set of circuitry allowing the time to be set from a rotary encoder.
We’re impressed by the intricacy of the device, made bit by bit without a plan, as some wires thread their way between others. It’s a truly beautiful piece, and it reminds us of our circuit sculpture contest back in 2020.
This week Jonathan chats with Milo Schwartz about Pangolin, the open source tunneling solution. Why do we need something other than WireGuard, and how does Pangolin fix IoT and IT problems? And most importantly, how do you run your own self-hosted Pangolin install? Watch to find out!
Digital Convergence Corporation is hardly a household name, and there’s a good reason for that. However, it raised about $185 million in investments around the year 2000 from companies such as Coca-Cola, Radio Shack, GE, E. W. Scripps, and the media giant Belo Corporation. So what did all these companies want, and why didn’t it catch on? If you are old enough, you might remember the :CueCat, but you probably thought it was Radio Shack’s disaster. They were simply investors.
The Big Idea
The :CueCat was a barcode scanner that usually plugged into a PC’s keyboard port (in those days, that was normally a PS/2 port). A special cable, often called a wedge, acted like a Y-cable, allowing you to use your keyboard and the scanner on the same port. The scanner looked like a cat, of course.
However, the :CueCat was not just a generic barcode scanner. It was made to only scan “cues” which were to appear in catalogs, newspapers, and other publications. The idea was that you’d see something in an ad or a catalog, rush to your computer to scan the barcode, and be transported to the retailer’s website to learn more and complete the purchase.
The software could also listen using your sound card for special audio codes that would play on radio or TV commercials and then automatically pop up the associated webpage. So, a piece of software that was reading your keyboard, listening to your room audio at all times, and could inject keystrokes into your computer. What could go wrong?
Tech has a problem, an e-waste problem. Google is a common offender here, creating a product only to end support a couple of years later. Thankfully, Google left some lasting capabilities in its defunct Stadia controllers, and after hearing about them, [Bringus Studios] managed to turn this future e-waste into something new: a Bluetooth adapter for game controllers.
To give some credit to Google, once they announced the Stadia program was winding down, they released an updated firmware that let you use the controller as a generic Bluetooth gamepad. But there was also a rather unusual feature added — if another controller is connected to it via USB, its output will be passed along over Bluetooth as if it was coming from the Stadia controller itself.
This would allow you to wirelessly connect an Xbox 360 or PlayStation 3 controller to your computer, for example. But while a neat trick, having the two controllers plugged into each other is a bit awkward. So [Bringus Studios] decided to take the Stadia controller apart and turn it into a dedicated Bluetooth interface.
The Linux world is currently seeing an explosion in new users, thanks in large part to Microsoft turning its Windows operating system into the most intrusive piece of spyware in modern computing. For those who value privacy and security, Linux has long been the safe haven where there’s reasonable certainty that the operating system itself isn’t harvesting user data or otherwise snooping where it shouldn’t be. Yet even after solving the OS problem, a deeper issue remains: the hardware itself. Since around 2008, virtually every Intel and AMD processor has included coprocessors running closed-source code known as the Intel Management Engine (IME) or AMD Platform Security Processor (PSP).
M1 MacBook Air, now with more freedom
These components operate entirely outside the user’s and operating system’s control. They are given privileged access to memory, storage, and networking and can retain that access even when the CPU is not running, creating systemic vulnerabilities that cannot be fully mitigated by software alone. One practical approach to minimizing exposure to opaque management subsystems like the IME or PSP is to use platforms that do not use x86 hardware in the first place. Perhaps surprisingly, the ARM-based Apple M1 and M2 computers offer a compelling option, providing a more constrained and clearly defined trust model for Linux users who prioritize privacy and security.
Before getting into why Apple Silicon can be appealing for those with this concern, we first need to address the elephant in the room: Apple’s proprietary, closed-source operating system. Luckily, the Asahi Linux project has done most of the heavy lifting for those with certain Apple Silicon machines who want to go more open-source. In fact, Asahi is one of the easiest Linux installs to perform today even when compared to beginner-friendly distributions like Mint or Fedora, provided you are using fully supported M1 or M2 machines rather than attempting an install on newer, less-supported models. The installer runs as a script within macOS, eliminating the need to image a USB stick. Once the script is executed, the user simply follows the prompts, restarts the computer, and boots into the new Linux environment. Privacy-conscious users may also want to take a few optional steps, such as verifying the Asahi checksum and encrypting the installation with LUKS, but these steps are not too challenging for experienced users.
Black Boxes
Changing the operating system on modern computers is the easy part, though. The hard part is determining exactly how much trust should be placed in the underlying hardware and firmware of any given system, and then deciding what to do to make improvements. This is where Apple Silicon starts to make a compelling case compared to modern x86 machines. Rather than consolidating a wide range of low-level functionality into a highly privileged black box like the IME or PSP, Apple splits these responsibilities more narrowly, with components like the Secure Enclave focusing on specific security functions instead of being given broad system access.
Like many modern systems, Apple computers include a dedicated security coprocessor alongside the main CPU, known as the Secure Enclave Processor (SEP). It runs a minimal, hardened operating system called sepOS and is isolated from the rest of the system. Its primary roles include securely storing encryption keys, handling sensitive authentication data, and performing cryptographic operations. This separation helps ensure that even if the main operating system is compromised, secrets managed by the SEP remain protected.
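The design principle at work here—keys live inside the enclave, and only operations on them are exposed—can be illustrated with a toy sketch. This is purely conceptual (sepOS's real interface is not public), using an HMAC stand-in for the SEP's cryptographic operations:

```python
import hashlib
import hmac
import os

class ToySecureEnclave:
    """Conceptual stand-in for a secure element: the key is
    generated inside and never handed out; callers only get
    operations performed *with* it."""

    def __init__(self):
        self._key = os.urandom(32)  # never exposed to callers

    def sign(self, message: bytes) -> bytes:
        # Perform the cryptographic operation internally.
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), tag)

enclave = ToySecureEnclave()
tag = enclave.sign(b"unlock request")
print(enclave.verify(b"unlock request", tag))    # True
print(enclave.verify(b"tampered request", tag))  # False
```

Even if an attacker fully controls the code calling into the enclave, the most they can do is ask it to sign or verify things; the key material itself stays out of reach.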
The Chain of Trust
To boot an Apple Silicon computer, a “chain of trust” is followed in a series of steps, with each step verifying the next before handing over control. This is outlined in more detail in Apple’s documentation, but it starts with an immutable boot ROM embedded in the system-on-chip during manufacturing. The boot ROM verifies the early boot stages, including the low-level bootloader and iBoot, which in turn authenticate and verify the operating system kernel and system image before completing the boot process. If any of these verification steps fail, the system halts booting to prevent unauthorized or compromised code from executing.
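The structure of such a chain can be sketched in a few lines. Note this is a simplified model for illustration—the real boot chain verifies cryptographic signatures against Apple's keys, not bare hashes, and the stage names are stand-ins:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-ins for each stage's image; in real hardware these
# would be signed binaries.
boot_rom = b"immutable ROM code"
llb      = b"low-level bootloader"
iboot    = b"iBoot"
kernel   = b"OS kernel + system image"

# Each stage carries the expected digest of the *next* stage.
expected = {
    boot_rom: digest(llb),
    llb: digest(iboot),
    iboot: digest(kernel),
}

def boot(stages):
    # Walk the chain: every stage checks its successor before
    # handing over control.
    for current, nxt in zip(stages, stages[1:]):
        if digest(nxt) != expected[current]:
            return "halt: verification failed"
    return "booted"

print(boot([boot_rom, llb, iboot, kernel]))             # booted
print(boot([boot_rom, llb, iboot, b"tampered kernel"])) # halt
```

Because the root of the chain is burned into ROM at manufacturing, an attacker can't simply swap in a modified bootloader: any change downstream fails verification upstream.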
Perhaps obvious at this point is that Apple doesn’t sign Asahi Linux images. But rather than allowing unrestricted execution like many PCs, or fully locking down the device like a smartphone, Apple’s approach takes a middle way. They rely on another critical piece of “security hardware” required to authorize that third-party OS: a human user. The Asahi Linux documentation discusses this in depth, but Apple’s secure boot system allows the owner of the computer to explicitly authorize additional operating systems by creating a custom boot policy within the user-approved trust chain. In practice, this means that the integrity of the boot process is still enforced, but the user ultimately decides what is trusted. If a boot component is modified outside of this trust chain, the system will refuse to execute it. In contrast to this system, where secure boot is enforced by default and only relaxed through explicit user action, x86 systems can treat these protections as optional. A motivated x86 user can achieve a comparable level of security, but they must assemble and maintain it themselves, as well as figure it out in the first place.
Reducing the Attack Surface
The limited scope of Apple’s Secure Enclave gives it a much smaller attack surface compared to something like the Intel Management Engine. As mentioned before, the IME combines a wider range of functionality, including features designed for low-level remote system management. This broader scope increases its complexity and, by extension, its attack surface, which has led to several high-profile vulnerabilities. Apple’s Secure Enclave, by contrast, is designed with a much narrower focus. That’s not to say it’s a perfect, invulnerable system since it’s also a closed-source black box, but its limited responsibilities inherently reduce that attack surface.
I’ll also note here that Apple is far from a perfect company. Their walled garden approach is inherently anti-consumer, and they’ve rightly taken some criticism for inflating hardware costs, deliberately making their computers difficult to repair, enforcing arbitrary divisions between different classes of products to encourage users to buy more devices, and maintaining a monopolistic and increasingly toxic app store.
But buying an M1 or M2 machine on the used market won’t directly give Apple any money, and beyond running the Asahi installer script, using one doesn’t require interacting with Apple’s software or ecosystem in any way. I’ve argued in the past that older Apple computers make excellent Linux machines for these reasons as well, and since the M1 and M2 machines eliminate the IME risk those older Intel-based models carry, they’re an even better proposition, even before considering the massive performance gains.
Ultimately, though, the best choice of hardware depends on one’s threat model and priorities. If the goal is to minimize exposure to IME/PSP-level risks while retaining semi-modern performance, an M1/M2 Mac with Asahi Linux is one of the best options available today. But if fully open hardware is non-negotiable, you’ll need to accept older or less powerful machines… for now.
Once upon a time, they told us we wouldn’t download a car, and they were wrong. Later, Zero Motorcycles stated in their FAQ that you cannot hack an electric motorcycle, a statement which [Persephone Karnstein] and collaborator [Mitchell Marasch] evidently took issue with. Not only can you hack an electric motorcycle, it is — in [Persephone]’s words — a security nightmare.
You should absolutely go over to [Persephone]’s website and check out the whole write-up, which is adapted from a talk given at BSides Seattle 2026. There’s simply way more detail than we can get into here. Everything from “what horridly toxic solvents would I need to unpot this PCB?” to the scripts used in decompiling and understanding the code is covered, and in a lively and readable style to boot. Even if you have no interest in security, or electric motorcycles, you should check it out.
The upshot is that not only were Zero Motorcycles wrong when they said their electric motorcycles could not be hacked, they were hilariously wrong. The problem isn’t the motorcycle alone: it has an app that talks to the electronics on the bike, which take over-the-air (OTA) updates. What about the code linked to the VIN alluded to in that screenshot? Well, it turns out you just need a code structured like a VIN, not an actual number. Oops. By the end of it, [Persephone] and [Mitchell] have taken absolute control of the bike’s firmware, and with it full control over all of its systems.
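The write-up doesn't spell out exactly which checks Zero's system applies, but standard 17-character VINs follow a public structure (ISO 3779 / 49 CFR 565): no I, O, or Q characters, and a check digit at position 9 computed from weighted character values. A sketch of that public algorithm—not Zero's actual validation logic—shows how easy it is to mint a string that passes structural checks:

```python
# Standard VIN check-digit calculation (ISO 3779). Letters map to
# numeric values (I, O, and Q are never used), each position has a
# weight, and the weighted sum mod 11 must match position 9
# ('X' stands for a remainder of 10).
TRANSLIT = dict(zip("ABCDEFGHJKLMNPRSTUVWXYZ",
                    [1, 2, 3, 4, 5, 6, 7, 8,      # A-H
                     1, 2, 3, 4, 5, 7, 9,         # J-N, P, R
                     2, 3, 4, 5, 6, 7, 8, 9]))    # S-Z
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def value(ch: str) -> int:
    return int(ch) if ch.isdigit() else TRANSLIT[ch]

def looks_like_vin(vin: str) -> bool:
    """True if the string is *structured* like a VIN -- it need not
    belong to any real vehicle."""
    if len(vin) != 17 or any(c in "IOQ" for c in vin):
        return False
    total = sum(value(c) * w for c, w in zip(vin, WEIGHTS))
    check = "X" if total % 11 == 10 else str(total % 11)
    return vin[8] == check

print(looks_like_vin("1M8GDM9AXKP042788"))  # True: valid structure
print(looks_like_vin("1M8GDM9AXKP042789"))  # False: check digit fails
```

If validation stops at structure like this, anyone can generate an unlimited supply of "valid" identifiers—which is why a well-formed string is no substitute for actually authenticating the device behind it.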
Why cut the brake lines when you can perform an OTA update that will do the same thing invisibly? And don’t think you can just reset the bike to factory settings to fix it: they thought of this, and the purely-conceptual, never-deployed malware has enough access to prevent that. Or they could just set the battery on fire. That was an option, too, because the battery management system gets OTA updates as well.
To be clear, we don’t have any problem with a motorcycle that’s dependent on electronics to operate. After all, we’ve seen many projects that would meet that definition over the years. But the difference is none of those projects fumbled the execution this badly. Even this 3 kW unicycle, which has a computer for balance control, doesn’t see the need to expose itself. It’s horribly unsafe in very different ways.