A Look At The Intel N100 Radxa X4 SBC

Recently Radxa released the X4, an SBC containing not only an N100 x86_64 SoC but also an RP2040 MCU connected to a Raspberry Pi-style GPIO header. The Intel N100 is one of a range of Alder Lake-N SoCs built around Intel's Gracemont efficiency cores — the same cores that serve as the 'E-cores' in Intel's desktop CPUs — with per-core performance often compared to 2015's Skylake. Being x86-based means the Radxa X4 can run Linux, Windows, and other OSes from either NVMe (PCIe 3.0 x4) or eMMC storage. After getting his hands on one of these SBCs, [Bret] couldn’t wait to take a gander at what it can do.

Installing Windows 11 and Debian 12 on a 500 GB NVMe (2230) SSD fitted to the X4 worked pretty much as you’d expect on an x86 system, with just a few missing drivers for the onboard Intel 2.5 Gbit Ethernet and WiFi depending on the OS — easily obtained from Intel’s site and installed. The board comes with an RTC battery and a full-featured AMI BIOS, as well as up to 16 GB of LPDDR5 RAM.

Using the system with the Radxa PoE+ HAT via the 2.5 Gbit Ethernet port also worked a treat once a decent PoE+ switch was used, even with the N100’s power limit raised from the default 6 W to 15 W. The RP2040 MCU on the mainboard is connected to the SoC via both USB 2.0 and UART, according to the board schematic. This means that all of the Raspberry Pi-style pins can be accessed from the N100, making it in many ways a more functional SBC than the Raspberry Pi 5, with a similar power envelope and price.
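Radxa’s stock RP2040 firmware isn’t detailed here, but as a rough illustration of the arrangement, a pico-sdk sketch along these lines could sit on the MCU side and toggle a header pin whenever the N100 sends a byte over the USB or UART link — the pin number and the one-byte “protocol” are invented for the example.

// Hypothetical RP2040-side sketch (pico-sdk): toggle one GPIO header pin
// whenever the host sends 't' over the stdio link (USB CDC and/or UART).
// The pin choice and command byte are made up for illustration only.
#include "pico/stdlib.h"

#define HEADER_PIN 2   // assumption: some GPIO routed out to the 40-pin header

int main(void) {
    stdio_init_all();                            // bring up stdio over USB/UART
    gpio_init(HEADER_PIN);
    gpio_set_dir(HEADER_PIN, GPIO_OUT);

    bool level = false;
    for (;;) {
        int c = getchar_timeout_us(10 * 1000);   // poll the host link
        if (c == 't') {                          // one-byte "toggle" command
            level = !level;
            gpio_put(HEADER_PIN, level);
        }
    }
}

On the Linux side the same link simply shows up as a serial device, so exercising the pin is a matter of writing a byte to it from a shell or a short program.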

At $80 USD before shipping for the 8 GB (no eMMC) version that [Bret] looked at, one might ask whether an N100-based mini PC could be just as competitive, although features like PoE+ support and the integrated RPi-compatible header are definite selling points for the X4.

Screenshot of the Kaby Lake CPU pinout next to the Coffee Lake CPU pinout, showing just how few differences there are

Intel’s Anti-Upgrade Tricks Defeated With Kapton Tape

If you own an Intel motherboard with a Z170 or Z270 chipset, you might believe that it only supports CPUs up to Intel’s 7th generation, known as Kaby Lake. The next generation even has a different socket pinout — we’re told a newer chip will fit the same physical socket, but it won’t boot. So if you want a newer CPU, you’ll have to buy a new motherboard while you’re at it. Or do you?

Turns out, the difference in the socket is just a few pins here and there, and you can make an 8th or 9th generation Coffee Lake CPU work on your Z170/Z270 board if you apply a few Kapton tape fixes and mod your BIOS, in a process known as the “Coffee Mod”. You can even preserve compatibility with the 6th/7th generation CPUs after doing this mod, should you ever need to go back to an older chip. Contrast this with AMD’s high degree of CPU support on even old Ryzen motherboards, and it’s as if Intel introduced this incompatibility intentionally.

There have been a number of posts on various PC forums and YouTube videos going through the process and showing off the tools used to modify the BIOS. Some mods are exceptionally easy to apply. For example, if you have the Asus Maximus VIII Ranger motherboard, a single jumper wire between two pads next to the EC will enable support without any Kapton tape at all — a mod that could likely be figured out for other similar motherboards too. There are a few aspects to keep in mind, like making sure your board’s VRMs are up to the new chip, and a little more patching may be needed for hyper-threading, but nothing too involved.

Between money-grab moves like this that hamper even the simplest of upgrades and increase e-waste, fun vulnerabilities, and an inability to sort out stability and power consumption issues, it’s reassuring to see users take back control over their platforms wherever possible — it brings us back to the days of modding Xeon CPUs to fit into LGA775 sockets.

Don’t get too excited though, as projects like Intel BootGuard are bound to hamper mods like this on newer generations by introducing digital signing for BIOS images, flying under the banner of user security yet again. Alas, it appears way more likely that Intel’s financial security is the culprit.


A System Board For The 8008

Intel processors, at least for PCs, are ubiquitous and have been for decades. Beyond the chips built by Intel itself, other companies including AMD and VIA have been building processors with the same instruction set for nearly as long. They’re so common that the shorthand “x86” is used for most of these processors, after Intel’s convention of naming them with an “-86” suffix since the 1970s. Not all Intel processors share this convention, though — you’ll have to go even further back in time to find one that doesn’t. [Mark] has brought one into the modern age and is showing off his system board for the 8008 processor.

The 8008 predates any x86 processor by about six years and was among the first mass-produced 8-bit processors, arriving even before the well-known 8080. The expansion from four bits to eight was massive for the time, and opened up a much wider range of applications for embedded systems and early personal computers. [Mark] goes into some of the details of programming these antique processors before demonstrating his system board. It gets power from a USB-C connection and uses a set of regulators and level shifters to make sure all the voltages match, while support for everything else the 8008 needs — including the system memory — is courtesy of an STM32.
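The write-up doesn’t reproduce the firmware, but the general idea of an MCU standing in for RAM is easy to sketch: watch the 8008’s multiplexed bus, latch the address over the first two bus states, then drive a byte back from an array in the STM32’s own memory. The fragment below is a heavily simplified, hypothetical illustration — the pin assignments, the missing cycle-type decoding, and the lack of real timing and bus-turnaround handling are all stand-ins, not [Mark]’s design.

// Hypothetical sketch of an STM32 serving memory to an 8008-style multiplexed
// bus. Pin mapping is invented (PA0-7 = bus in, PB0-7 = bus out via level
// shifters, PC0 = SYNC); cycle-type decoding and bus turnaround are omitted.
#include "stm32f4xx.h"   // assumes an STM32F4 part with CMSIS device headers

static uint8_t memory[16 * 1024];   // the 8008's 14-bit address space, held in STM32 RAM

static inline uint8_t bus_read(void)       { return (uint8_t)(GPIOA->IDR & 0xFF); }
static inline void    bus_drive(uint8_t v) { GPIOB->ODR = (GPIOB->ODR & ~0xFFu) | v; }

int main(void) {
    // Enable GPIO clocks and make PB0..PB7 outputs toward the data bus buffer.
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN | RCC_AHB1ENR_GPIOBEN | RCC_AHB1ENR_GPIOCEN;
    GPIOB->MODER |= 0x5555;                       // PB0..PB7 -> general-purpose outputs

    for (;;) {
        while (!(GPIOC->IDR & 1u)) { }            // wait for SYNC to mark a new cycle
        uint8_t low  = bus_read();                // T1: low eight address bits
        while (GPIOC->IDR & 1u) { }               // crude stand-in for real state tracking
        uint8_t high = bus_read();                // T2: upper address bits plus cycle type
        uint16_t addr = ((uint16_t)(high & 0x3F) << 8) | low;

        // A real design must decode the cycle type here (fetch/read/write/IO)
        // and only drive the bus during read cycles; this sketch just serves reads.
        bus_drive(memory[addr]);
    }
}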

For those looking to develop something like this, [Mark] has also published his development tools on a separate GitHub page. While it’s always a good idea for those interested in computer science to take a look at old processors like these, getting hold of original hardware isn’t always easy, and it carries the risk of letting the smoke out of delicate components. A much easier route is to spin up an emulator, like an 8086 IBM PC emulator on an ESP32. Want to see inside this old chip? Have a look.


How Does The Raspberry Pi Stack Up Against A Mini PC?

When the first Raspberry Pi came out back in 2012 it was groundbreaking, because it offered a usable little Linux machine with the proud boast of a $25 price tag. Sure, it wasn’t the fastest kid on the block, but there was almost nothing at that price which could do what it did. Three leap years later, though, it’s surrounded by a host of competitors with similar hardware, and its top-end model now costs several times that original list price.

Meanwhile the cost of a “real” x86 computer, such as one based on the Intel N100, has dropped to the point where it almost matches a fully tricked-out Pi with storage and peripherals. So does the Pi still hold its own? [CNX Software] has taken a look.

In the examples they use, the Intel machine is a little more expensive than the Pi in both cases, but has the advantage that the peripherals, cooling, and storage come built in rather than as add-ons. They rate the Pi as having the edge on expandability, as we’d expect, but the Intel machine as giving better bang for the buck in performance terms. From where we’re sitting, the advantage of the Pi over most of its ARM competition has always been its good OS support — something an x86 platform probably exceeds.

So, would you buy the Intel over the high-end Pi? Let us know in the comments.


How Intel Gave Us The PCI Bus While Burying VESA’s VL-Bus

Gigabyte GA486IM mainboard from 1994 with ISA, VLB and PCI slots. (Credit: Rjluna2, Wikimedia)

The early days of home computing were quite a jungle of different standards and convoluted solutions to make one piece of hardware work on as many different platforms as possible. IBM’s PC was an unexpected shift here, as with its expansion card-based system (retroactively called the ISA bus) it inspired a new evolution in computers. Of course, by the early 1990s the ISA bus couldn’t keep up with hardware demands, and a successor was needed. Many expected this to be VESA’s VLB, but as [Ernie Smith] regales us in a recent article in Tedium, Intel came out of left field with its PCI standard after initially backing VLB.

IBM, of course, wanted to see its own proprietary MCA standard used, while VLB was an open standard. One big issue with VLB is that it isn’t a new bus as such, but rather an additional slot tacked onto the existing ISA bus. The reasoning behind PCI was sound — a compact, 32-bit (and optionally 64-bit) design with plug-and-play support and a more complex but also more powerful bus controller — but its announcement came right before VLB was due to be announced.

Although there was some worry that having VLB and PCI competing in the market would be bad, ultimately few mainboards ended up supporting both, and VLB quietly vanished. Later on, PCI was extended into the Accelerated Graphics Port (AGP) that enabled the GPU revolution of the late 90s, and PCI itself still coexists with its PCIe successor. We covered making your own ISA and PCI cards a while ago, which shows that although PCI is more complex than ISA, it’s still well within the reach of today’s hobbyist — unlike PCIe, which ramps up the hardware requirements considerably.

Top image: PC AT mainboard with both 16-bit ISA and 32-bit PCI slots. (Credit: htomari, Flickr)

Two pictures of the mobo side by side, both with Kapton tape covering everything other than the flash chip. On the left, the flash chip is populated, whereas on the right it’s not.

Enabling Intel AMT For BIOS-over-WiFi

Intel ME, AMT, SMT, vPro… all of these acronyms are kind of intimidating — all we know about them is that they are tied to remote control technologies rooted deep in Intel CPUs, way deeper than even the operating system goes. Sometimes, though, you want remote control for your own purposes, and that’s what [ABy] achieved. He’s got an HP ProDesk 600 G3 Mini and decided to put it in a hard-to-reach spot in his flat, somewhere you couldn’t easily bring a monitor and keyboard for debugging. So he started looking into some sort of remote access option in case he ever needed to get into the BIOS, and went as far as it took to make it work. (Google Translate)

The features he needed are covered by Intel AMT — specifically, BIOS access over a WiFi connection. However, his mini PC only shipped with SMT enabled from the factory — the cut-down version of AMT that lacks features like wireless support. He figured out that dumping and modifying the BIOS was the way forward, promptly did just that, found a suitable set of tools for his ME region version, and enabled AMT using Intel’s FIT (Flash Image Tool) software.

Now, dumping the image can be done from a running system entirely in software, but apparently flashing it back requires an external programmer. He went with the classic CH341, did the 3.3 V voltmod that’s required to make it safe for use with flash chips, and proceeded to spend a good amount of time making it work. Something about the process was screwy, likely the proprietary CH341 software. Comments under the article point out that you should use flashrom for these tasks — and indeed, you should.

This article goes into a ton of detail when it comes to working with Intel BIOS images — whichever setting you want to change, be it AMT support or something entirely different but just as tasty, you will be well served by this write-up. Comments also point out that you might want to upgrade the Intel ME version while you’re at it, and for what it’s worth, you can look into disabling it too; we’ve shown you a multitude of reasons why you should, and a good few ways you could.

50-Year-Old Program Gets Speed Boost

At first glance, getting a computer program to run faster than the first electronic computers might seem trivial. After all, most of us carry enormously powerful processors in our pockets every day as if that’s normal. But [Mark] isn’t trying to beat computers like the ENIAC with a mobile ARM processor or other modern device. He’s programming with the 4040, the successor to Intel’s original microprocessor, the 4004 — and beating the ENIAC is still a little more complicated than you might think with a processor from 1974.

For this project, the goal was to best the 70-hour time set by ENIAC for computing the first 2035 digits of pi. There are a number of algorithms for performing this calculation, but a 4-bit processor and an extremely limited memory of only 1280 bytes rule most of them out, especially with the self-imposed time limit; the limited instruction set of these early processors is a potential bottleneck as well. Given those constraints, [Mark] settled on [Fabrice Bellard]’s formula. He goes into great detail about the mathematics behind the method before coding it in JavaScript, and found that generating assembly language from the working JavaScript implementation was fairly straightforward.
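For a feel of what the math looks like before any 4-bit heroics, here is a naive double-precision rendering of Bellard’s formula in C — just enough to watch it converge on a modern machine, and a far cry from the fixed-point, digit-extracting arithmetic a 1280-byte 4040 program actually needs.

// Toy evaluation of Bellard's formula for pi in plain doubles. This checks the
// math only; it cannot produce 2035 digits, which needs arbitrary-precision or
// digit-extraction arithmetic like the real implementation uses.
#include <stdio.h>
#include <math.h>

int main(void) {
    double pi = 0.0;
    for (int n = 0; n < 10; n++) {            // a handful of terms saturates a double
        double term = -32.0 / (4 * n + 1)
                      -  1.0 / (4 * n + 3)
                      + 256.0 / (10 * n + 1)
                      - 64.0 / (10 * n + 3)
                      -  4.0 / (10 * n + 5)
                      -  4.0 / (10 * n + 7)
                      +  1.0 / (10 * n + 9);
        pi += ((n & 1) ? -1.0 : 1.0) * term / pow(2.0, 10.0 * n);
    }
    pi /= 64.0;                               // the leading 1/2^6 factor
    printf("pi ~= %.15f\n", pi);              // prints 3.141592653589793
    return 0;
}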

[Mark] also had to do a lot of work on the 4040 side to get this program running, including upgrades to the 40xx tool stack — the compiler and linker — and to the emulator he uses to test his programs before sending them to physical hardware. The project is remarkably well documented, including all of the optimizations needed to get these antique processors running fast enough to beat the ENIAC. We won’t spoil the results for you, but as a hint of how it worked out: he started this project on the 4040 because his original attempt using a 4004 wasn’t quite fast enough.