Comparing X86 And 68000 In An FPGA

[Michael Kohn] started programming on the Motorola 68000 architecture and then, for work reasons, moved over to the Intel x86 and was not exactly pleased by what he saw as the latter chip's shortcomings. In the '80s, the 68000 was a very popular chip, powering everything from personal computers to arcade machines, and given its clean architecture and ease of programming, it's easy to see why.

Fast-forward a few years, and [Michael] decided to implement both cores in an FPGA and compare them running a real application, you know, for science. As an extra bonus, he also compares the performance of a minimal RISC-V implementation on the same hardware, taken from an earlier RISC-V project (which you should also check out!)

Utilizing his 'Java Grinder' application (also pretty awesome, especially the retro console support), [Michael] used a simple Mandelbrot fractal generator as a non-trivial workload, producing binaries for each architecture and timing the results. Unsurprisingly for CISC architectures, the 68000 and x86 code sizes were practically identical and significantly smaller than the equivalent RISC-V binary. But looking at the execution times, the 68000 beat the x86 hands down, with the newer RISC-V speeding along to take pole position. [Michael] admits that these implementations are minimal, with no pipelining, so they could all be sped up a little.

Also, it’s not a totally fair race. As you’ll note from the RISC-V implementation, there was a custom RISC-V instruction implemented to perform the Mandelbrot generator’s iterator. This computes the complex operation Z = Z2 + C, which, as fellow fractal nerds will know, is where a Mandelbrot generator spends nearly all the compute time. We suspect that’s the real reason RISC-V came out on top.

If actual hardware is more your cup of tea, you could build a minimal 68k system pretty easily, provided you can find the chips. The current ubiquitous x86 architecture, as odd as it started out, is here to stay for the foreseeable future, so you’d just better get comfortable with it!

Continue reading “Comparing X86 And 68000 In An FPGA”

CH32V003 Makes For Dirt Cheap RISC-V Computer

These days, when most folks think of a computer they imagine a machine with multiple CPUs, several gigabytes of RAM, and a few terabytes of non-volatile storage for good measure. With such modern expectations, it can be easy to write off something like a microcontroller as little more than a toy. But if said MCU has a keyboard, is hooked up to a display, and lets you run basic productivity and development software, doesn't that qualify it as a computer? It certainly would have in the 1980s.

With that in mind, [Olimex] has teased the RVPC, which they're calling the "world lowest cost Open Source Hardware All-in-one educational RISC-V computer" (say that three times fast). The tiny board features the SOIC-8 variant of the CH32V003 and…well, not a whole lot else. You've got a handful of passives, a buzzer, an LED, and the connectors for a PS/2 keyboard, a power supply, and a VGA display. The idea is to offer this as a beginner's soldering kit in the future, so most of the components are through-hole.

On the software side, the post references things like the ch32v003fun development stack and the PicoRVD programmer as examples of open source tools that can get your CH32V computer up and running. There's even a selection of retro-style games out there that would be playable on the platform. But what [Olimex] really has their eye on is a port of VMON, a RISC-V monitor program.

When paired with the 320×200 VGA text mode that they figure the hardware is capable of, you’ve got yourself the makings of an educational tool that would be great for learning assembly and playing around with bare metal programming.
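As a taste of the kind of bare metal exercise the RVPC invites, here's a sketch of reading one scan code from that PS/2 keyboard. The protocol is pleasantly simple: the keyboard drives the clock, and each frame is a start bit, eight data bits (LSB first), an odd parity bit, and a stop bit, sampled on falling clock edges. The GPIO helpers below are hypothetical placeholders for whatever your setup provides:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical GPIO helpers -- substitute whatever your MCU offers. */
    extern bool ps2_clock_high(void);
    extern bool ps2_data_high(void);

    /* Blocking read of one PS/2 frame. Returns the scan code,
     * or -1 on a framing or parity error. */
    int ps2_read_scancode(void)
    {
        uint16_t frame = 0;

        for (int bit = 0; bit < 11; bit++) {
            while (ps2_clock_high())  ;   /* wait for the falling edge   */
            if (ps2_data_high())
                frame |= (1u << bit);     /* sample the data line        */
            while (!ps2_clock_high()) ;   /* wait for clock to go high   */
        }

        uint8_t data  = (frame >> 1) & 0xFF;
        bool start_ok = !(frame & 0x001);     /* start bit must be 0 */
        bool stop_ok  =  (frame & 0x400);     /* stop bit must be 1  */

        /* Odd parity: data bits plus parity bit hold an odd count of 1s. */
        uint8_t ones = (frame >> 9) & 1;
        for (uint8_t d = data; d; d >>= 1)
            ones += d & 1;

        return (start_ok && stop_ok && (ones & 1)) ? data : -1;
    }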

It might not have the timeless style of the Voja4, but at least you can fit it in a normal-sized pocket.

Thanks to [PPJ] for the tip.

256-Core RISC-V Megacluster

Supercomputers are always an impressive sight to behold, but also completely unobtainable for the ordinary person. But what if that wasn’t the case? [bitluni] shows us how it’s done with his 256-core RISC-V megacluster.

While the CH32V family of microcontrollers it's based on isn't nearly as powerful as what you'd traditionally find in a supercomputer, [bitluni] does use them to demonstrate a property of supercomputers: many, many cores doing the same task in parallel.

To recap our previous coverage, a single "supercluster" is made from 16 CH32V003 microcontrollers connected to each other with an 8-bit bus, with an LED on each and the remaining pins going to an I/O expander. The megacluster is in turn made from 16 of these superclusters, placed in pairs on 8 "blades", with a CH32V203 per supercluster as a bridge between the supercluster and the main 8-bit bus of the megacluster, which is controlled by one last CH32V203.

[bitluni] goes into detail about designing PCBs that break KiCad and managing an overcrowded bus with 16 participants, culminating in a mesmerizing showcase of blinking LEDs which shows that RC oscillators aren't all that accurate.
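For a rough idea of what managing such a bus involves, here's a sketch of one way a controller might poll 16 nodes sharing an 8-bit bus. To be clear, this is a generic illustration with made-up primitives, not [bitluni]'s actual protocol:

    #include <stdint.h>

    /* Hypothetical bus primitives -- stand-ins for whatever strobing
     * scheme the real hardware uses. */
    extern void bus_write(uint8_t byte);   /* drive the shared 8-bit bus    */
    extern void select_node(uint8_t id);   /* assert one node's chip-select */
    extern void deselect_all(void);

    /* One controller-driven round: with 16 nodes on a single bus, only one
     * may be addressed at a time, so the controller polls them in turn.
     * Every extra participant loads the bus and stretches the round, which
     * is one reason an overcrowded bus is hard to keep fast. */
    void broadcast_frame(const uint8_t frame[16])
    {
        for (uint8_t id = 0; id < 16; id++) {
            select_node(id);
            bus_write(frame[id]);   /* e.g. the LED state for this node */
            deselect_all();
        }
    }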

Continue reading “256-Core RISC-V Megacluster”

Google Removes RISC-V Support From Android

Last year, the introduction of RISC-V support to the Android-specific, Linux-derived Android Common Kernel (ACK) made it seem that before long Android devices might be using SoCs based around the RISC-V ISA, but those hopes now appear dashed. As reported by Android Authority, a series of recently accepted patches has stripped that RISC-V support from the ACK again. This doesn't mean that Android cannot be made to work on RISC-V, but any company interested would have to do all of the heavy lifting themselves. That might include Qualcomm, with its recently announced RISC-V-based smartwatch Snapdragon SoC.

Google provided no reason for the change, and its official statement to Android Authority says only that it is not ready to provide a single supported Android Generic Kernel Image (GKI), but that 'Android will continue to support RISC-V'. This change, however, removes RISC-V kernel support from the ACK, and since Google only certifies Android builds which ship with a GKI built from the ACK, it effectively means that RISC-V is unsupported at this point, and likely will remain so for the foreseeable future.

As discussed on Hacker News, a potential reason might be the highly fragmented nature of the RISC-V ISA, which makes a standard RISC-V kernel very complicated to maintain if you want to support anything more than a barebones profile. This is also supported by a RISC-V mailing list thread, where 'expensive maintenance' is given as the reason Google doesn't want to support RISC-V.

Chip Mystery: The Case Of The Purloined Pin

Let’s face it — electronics are hard. Difficult concepts, tiny parts, inscrutable datasheets, and a hundred other factors make it easy to screw up in new and exciting ways. Sometimes the Magic Smoke is released, but more often things just don’t work even though they absolutely should, and no amount of banging your head on the bench seems to change things.

It’s at times like this that one questions their sanity, as [Gili Yankovitch] probably did when he discovered that not all CH32V003s are created equal. In an attempt to recreate the Linux-on-a-microcontroller project, [Gili] decided to go with the A4M6 variant of the dirt-cheap RISC-V microcontroller. This variant lives in a SOP16 package, which makes soldering a bit easier than either of the 20-pin versions, which come in either QFN or TSSOP packages.

Wisely checking the datasheet before proceeding, [Gili] was surprised and alarmed to find that the clock line for the SPI interface didn't appear to be bonded out to a pin. Not believing his eyes, he turned to the ultimate source of truth and knowledge, where pretty much everyone came to the same conclusion: the vendor done screwed up.

Now, is this a bug, or is this a feature? Opinions will vary, of course. We assume that the company will claim it’s intentional to provide only two of the three pins needed to support a critical interface, while every end user who gets tripped up by this will certainly consider it a mistake. But forewarned is forearmed, as they say, and hats off to [Gili] for taking one for the team and letting the community know.
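And if you do get tripped up by it, the classic workaround applies: bit-bang SPI on whatever GPIOs did make it out of the package. Here's a minimal mode-0 sketch, with hypothetical pin helpers standing in for your HAL of choice:

    #include <stdint.h>

    /* Hypothetical GPIO helpers -- map these onto any three free pins. */
    extern void sck_write(int level);
    extern void mosi_write(int level);
    extern int  miso_read(void);

    /* Software SPI, mode 0 (clock idles low, data sampled on the rising
     * edge), MSB first. Slower than a hardware peripheral, but it needs
     * nothing more than ordinary GPIOs. */
    uint8_t spi_transfer(uint8_t out)
    {
        uint8_t in = 0;
        for (int bit = 7; bit >= 0; bit--) {
            mosi_write((out >> bit) & 1);  /* present the next output bit  */
            sck_write(1);                  /* rising edge: both sides sample */
            in = (in << 1) | (miso_read() & 1);
            sck_write(0);                  /* falling edge: shift           */
        }
        return in;
    }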

Espressif’s ESP32-P4 Application Processor: Details Begin To Emerge

Every now and then a part comes along which is hotly anticipated, but which its manufacturer understandably remains tight-lipped about in order to preserve maximum impact at launch. Right now that's Espressif's ESP32-P4: a powerful application processor pairing a dual-core 400 MHz RISC-V CPU with a single low-power 40 MHz RISC-V core. Interestingly, it doesn't appear to have the radios which have been a feature of previous ESP parts, but it makes up for that with a much more comprehensive array of peripherals.

Some details are beginning to emerge, whether from leaks or in preparation for launch, including the first signs of support in their JTAG tool and a glimpse of a development board in a video from another Chinese company. We got our hopes up a little when we saw the P4 appearing in some Espressif documentation, but on closer examination there's nothing there yet about the interesting new peripherals.

Looking at the dev board and the video, we can see some of what the thing is capable of as it drives a large touchscreen and a camera. There are two MIPI DSI/CSI ports on the PCB, as well as three USB ports and a sound codec. A more run-of-the-mill ESP32-C3 is present, we think to provide wireless networking, and there's a fourth USB port which we are fairly certain is only for serial communications, via what our best blurry-photograph reading tells us is a Silicon Labs USB-to-serial chip. Finally, there's a large Raspberry Pi-style header which appears to carry all the GPIOs and other pins. We've placed the video below the break; if you see anything we've missed, please tell us in the comments.

We first covered this chip back in January, and then as now we’re looking forward to seeing what our community does with it.

Continue reading “Espressif’s ESP32-P4 Application Processor: Details Begin To Emerge”

Why X86 Needs To Die

As I’m sure many of you know, x86 architecture has been around for quite some time. It has its roots in Intel’s early 8086 processor, the first in the family. Indeed, even the original 8086 inherits a small amount of architectural structure from Intel’s 8-bit predecessors, dating all the way back to the 8008. But the 8086 evolved into the 186, 286, 386, 486, and then they got names: Pentium would have been the 586.

Along the way, new instructions were added, but the core of the x86 instruction set was retained. And a lot of effort was spent making the same instructions faster and faster. This has become so extreme that, even though the 8086 and modern Xeon processors can both run a common subset of code, the two CPUs architecturally look about as far apart as they possibly could.

So here we are today, with even the highest-end x86 CPUs still supporting the archaic 8086 real mode, where the CPU can address memory directly, without any redirection. Having this level of backwards compatibility can cause problems, especially with respect to multitasking and memory protection, but it was a feature of previous chips, so it’s a feature of current x86 designs. And there’s more!
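For those who've never had the pleasure, real-mode addressing really is that simple: the CPU shifts a 16-bit segment register left by four bits and adds a 16-bit offset, producing a 20-bit physical address with no translation or protection anywhere in the path. A quick illustration (the sketch is ours):

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real mode: physical = segment * 16 + offset. No MMU involved. */
    static uint32_t real_mode_address(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* The classic text-mode video buffer at B800:0000 -> 0xB8000. */
        printf("0x%05X\n", real_mode_address(0xB800, 0x0000));
        /* A different segment:offset pair aliasing the very same byte. */
        printf("0x%05X\n", real_mode_address(0xB000, 0x8000));
        return 0;
    }

Both calls print 0xB8000: many segment:offset pairs alias the same physical byte, and nothing stops any program from scribbling on any of them.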

I think it’s time to put a lot of the legacy of the 8086 to rest, and let the modern processors run free. Continue reading “Why X86 Needs To Die”