Minecraft In Minecraft On The CHUNGUS II

Minecraft is a simple video game. Well, it’s a simple video game that also has within it the ability to create all of the logic components that you’d need to build a computer. And building CPUs in Minecraft is by now a long-standing tradition.

Enter CHUNGUS II. The Computational Humongous Unconventional Number and Graphics Unit by [Sammyuri] is the biggest and baddest Minecraft computer that we’ve ever seen. So big, in fact, that it was finally reasonable to think about porting a stripped-down version of Minecraft to the computer itself. Yes, that’s right, Minecraft running in Minecraft. (Video embedded below.) Writing the compiler and programming the game brought two more hackers to the party, [Uwerta] and [StackDoubleFlow], and quite honestly, we’re amazed that a team as small as three people pulled this off.

Anyway, once you’ve picked your jaw up off the floor, also check out [Sammyuri]’s video on just the CHUNGUS II computer itself. (Also embedded below.) Seeing the architecture is interesting, even if you don’t speak Redstone as fluently as our heroes here. We love that the assembler creates a block of ROM – out of Minecraft blocks – that you can then cut/paste into the game’s reality.

For a “simple” game about breaking blocks and punching trees, Minecraft has inspired hackers to make the game better both inside and outside of the game itself. For instance, for the latest in performant open-source Minecraft servers, check out Folia. Maybe, one day, they’ll build CHUNGUS II in the real world. It could happen.

Thanks [dbcdr] for the tip!


8086 Multiply Algorithm Gets Reverse Engineered

The 8086 has been around since 1978, so it’s pretty well understood. As the namesake of the prevalent x86 architecture, it’s often studied by those looking to learn more about microprocessors in general. To this end, [Ken Shirriff] set about reverse engineering the 8086’s multiplication algorithm.

[Ken]’s reverse engineering relies on die photos of the 8086 chip. Taken under a microscope, they can be used to map out the various functional blocks of the microprocessor. The multiplication algorithm can be nutted out by looking at the arithmetic/logic unit, or ALU. However, it’s also important to understand the role that microcode plays. Even as far back as 1978, designers were using microcode to simplify the control logic used in microprocessors.

[Ken] breaks down his investigation into manageable chunks, exploring how the chip achieves both 8-bit and 16-bit multiplication in detail. He covers how the numbers make their way through various instructions and registers to come out with the right result in the end.
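
For readers who want the general flavor, the classic approach in microcoded CPUs of this era is a shift-and-add loop: one conditional add per multiplier bit. The Python sketch below is a textbook illustration of that idea under that assumption, not a transcription of the 8086’s actual microcode, which also has to handle signed variants and flag updates.

```python
def mul8(al, src):
    """Multiply two unsigned 8-bit values the shift-and-add way,
    returning a 16-bit product (what the 8086 would leave in AX)."""
    assert 0 <= al <= 0xFF and 0 <= src <= 0xFF
    product = 0
    multiplicand = src
    for _ in range(8):          # one ALU pass per multiplier bit
        if al & 1:              # low bit set: add the weighted multiplicand
            product += multiplicand
        al >>= 1                # consume one multiplier bit
        multiplicand <<= 1      # weight the next addition by two
    return product & 0xFFFF

# e.g. 0x12 * 0x34 == 0x03A8
assert mul8(0x12, 0x34) == 0x12 * 0x34
```

A 16-bit multiply is the same loop run for sixteen bits instead of eight, which is exactly why the microcoded version takes roughly twice as long.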

It’s a fun look at what’s going on at the ground level in a chip that’s been around since before the personal computer revolution. For any budding chip designers, it’s a great academic exercise to follow along at home. If you’ve been doing your own digging deep into CPU architectures, don’t hesitate to drop us a line!

The 10 Kinds Of Programmers That Use Calcutron-33

It is interesting how, if you observe long enough, things tend to be cyclical. Back in the old days, some computers didn’t use binary; they used decimal. This was especially true of made-up educational computers like TUTAC or CARDIAC, but there was real decimal hardware out there, too. Then everyone decided that binary made much more sense, and now it’s very hard to find a computer that doesn’t use it.

But [Erik] has written a simulator, assembler, and debugger for Calcutron-33, a “decimal RISC” CPU. Why? The idea is to provide a teaching platform to explain assembly language concepts to people who might otherwise stumble over binary numbers. Once they understand Calcutron, they can move on to more conventional CPUs with some measure of confidence.

To that end, there are several articles covering the basic architecture, the instruction set, and how to write assembly for the machine. The CPU has much in common with modern microprocessors other than the use of decimal throughout.
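
To get a feel for what “decimal throughout” means, imagine a machine whose instructions are just a few decimal digits, CARDIAC-style: one opcode digit plus a two-digit address. The Python toy below is purely our own illustration of that style, with made-up opcodes; it is not Calcutron-33’s actual instruction set.

```python
# A toy decimal machine in the spirit of CARDIAC -- hypothetical opcodes,
# purely illustrative, not the real Calcutron-33 ISA.
memory = [0] * 100     # 100 words of decimal memory
acc = 0                # accumulator
pc = 0                 # program counter

def step():
    """Execute one 3-digit instruction: an opcode digit and a 2-digit address."""
    global acc, pc
    opcode, addr = divmod(memory[pc], 100)   # e.g. 341 -> opcode 3, address 41
    pc += 1
    if opcode == 0:   return False           # HLT: stop the machine
    if opcode == 1:   acc = memory[addr]     # LDA: load accumulator
    elif opcode == 2: memory[addr] = acc     # STA: store accumulator
    elif opcode == 3: acc += memory[addr]    # ADD
    elif opcode == 9: pc = addr              # JMP
    return True

# Tiny program: memory[42] = memory[40] + memory[41]
memory[40], memory[41] = 12, 30
memory[0:4] = [140, 341, 242, 0]             # LDA 40, ADD 41, STA 42, HLT
while step():
    pass
assert memory[42] == 42
```

The point of the exercise is that every value a student inspects in the debugger reads naturally in base ten, so the mechanics of loads, stores, and jumps can be learned without a detour through binary.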

There have been several versions of the virtual machine with various improvements and bug fixes. We’ll be honest: we admire the work and its scope. However, if you already know about binary, this might not be your best bet. What’s more, maybe you should understand binary before tackling assembly language programming, at least in modern times. Still, it does cover a lot of ground that applies regardless.

Made-up computers like TUTAC and CARDIAC were all the rage when computer time was too expensive to waste on mere students. There was also MIX from computer legend Donald Knuth.

Ask Hackaday: When It Comes To Processors, How Far Back Can You Go?

When it was recently announced that the Linux kernel might drop support for the Intel 486 line of chips, we took a look at the state of the 486 world. You can’t buy them from Intel anymore, but you can buy clones, which are apparently still used in embedded devices. But that made us think: if you can’t buy a genuine 486, what other old CPUs are still in production, and which is the oldest?

Defining A Few Rules

[Image: An Intel 4004 microprocessor in ceramic packaging. The daddy of them all, 1971’s Intel 4004 went out of production in 1981. Credit: Thomas Nguyen, CC BY-SA 4.0]

There are a few obvious contenders that immediately come to mind: both the 6502 from 1975 and the Z80 from 1976, for example, are still readily available. Some other old silicon survives in the form of cores incorporated into other chips; the venerable Intel 8051 microcontroller may have shuffled off this mortal coil as a 40-pin DIP years ago, but it is happily housekeeping the activities of many far more modern devices today. Still further, there’s the fascinating world of specialist obsolete-parts manufacturing, in which a production run of unobtainable silicon can be created specially for an extremely well-heeled customer. Should Uncle Sam ever need a crate of the Intel 8080 from 1974, for example, Rochester Electronics can oblige.


Home-Built CPU Runs With Home-Built Toolchain

A few years ago, [Takaya Saeki] and fellow students at the University of Tokyo were given a very limited instruction during their ‘CPU exercise’ class, along the lines of:

Take this ray-tracing program written in OCaml and run it on your CPU implemented on an FPGA

Splitting into groups to cover the CPU, FPU, simulator tool, and compiler toolchain, the students started by designing a RISC ISA, then designed a CPU around it. You can follow along with the retrospective writeup of the class, then dive into the GitHub pages for each of the components of the system, although the commentary is mainly in Japanese. Hey, you can Google Translate, right?

Exploring Texas Instruments’ Forgotten CPU

Texas Instruments isn’t the name you usually hear associated with the first microprocessor. But the TI TMX 1795 was a chip with the same architecture as Intel’s 8008, produced months before the 8008 itself. It was never available commercially, though, so it has been largely forgotten. But not by [Ken Shirriff]. You can see a 2015 demo of the device in the video below, too.

The reason the chips share the same architecture is that both were built to replace the same large circuit board inside a Datapoint 2200 programmable terminal. These were big beasts that could be programmed in BASIC or PL/B.

Datapoint had asked Intel to shrink the board down to a chip because of heating problems, but after delays, Datapoint instead replaced the power supply and lost interest in the device. TI heard about the affair and wanted in on the deal. However, Datapoint was unimpressed: the chip didn’t tolerate voltage fluctuations very well, and since the new power supply was already in place and a faster CPU design was in the works, there was little reason to switch. They were also unimpressed with how much extra circuitry you had to add to get a complete system.

So why did the Intel 8008 succeed in the marketplace when the TI chip didn’t? After all, Datapoint decided not to use the 8008 either. But as [Ken] points out, the 8008 was much smaller than the TI chip and thus more cost-effective to produce.

As usual, [Ken]’s posts are always interesting and enlightening. He’s looked at a lot of old computers. He’s even dug into old space hardware. Great stuff!


The MOS 7600 Video Game Chip Gives Up Its Secrets

A good chip decapping and reverse engineering is always going to capture our interest, and when it comes from [Ken Shirriff] we know it’s going to be a particularly good one. This time he’s directed his attention to the MOS 7600 all-in-one video game chip (Nitter), a mostly forgotten device from the 6502 chipmaker which we featured a few weeks ago when it was the subject of a blogger’s curiosity. The question then was whether it contained a microprocessor at all, and even whether it was another 6502 variant; the decapping answers that question, though the answer will disappoint the 6502 camp.

On the chip is a mixture of analog and digital circuitry, with some elements of a more traditional game chip alongside a ROM, a PLA, and a serial CPU core. The PLA stores pixel data while the ROM stores the CPU code, and the CPU performs the calculations the games themselves need. He hasn’t fully reverse-engineered either of them, but the two areas of the chip are mask-programmed to produce the different games the chip could be found running.
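
If “serial CPU core” sounds unusual, the idea is that data trickles through the ALU one bit per clock instead of a whole word at a time, trading speed for a tiny amount of logic, a handy property on a cost-sensitive game chip. Here is a minimal bit-serial adder sketch of our own to illustrate the principle; it isn’t lifted from the 7600 die.

```python
def serial_add(a, b, width=8):
    """Add two words one bit per 'clock', the way a bit-serial ALU would:
    a single full adder is reused every cycle, with the carry held over."""
    carry = 0
    result = 0
    for i in range(width):                          # one bit position per cycle
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry                   # sum bit from the full adder
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
        result |= s << i
    return result & ((1 << width) - 1)

assert serial_add(0x5A, 0x3C) == (0x5A + 0x3C) & 0xFF
```

A parallel adder needs one full adder per bit; the serial version needs just one plus a flip-flop for the carry, at the cost of a cycle per bit.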

So the answer to the original question is that there is a CPU on board, but it’s not a 6502, and the operation is a hybrid between a dedicated game chip and a CPU-controlled one. What we find interesting is that this serial CPU core might, as we mused in the previous piece, have made the heart of a usable 1970s microcontroller. Was this a missed opportunity on the part of MOS? We’ll never know, but at least another piece of early video game history has been uncovered.