A Dis-Integrated 6502

The 6502 is the classic CPU. It's found in the original Apple, the Apple II, the PET, the Commodore 64, the BBC Micro, the Atari 2600 and 800, the original Nintendo Entertainment System, Tamagotchis, and Bender Bending Rodriguez. This was the chip that started the microcomputer revolution, and it holds a special place in the heart of every nerd and technophile. The 6502 is also possibly the most-studied processor ever, with die shots of its polysilicon and metal found in VLSI textbooks and numerous simulators available online.

The only thing we haven't seen, until now, is a version of the 6502 built out of discrete transistors. That's what [Eric Schlaepfer] has been working on over the past year. It's huge, measuring 12 inches by 15 inches, it packs over four thousand individual components, and so far, it works. It's not completely tested yet, but the preliminary results look good.

The MOnSter 6502 began as a thought experiment between [Eric] and [Windell Oskay], the guy behind Evil Mad Scientist; [Eric] also designed the discrete 555 and dis-integrated 741 kits sold there. After realizing that a few thousand transistors could fit on a single panel, [Eric] grabbed the netlist of the 6502 from Visual6502.org. With the help of several scripts, he placed 4,304 components into a board design, and the 6502 was made dis-integrated. If you're building a CPU out of discrete components, it only makes sense to add a bunch of LEDs, so [Eric] threw a few onto the data and address lines.

This is the NMOS version of the 6502, not the later, improved CMOS version. As such, this version of the 6502 doesn't have all the instructions some programs would expect. The NMOS version is also slower, more prone to noise, and not a static CPU, meaning its clock can't be stopped or slowed arbitrarily without the chip losing its state.

So far, the CPU is not completely tested, and [Eric] doesn't expect it to run faster than a few hundred kilohertz anyway. That means this gigantic CPU can't simply be dropped into an Apple II or a Commodore 64; those machines expect their CPU to run at a specific clock speed. It will, however, work in a custom development board.

Will the gigantic 6502 ever be for sale? That's undetermined, but given the interest this project is sure to receive, it seems like a foregone conclusion.

Hackaday Prize Entry: You Can Do Anything With A Bunch Of NANDs

Every few years, someone on the Internet builds a truly homebrew CPU. Not one built around a 6502, Z80, or some other CPU from the 80s, either: one built completely out of 74-series logic chips or discrete transistors. We're lucky enough to have [Alexander] document his build on Hackaday.io, and even luckier to have him enter it into this year's Hackaday Prize. It's an 8-bit computer built completely out of NAND gates.

Computers are just logic, and with enough NAND gates, you can do anything. That's exactly what [Alex] is doing with this computer. It's built entirely out of 74F00 chips, a 'fast' version of the ubiquitous quad two-input NAND chip. The architecture of this computer borrows from the best CPUs of the 70s and 80s. The ALU is only four bits wide, like the Z80's, and it also uses the 6502 technique of treating the borrow as an inverted carry. It has a small instruction set and a two-stage pipeline, and it should be able to compute one million instructions per second.
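If the inverted-carry trick sounds odd, here's a minimal sketch in C of the 6502-style convention (the function name and harness are ours, for illustration): subtraction is performed as addition of the one's complement, so a set carry after the operation means no borrow occurred.

```c
#include <stdint.h>
#include <stdio.h>

/* 6502-style subtract-with-borrow: A - M - (1 - C), computed as
 * A + ~M + C, exactly the trick a simple adder-based ALU uses. */
static uint8_t sbc(uint8_t a, uint8_t m, int *carry)
{
    unsigned sum = (unsigned)a + (uint8_t)~m + (*carry ? 1 : 0);
    *carry = sum > 0xFF;        /* carry set = no borrow occurred */
    return (uint8_t)sum;
}

int main(void)
{
    int carry = 1;              /* set carry first: no pending borrow */
    uint8_t r = sbc(0x10, 0x01, &carry);
    printf("0x10 - 0x01 = 0x%02X, carry=%d (no borrow)\n", r, carry);

    carry = 1;
    r = sbc(0x01, 0x10, &carry);
    printf("0x01 - 0x10 = 0x%02X, carry=%d (borrow)\n", r, carry);
    return 0;
}
```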

Designing a CPU is one thing, and thanks to Logisim, that part is already done. Constructing a CPU is another matter entirely. For this, [Alex] is going for a module-and-backplane approach, where the ALU is constructed from a few identical modules tied together into a gigantic motherboard. [Alex] isn't stopping at a CPU, either: he has a 16-byte ROM that's programmed by plugging diodes into holes.
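A diode ROM like that is about as simple as memory gets: a diode plugged into a hole at a given row and bit position stores a one there. A hypothetical C model (the contents below are invented purely for illustration) looks like this:

```c
#include <stdint.h>
#include <stdio.h>

/* 16 bytes, one per row of holes; each set bit = one plugged diode */
static const uint8_t rom[16] = {
    0x01, 0x02, 0x04, 0x08,
    0x10, 0x20, 0x40, 0x80,
    0xFF, 0x00, 0xAA, 0x55,
    0x0F, 0xF0, 0xC3, 0x3C,
};

int main(void)
{
    for (unsigned addr = 0; addr < 16; addr++)  /* 4 address bits */
        printf("addr %2u -> 0x%02X\n", addr, rom[addr]);
    return 0;
}
```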

It's an amazingly ambitious project, and by entering it into the 2016 Hackaday Prize, [Alex] has already netted himself $1000 and a trip to the final round of competition.

Crawl, Walk, Run: A Starter CPU

Last time, I talked about getting started with CPU design by looking at older designs before trying to tackle a more modern architecture. In particular, I recommended Caxton Foster's Blue, even though (or maybe because) it was presented in schematic form. Although the schematics are easy to understand, Blue does use a few dated constructs, and you probably ought to build your take on the design using your choice of VHDL or Verilog.

In my case, my choice was Verilog. You can find my implementation of Blue on Opencores.org. I made quite a few changes to Foster’s original design. For example, armed with semiconductor memory, I managed to get all instructions to operate in one major cycle (which is, of course, 8 minor cycles). I also modernized the clock generation and added some resources and instructions.
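To make the timing relationship concrete, here's a trivial C sketch of that scheme, assuming nothing beyond what's described above: a free-running counter that wraps every eight minor cycles, with each wrap marking one complete instruction (the names are illustrative, not taken from my Verilog).

```c
#include <stdio.h>

#define MINOR_PER_MAJOR 8   /* one major cycle = eight minor cycles */

int main(void)
{
    unsigned minor = 0, major = 0;

    for (int tick = 0; tick < 24; tick++) {   /* simulate 24 clock ticks */
        printf("tick %2d: major %u, minor %u\n", tick, major, minor);
        if (++minor == MINOR_PER_MAJOR) {     /* wrap: next instruction */
            minor = 0;
            major++;
        }
    }
    return 0;
}
```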

Moore’s Law is Over (Again)

According to this article in Nature, Moore’s Law is officially done. And bears poop in the woods.

Note when the time axis ends…

There was a time, a few years back, when the constant exponential growth in the number of transistors packed into an IC was taken for granted: every two years, a doubling in density. After all, it was a "law" proposed by [Gordon E. Moore], co-founder of Intel. Less a law than a production goal for a silicon manufacturer, it proved to be a very useful marketing gimmick.

Rumors of the death of Moore's Law stir up every couple of years, and then Intel figures out a way to pack things even more densely. But lately, even Intel has admitted that the pace of miniaturization has to slow down. And now we have confirmation in Nature: the cost of Intel continuing its historic rate of miniaturization now outweighs the benefit.

We’ve already gotten used to CPU speed increases slowing way down in the name of energy efficiency, so this isn’t totally new territory. Do we even care if the Moore’s-law rate slows down by 50%? How small do our ICs need to be?

Graph by [Wgsimon] via Wikipedia.

8-bit Computer Made Solely From NAND Gates

When you're an electronics rookie, one of the first things they tell you about logic gates is, "You can make everything from a combination of NAND gates." There usually follows a demonstration of simple AND, OR, and XOR gates made from NAND gates, and maybe a flip-flop or two. Then you move on; when you want a logic function, you use the relevant device that contains it, and the nugget of information about NAND gates recedes to become just another part of your general electronics knowledge.
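That textbook demonstration is easy to replay in software. Here's a quick C sketch of NAND universality, building NOT, AND, OR, and the classic four-gate XOR out of nothing but a two-input NAND:

```c
#include <stdio.h>

static int nand(int a, int b) { return !(a && b); }

static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }

static int xor_(int a, int b) /* the classic four-NAND XOR */
{
    int n = nand(a, b);
    return nand(nand(a, n), nand(b, n));
}

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  AND=%d OR=%d XOR=%d\n",
                   a, b, and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}
```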

Not [Alexander Shabarshin], though. He's set himself the task of creating an entire CPU solely from NAND gates, and he's using 74F00 chips to reach a hoped-for 1 MIPS performance. His design has an 8-bit data bus but a 4-bit ALU, plus an impressive two-stage pipeline and RISC instruction set that set it apart from the computers most of us had when 74-series logic was a much more recent innovation. So far, he has completed PCBs for a D-type flip-flop and a one-bit ALU, four of which will work in parallel in the final machine.

Unsurprisingly, we have maintained a keen interest in TTL computers here at Hackaday for a very long time. You might say we have featured so many that the subject deserves a review article of its own. There is the ASAP-3, the Magic-1, the Duo Basic, the Apollo181, the unnamed CPU made by [Donn Stewart], the BMOW, and a clone of the Apollo Guidance Computer. But what sets [Alexander's] project apart from all these fine machines is his bare-metal NAND-only design. The other 74-series CPU designers had the full range of devices, such as the 74181 ALU, at their disposal. By studying the building blocks at this most fundamental level, a deeper understanding can be gained of the inner workings of parts normally represented as just black boxes.

One of the briefs for writing a Hackaday article is that if the subject makes the writer stop and read rather than skim over it, it is likely to do the same for the reader. This project may not yet have delivered a working CPU, but its progress so far makes for an interesting in-depth read. Definitely one to watch.

Reverse Engineering The iPhone’s Ancestor

By all accounts, the ARM architecture should be a forgotten footnote in the history of computing. What began as a custom coprocessor for a computer developed for the BBC could easily have met the same fate as National Semiconductor's NS32000 series, HP's PA-RISC series, or Intel's iAPX series of microprocessors. Despite these humble beginnings, the first ARM processor has found its way into nearly every cell phone on the planet, as well as tablets, set-top boxes, and routers. What made the first ARM processor special? [Ken Shirriff] posted a bit on this ancestor of the iPhone.

The first ARM processor was inspired by a few research papers from Berkeley and Stanford on Reduced Instruction Set Computing, or RISC. Unlike the Intel 80386, which came out the same year as the ARM1, the ARM had only a tenth the number of transistors, used one-twentieth of the power, and implemented only a handful of instructions. The idea was that a smaller, simpler instruction set would lead to a faster overall processor.

That doesn't mean there isn't interesting hardware on the first ARM processor; for that, you only need to look at this ARM visualization. In terms of silicon area, the largest parts of the ARM1 are the register file and the barrel shifter, each of which serves a very important function in this CPU.

The first ARM chip makes heavy use of registers: all 25 of them, holding 32 bits each. Each bit of a single register consists of two read transistors, one write transistor, and two inverters. This memory cell is repeated 32 times vertically and 25 times horizontally.
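In software terms, those two read transistors and one write transistor per cell add up to a register file with two read ports and one write port. A minimal C model of that behavior might look like this (the names are ours, not taken from the chip):

```c
#include <stdint.h>

#define NUM_REGS 25  /* the ARM1 has 25 registers of 32 bits each */

typedef struct {
    uint32_t reg[NUM_REGS];
} regfile_t;

/* Two values can be read in the same cycle (two read ports)... */
static void regfile_read(const regfile_t *rf, unsigned ra, unsigned rb,
                         uint32_t *a, uint32_t *b)
{
    *a = rf->reg[ra];
    *b = rf->reg[rb];
}

/* ...while one value is written per cycle (a single write port). */
static void regfile_write(regfile_t *rf, unsigned rd, uint32_t value)
{
    rf->reg[rd] = value;
}
```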

The next-largest component of the ARM1 is the barrel shifter. This is just a device that allows a binary argument to be shifted left or right, or rotated, by any amount up to 31 bits. The barrel shifter is constructed from a 32 by 32 grid of transistors. The gates of these transistors are connected by diagonal control lines, and by activating the right diagonal, any argument can be shifted or rotated in a single pass.
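Functionally, that grid of transistors computes what these few lines of C describe (a behavioral sketch, not the chip's actual logic):

```c
#include <stdint.h>

/* Logical shift left/right and rotate right by 0-31 bits: the operations
 * a barrel shifter applies to an operand in a single pass. */
static uint32_t lsl(uint32_t x, unsigned n) { return n ? x << (n & 31) : x; }
static uint32_t lsr(uint32_t x, unsigned n) { return n ? x >> (n & 31) : x; }

static uint32_t ror(uint32_t x, unsigned n)
{
    n &= 31;
    return n ? (x >> n) | (x << (32 - n)) : x;  /* rotate right by n bits */
}
```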

In modern terms, the ARM1 is a fantastically simple chip. For one reason or another, though, this chip would become the grandparent of billions of devices manufactured this year.

Exponential Growth In Linear Time: The End Of Moore’s Law

Moore's Law states that the number of transistors on an integrated circuit will double about every two years. This law, coined by Intel and Fairchild co-founder [Gordon Moore], has been a truism since its introduction in 1965. From the introduction of the Intel 4004 in 1971, through the Pentiums of 1993, to the Skylake processors introduced last month, the law has mostly held true.
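A quick back-of-the-envelope check in C shows what that doubling implies, starting from the 4004's roughly 2,300 transistors (the figures are approximate and purely illustrative):

```c
#include <stdio.h>

int main(void)
{
    double transistors = 2300.0;            /* Intel 4004, 1971 (approx.) */

    for (int year = 1971; year <= 2015; year += 2) {
        printf("%d: ~%.0f transistors\n", year, transistors);
        transistors *= 2.0;                 /* double every two years */
    }
    return 0;
}
```

Twenty-two doublings later, that puts a 2015 chip near ten billion transistors, which is in the same ballpark as the biggest dies of that year.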

The law, however, promises exponential growth in linear time, a promise that is ultimately unsustainable. This is not an article that considers the future roadblocks that will end [Moore]'s observation, but one that argues the expectations of Moore's Law have already ended. It ended quietly, sometime around 2005, and we will never again see transistor density, processor speed, graphics card capability, and memory density double every two years.
