Intel processors, at least for PCs, are ubiquitous and have been for decades. Even beyond the chips specifically built by Intel, other companies have used their instruction set to build chips, including AMD and VIA, for nearly as long. They’re so common the shorthand “x86” is used for most of these processors, after Intel’s convention of naming their processors with an “-86” suffix since the 1970s. Not all of their processors share this convention, though; you’ll have to go even further back in time to find one that doesn’t. [Mark] has brought one into the modern age and is showing off his system board for this 8008 processor.
The 8008 predates any x86 processor by about six years and was among the first mass-produced 8-bit processors even before the well-known 8080. The expansion from four bits to eight was massive for the time and allowed a much wider range of applications for embedded systems and early personal computers. [Mark] goes into some of the details for programming these antique processors before demonstrating his system board. It gets power from a USB-C connection and uses a set of regulators and level shifters to make sure the voltages all match. Support for all the functions the 8008 needs is courtesy of an STM32. That includes the system memory.
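To get a rough feel for what “the STM32 provides the system memory” means in practice, here’s a minimal, hypothetical sketch of an MCU serving memory accesses for an 8-bit CPU. The bus helper functions and the simplified, already-latched address are our own assumptions for illustration, not [Mark]’s actual firmware; the real 8008 multiplexes its 14-bit address over the 8-bit data bus across two bus states, which glue logic would have to handle.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch: an MCU backing a vintage 8-bit CPU with memory held
   in the MCU's own SRAM. The bus helpers below are invented stand-ins for
   whatever GPIO/HAL calls the real firmware uses. */

#define MEM_SIZE 16384u                        /* 14-bit address space = 16 KB */
static uint8_t memory[MEM_SIZE];

extern uint16_t read_latched_address(void);    /* address captured by glue logic */
extern bool     cpu_read_strobe(void);
extern bool     cpu_write_strobe(void);
extern uint8_t  read_data_bus(void);
extern void     drive_data_bus(uint8_t value);
extern void     release_data_bus(void);        /* tri-state when idle */

void memory_service_loop(void)
{
    for (;;) {
        if (cpu_read_strobe()) {
            drive_data_bus(memory[read_latched_address() % MEM_SIZE]);
        } else if (cpu_write_strobe()) {
            memory[read_latched_address() % MEM_SIZE] = read_data_bus();
            release_data_bus();
        } else {
            release_data_bus();
        }
    }
}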
For those looking to develop something like this, [Mark] has also added his development tools to a separate GitHub page. Although it’s always a good idea for those interested in computer science to take a look at old processors like these, it’s not always the easiest path to get original hardware like this, which also carries the risk of letting smoke out of delicate components. A much easier route is to spin up an emulator like an 8086 IBM PC emulator on an ESP32. Want to see inside this old chip? Have a look.
“The 8008 predates any x86 processor by about six years and was among the first mass-produced 8-bit processors even before the well-known 8080.”
The i8080 was a mistake, though. It was superseded early on by the Z80.
To be fair, the i8085 did serve a niche, though. Microcontrollers and such.
To build a real computer, the i8085 could be replaced by an NSC800 (mostly Z80 software compatible; it can run Turbo Pascal on CP/M-80).
It was an advanced design that related to the 8085 much like the NEC V20 (V30) related to the i8088 (i8086).
The i8008 is interesting, though, considering its primitive nature.
An address space of 16KB is about enough to fit a tiny version of Fortran or BASIC.
It’s also enough for storing a rudimentary terminal program, maybe.
Without many bells and whistles, though (excellent communications programs understand various protocols).
The NEC V20 could run 8080 instructions through a funny virtual mode. I wish I had known about it back when I had an XT upgraded with one. I think it might have been possible to run CP/M inside of MS-DOS that way. Or maybe it was designed more for running CP/M-80 apps inside of CP/M-86?
Do you realize that the Z80 wasn’t an Intel processor?
It was made by former Intel engineers, the “fathers” of the i8080, if you will.
They founded Zilog because the i8080’s development took a wrong direction.
The i8080 was never meant to be, depending on how we look at it.
About the only place I’ve crossed paths with an 8008 was a warehouse inventory barcode scanner I bought for a few dollars in the early 1980s at the Dayton Hamvention. The machine was in two shoulder bags. One bag was full of NiCd batteries and the other bag had the wand interface which recorded count data to an audio cassette drive as digital signals. The heart of the digital mechanism was the beloved 8008, which was in a very attractive white ceramic package with gold-plated cap and leads.
I should have saved the 8008 chip for posterity, but it is long departed to location unknown.
Speaking of anachronistic CPUs, I recently found a pile of old boards in one of my junk boxes which came out of gas pumps in the late 1970s. The CPU chip is a Fairchild F8. There are a couple of Fairchild support chips alongside, one of which appears to be a RAM/ROM chip and another which looks like a VIA similar to a MOS 6522.
The retroness is completed by several rows of RCA DIP-style Numitron seven-segment incandescent displays for the price, sale total, and gallons pumped.
It’d be fun to cobble up a project to use an F8. There might be some code kicking around in a corner of the Internet because Fairchild sold a home video console called “Channel F” featuring joysticks with a crazy number of degrees of freedom for the time – something like up, down, left, right, twist left, twist right, push, pull.
The history of the F8 is interesting.
If I understand correctly, it’s essentially based on construction plans stolen in Germany.
The full story can be read here:
https://www.cpushack.com/2013/06/08/cpu-of-the-day-fairchild-f8-microprocessor/
My first computer was the Mostek/Fairchild F8 Evaluation Kit with 1k RAM and 1k FAIRBUG monitor ROM on the PSU chip. That was early 1978 and I’ve been collecting the odd (literally) F8 bits ever since. I would really like to see your gas pump unit, I’d super appreciate it if you could put up some pictures and announce it on the VCFed forum.
F8 Tiny BASIC from Dr Dobbs exists, along with some other software apart from the Channel F. One of its cartridges was a tiny APL.
this project highlights the kind of futility i feel when i think about my own experiences with retrocomputing. it looks weird to have a DIP 8008 on a board with so many surface mount SOIC/QFP chips. and then, i see, they mopped up all of the ‘bridge’ sort of functionality with an stm32. that’s solid design, if you really want to use an 8008 today, but it hardly makes sense. an stm32 is so powerful, you could emulate the whole 8008 with it. and if you’re just using an emulated 8008, why not just run the emulator on your PC, your phone, write it in javascript and run it on any browser.
i have a couple authentic 486 machines — a classic HP omnibook and a barn-found desktop — and they aren’t really good for anything other than running classic software. and when you think about it, that stuff had pretty frustrating limitations even back in the day. for example, on the 486 i have to use a slowdown program to play a lot of videogames at the right speed, and that was true on my 286-12 as well. and on top of that, i wasn’t any better at beating the missions of the original x-wing at 35 than i was at 15.
you kind of come back to this basic question, what’s the point? and the only point i can come up with is served perfectly well with an emulator, which is, there is a certain magic to programming a dead-simple computer. having an assembly language that is so simple you can learn all the instructions at a glance and then enjoy the challenge of trying to get it to do anything remotely interesting. i don’t know if i’d ever bother at my age, though i often have the temptation to throw together an android app that just provides basically a hex editor for the memory image of a z80 or 6502 emulator. something to play with. but i often wonder how kids these days will learn anything without ever having a simple thing to mess around with.
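to make that concrete, here’s roughly what i have in mind: a tiny made-up 8-bit machine in plain C (not a real z80 or 6502, the opcodes are invented for illustration), just a fetch-decode-execute loop you could hang a hex editor off of.

#include <stdint.h>
#include <stdio.h>

/* a made-up toy 8-bit machine: one accumulator, 256 bytes of memory, five
   invented opcodes. an illustration, not an emulator of any real cpu. */

enum { OP_HALT = 0x00,
       OP_LDI  = 0x01,   /* LDI imm  : A = imm          */
       OP_ADDM = 0x02,   /* ADDM addr: A += mem[addr]   */
       OP_STA  = 0x03,   /* STA addr : mem[addr] = A    */
       OP_JNZ  = 0x04 }; /* JNZ addr : if (A) PC = addr */

static uint8_t mem[256];

static void run(void)
{
    uint8_t a = 0, pc = 0;
    for (;;) {
        switch (mem[pc++]) {
        case OP_LDI:  a = mem[pc++];                            break;
        case OP_ADDM: a += mem[mem[pc++]];                      break;
        case OP_STA:  mem[mem[pc++]] = a;                       break;
        case OP_JNZ:  { uint8_t t = mem[pc++]; if (a) pc = t; } break;
        case OP_HALT: default:                                  return;
        }
    }
}

int main(void)
{
    /* count down from 5 by repeatedly adding 0xFF (i.e. -1) until zero */
    uint8_t prog[] = { OP_LDI, 5, OP_ADDM, 0x20, OP_STA, 0x21, OP_JNZ, 2, OP_HALT };
    for (unsigned i = 0; i < sizeof prog; i++) mem[i] = prog[i];
    mem[0x20] = 0xFF;
    run();
    printf("mem[0x21] = %u\n", (unsigned)mem[0x21]);   /* prints 0 */
    return 0;
}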
“this project highlights the kind of futility i feel when i think about my own experiences with retrocomputing. it looks weird to have a DIP 8008 on a board with so many surface mount SOIC/QFP chips. and then, i see, they mopped up all of the ‘bridge’ sort of functionality with an stm32. that’s solid design, if you really want to use an 8008 today, but it hardly makes sense. an stm32 is so powerful, you could emulate the whole 8008 with it. and if you’re just using an emulated 8008, why not just run the emulator on your PC, your phone, write it in javascript and run it on any browser.”
I feel your pain. I often think the same about FPGAs.
To me, the main reason for tinkering with such vintage hardware is to interact with “living” hardware. Hardware that works by the laws of physics, rather than via rules set in software.
That’s why relays and discrete TTL circuits are so fascinating. No MCU with built-in software! No microcode!
Unfortunately, many younger people don’t get it. They argue about functionality and cost, rather than the magic that’s inside old tech.
Anyway, the STM32 likely acts as an interface or chipset for the sake of convenience, I suppose.
The use of SMD parts has to do with the fact that young people have become alienated from through-hole parts.
Using SMD is also simpler. You can place chips on the upper side all the time, and place components 1:1 as shown on the schematics.
So you don’t have to use your brain as often. You don’t need to turn the board around, physically or in your mind, anymore.
But what can we do? Explaining to current generations why using through-hole parts and sockets is sometimes better is very difficult. It’s like tilting at windmills.
Pretty sure I heard similar lamentations in the 1970s about kids THOSE days just lazily wiring up TTL ICs and never getting the kind of understanding that you get from tubes, where everything’s visible through glass.
yeah that’s why i think it might suffice for kids merely to have a bare metal programming experience, even if it’s not “the same”, just to have some understanding of that world that still exists under all our abstractions.
because i think i learned alright, simply knowing how you can build logic gates out of transistors, without having hardly done it myself. but i do think i’d really miss something if it was completely hidden to me, if i’d never considered how transistors make a gate, gates make a register and ALU, and how those add up to CPUs and so on. surely no one will ever again spend their teenage years reading the Ralf Brown interrupt list (my dad managed to buy it in the shape of a soft-cover textbook), as if it was a high level programming API. and good riddance. just a little toying around with an emulator — a contrived simple environment — can give someone a sense of how it must work.
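just to show how little it takes, here’s a sketch of the “gates make an adder” step in plain C, everything built from a single nand() primitive. a toy model of what you’d wire up with 7400s or discrete transistors, nothing authoritative.

#include <stdio.h>

/* everything here is built from one NAND primitive, the way you'd do it
   with 7400-series parts or discrete transistors */

static int nand(int a, int b) { return !(a && b); }

static int xor2(int a, int b)            /* the classic 4-NAND XOR */
{
    int n = nand(a, b);
    return nand(nand(a, n), nand(b, n));
}

/* 1-bit full adder: sum and carry-out from a, b and carry-in, NANDs only */
static void full_adder(int a, int b, int cin, int *sum, int *cout)
{
    int s1 = xor2(a, b);
    *sum  = xor2(s1, cin);
    *cout = nand(nand(a, b), nand(s1, cin));   /* (a AND b) OR (s1 AND cin) */
}

int main(void)
{
    /* chain four of them and you have a 4-bit adder, the seed of an ALU */
    int a = 11, b = 6, carry = 0, result = 0;
    for (int i = 0; i < 4; i++) {
        int s;
        full_adder((a >> i) & 1, (b >> i) & 1, carry, &s, &carry);
        result |= s << i;
    }
    printf("%d + %d = %d, carry %d\n", a, b, result, carry);  /* 11 + 6 = 1, carry 1 (i.e. 17) */
    return 0;
}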
Tubes are indeed better for teaching the principles, period.
To give an example, the electrons flow from cathode to anode, which resembles the natural flow of electricity.
By contrast, solid-state teaching material covering bipolar transistors comes up with that strange concept of “holes” that electrons fall into.
About TTL ICs… they could at least still be replaced by discrete circuits.
They saved space on the circuit board and as such had their purpose.
Functionally, they were still identical to discrete circuits.
Modern microcontrollers (Arduino/ATmega, ATtiny, PIC, ESP32, Blue Pill, etc.), microcode and FPGAs surely have their purpose, but they make vintage hardware superfluous.
They’re no longer the same technology anymore, the shared transistor basis aside.
They relate to TTL technology as much as Federation tech relates to Klingon tech. ;)
With due respect, I say “just no”. The magic happens when you turn on the switch and your game (or whatever) is off and running. Even better if it drives a monochrome, reflective STN display, or just an array of red LEDs. No pocket Unix derivative that either consumes power continuously or needs a full minute to load all the different layers of software needed to emulate the sort of hardware that was possible in the mid-70s. Bare bones all the way, for me. And I do get it, that it seems a bit silly to use an MCU hundreds of times more capable than your CPU, just to provide the support functions for that CPU, but I just cringe when I get to the inevitable “why not just run it in javascript in the browser?”
what magic? you’re playing with your browser today but how long has it been since you touched something older than 20 years? i’ve got this ~30 year old laptop sitting in my livingroom, even, and i touch it about once a year.
that’s what i’m getting at. i can’t find any magic in retrocomputing. the magic is in my memories. when it comes to sitting down and using it, it’s actually not that exciting. in practice, it’s all limits, just like my experience with the computers back then was.
“i’ve got this ~30 year old laptop sitting in my livingroom [..] that’s what i’m getting at. i can’t find any magic in retrocomputing.”
That’s not the point, my boy. That’s not the vintage tech we’re talking about; what you’ve got is just old tech.
What we’re talking about are digital circuits that work on logic, on things like NAND/OR gates.
Like, for example, building a digital clock from discrete parts without any software. Or an electronic dice without any software.
Such circuits work on physics and they sometimes behave unexpectedly, which makes things interesting.
Or let’s talk about old game consoles like the Atari VCS and NES.
The real hardware has its limits and quirks that have their charms.
You get flickery tiles and glitches for no apparent reason.
You end up in a different level for no reason if you die on a certain spot multiple times, etc.
There are so many little quirks and peculiarities in the real hardware that make things interesting.
It’s like interacting with something lively.
Anyway, I’m not sure if that message got through. I suppose you’d have to have grown up with the technology to get a feel for it.
Does anyone else like to look at old architectures and think “what if … it could address megabytes/gigabytes/terabytes of RAM?” or “… how hard would it be to build a version (or otherwise-faithful emulator) with more addressable memory?” For bonus points: “How much harder would it be if the new version had to be backward-compatible with the original?”
A few years ago I played around with an emulator of the Manchester Baby (1948) to increase its addressable memory. I never got any further than doubling the addressable RAM before I moved on to other things, but my early goal was to have an emulator-build-time option for up to 2^28 bits of addressable memory (the word size was 32; 3 bits were for the opcode). It was a fun thought experiment.
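For anyone curious, the core of such an experiment fits on a page. Here’s a from-memory sketch in C with the store size as a build-time knob; the instruction behaviour follows my recollection of the SSEM’s seven operations (operand address in the low bits, 3-bit function number in bits 13-15), so treat it as an illustration rather than a faithful reference.

#include <stdint.h>
#include <stdio.h>

/* Sketch of an SSEM ("Manchester Baby")-style emulator with the store size
   as a build-time knob. Growing the store past 8K words would mean moving
   the function field out of bits 13-15. */

#ifndef STORE_WORDS
#define STORE_WORDS 32u              /* original machine: 32 lines of 32 bits */
#endif

static int32_t store[STORE_WORDS];

static void run(void)
{
    int32_t  acc = 0;
    uint32_t ci  = 0;                         /* "control instruction" = PC */

    for (;;) {
        ci = (ci + 1) % STORE_WORDS;          /* CI increments before fetch */
        uint32_t word = (uint32_t)store[ci];
        uint32_t addr = word & (STORE_WORDS - 1);
        uint32_t fn   = (word >> 13) & 7u;

        switch (fn) {
        case 0: ci = (uint32_t)store[addr] % STORE_WORDS;        break; /* JMP     */
        case 1: ci = (ci + (uint32_t)store[addr]) % STORE_WORDS; break; /* JRP     */
        case 2: acc = -store[addr];                              break; /* LDN     */
        case 3: store[addr] = acc;                               break; /* STO     */
        case 4: case 5: acc -= store[addr];                      break; /* SUB     */
        case 6: if (acc < 0) ci = (ci + 1) % STORE_WORDS;        break; /* CMP/SKN */
        case 7: return;                                                 /* STP     */
        }
    }
}

int main(void)
{
    /* Toy program: load -store[4] into the accumulator, store it, stop. */
    store[1] = (2u << 13) | 4u;   /* LDN 4 : acc = -store[4]  */
    store[2] = (3u << 13) | 5u;   /* STO 5 : store[5] = acc   */
    store[3] = (7u << 13);        /* STP                      */
    store[4] = 7;
    run();
    printf("store[5] = %d\n", (int)store[5]);   /* prints -7 */
    return 0;
}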
Technically speaking, you can put 4 GB of addressable memory on a 6502.
Get two AY-8910s or 8930s,
a CPLD for the control signals (address decoding and a multiplexing LUT, since the PSG isn’t a 65xx-family chip),
and you get 32 bits of bidirectional GPIO total, 16 per chip, with the benefit that the AY-8910 uses a multiplexed data/address bus.
It just needs an address pointer and R/W: shove the address into the registers along with whatever you want to read or write.
And it’s capable of generating high-quality audio too.
So technically you could run a gig of RAM on an Atari 8-bit.
If you fit an 8910 instead of a second POKEY to get the extra GPIO bus… and you have the PBI, plus 2K of shitty math routines you can just throw out and use an ESP32 to do the job better in hardware, with 32- and 64-bit precision, much faster… 3D and isometric rendering for the Atari…
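Roughly, the software side would look something like this hypothetical C model of the register dance. Only the PSG’s I/O port registers (R14/R15, with direction set via R7) are real; the bus helpers, the CPLD data window and the whole memory map are invented for illustration.

#include <stdint.h>

/* Hypothetical model: two AY-8910 I/O ports per chip (registers R14/R15,
   direction set to output via R7) form a 32-bit address pointer into big
   external memory, with a CPLD decoding it and exposing one data byte. */

#define PSG_IO_PORT_A  14u      /* real AY-8910 register numbers */
#define PSG_IO_PORT_B  15u

/* invented bus helpers: on real hardware these are strobed writes via BDIR/BC1 */
extern void    psg_write(int chip, uint8_t reg, uint8_t value);
extern uint8_t cpld_window_read(void);            /* byte at the pointed address */
extern void    cpld_window_write(uint8_t value);

static void set_pointer(uint32_t addr)
{
    psg_write(0, PSG_IO_PORT_A, (uint8_t)(addr      ));   /* bits  0..7  */
    psg_write(0, PSG_IO_PORT_B, (uint8_t)(addr >>  8));   /* bits  8..15 */
    psg_write(1, PSG_IO_PORT_A, (uint8_t)(addr >> 16));   /* bits 16..23 */
    psg_write(1, PSG_IO_PORT_B, (uint8_t)(addr >> 24));   /* bits 24..31 */
}

uint8_t ext_read(uint32_t addr)              { set_pointer(addr); return cpld_window_read(); }
void    ext_write(uint32_t addr, uint8_t v)  { set_pointer(addr); cpld_window_write(v); }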
Nowadays you can even upgrade the Atari XL with a 65816 and get 16-bit, with most of the original 8-bit code still backwards compatible.
Wonder how many are still waiting for a 65832.
Well, better hope you’ve got an FPGA and implement a 32-bit 6502, but using a RISC arch so you can trim it down to fit on a chip…
With these 8- and 16-bit CPUs
you could possibly get away with using a CPLD and NAND or NOR flash as RAM.
You just need to patch the RAM init to wipe the flash every boot.
You can also use it like battery-backed RAM, without the batteries.
Modern NOR and NAND flash can be used via 8-bit parallel modes,
just with slightly different write routines; you can write blocks or in sequence.
The bus speeds line up and are slow enough: if the NOR or NAND flash has cycle timing under 150 ns for reads and writes, then even on a 1 MHz 6502 or Z80 it would be faster than DRAM,
since you can get flash chips with access times under 100 ns.
Lothearek and flashjazzcat never came up with a “RAM-less” Atari XL conversion… a single 8 meg flash is cheap.
Faster than DRAM but not as fast as SRAM,
but also non-volatile, and it can retain data with no power.
No DRAM refresh needed, so you can gain a few clock cycles by not waiting on memory.
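The “slightly different write routines” part is the catch: parallel NOR flash won’t take a plain store, you have to send an unlock/program command sequence and then poll, and programming only clears bits (getting back to 1 needs a sector or chip erase, which is where the wipe-on-boot patch comes in). Here’s a rough sketch of the common JEDEC/AMD-style byte program, with an invented memory map; exact addresses and timings vary by part, so check the datasheet.

#include <stdint.h>
#include <stdbool.h>

/* Sketch of byte programming on an AMD/JEDEC-style parallel NOR flash mapped
   into the CPU's address space. The 0x555/0x2AA command addresses and DQ7
   data polling are the common convention; the base address is invented. */

#define FLASH_BASE ((volatile uint8_t *)0x8000)

static bool flash_program_byte(uint32_t offset, uint8_t value)
{
    FLASH_BASE[0x555]  = 0xAA;        /* unlock cycle 1 */
    FLASH_BASE[0x2AA]  = 0x55;        /* unlock cycle 2 */
    FLASH_BASE[0x555]  = 0xA0;        /* program command */
    FLASH_BASE[offset] = value;       /* the actual data write */

    /* DQ7 polling: bit 7 reads back inverted until programming completes */
    for (long i = 0; i < 100000; i++) {
        if ((FLASH_BASE[offset] & 0x80) == (value & 0x80))
            return FLASH_BASE[offset] == value;
    }
    return false;                     /* timed out */
}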
The oldest processor I’ve ever used was a 4004 built into the now undoubtedly obsolete and replaced automated test set that plugs into a bunch of connectors in the A-10 Thunderbolt II (aka Warthog). A remote unit was used while sitting in the cockpit manipulating various cockpit controls.
Did you make airplane noises whilst fiddling with the controls?
Sandia National Labs invested millions of dollars trying to build rad-hard versions of the 8085 chipsets, then gave up.
Next, Sandia decided to build a rad-hard version of the 8051.
Its serial I/O (especially the mode 0 shift register) and timers obviated the need for the 8080 peripheral chips.
In 2024, 8051/52s are obsolete. Their use as flash controllers has been replaced by ARM M-series nanocomputers?
RISC-V/ARM SBCs are now starting to run Linux server software!
At <$6 per platform. Some drawing <1 W.
"Sam Altman-Backed Nuclear Startup Signs Major Data Center Contract:"
HAHaha?
HAHaha??
“Amazon to add 15 datacenters to atomic-powered campus”
Thu 30 May 2024
Posted in Redmond OR.
Reason: Meta Prineville Data Center field trip planned 6/1/24.
A ‘HAHaha’?
In the 8008 years I was a kid. I used a small programmer for the Intel 1702 EPROM, managed by an 8008, without knowing exactly what it was, to program the EPROMs, entering the bytes from the serial line (Teletype ASR32) by typing H for High and L for Low, with B as the start character and F for the end. In the EPROMs we wrote the code of an 8-bit device, an ‘ancestor’ of the PLC, built with multiple TTL cards!

One day I decided to satisfy my curiosity and try to understand what governed that unusual object. I managed to find the instruction set, and then I could not resist the temptation to modify the resident program. I added the ability to enter data in hexadecimal (this made my work much faster and safer) and, on a whim, wrote code for extracting square roots that worked well (but that only served my twenty-year-old pride).

I would not make comparisons between the 8008 and other CPUs: each has its strengths, weaknesses and peculiarities, but each of them represents a huge step towards our days. We must also remember the WAY in which silicon circuits were designed: CAD/CAM/CAE were primordial. Federico Faggin, father of the Z80 and backbone of the 8080, designed the silicon masks of that famous and successful CPU ‘by hand’. They were all miracles, the result of efforts that today appear (and were) incredible.
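For the curious, the hex-entry idea amounts to something like the little routine below. This is a modern C sketch for illustration, not the original 8008 code; the B/H/L/F framing follows my description above, and the most-significant-bit-first order is only an assumption.

#include <stdio.h>
#include <ctype.h>

/* Sketch: read pairs of hex digits and emit each byte in the programmer's
   B ... H/L ... F framing described above. Bit order (MSB first) is assumed. */

static void emit_byte_as_hl(unsigned value)
{
    putchar('B');                               /* start-of-byte marker */
    for (int bit = 7; bit >= 0; bit--)
        putchar(((value >> bit) & 1) ? 'H' : 'L');
    putchar('F');                               /* end-of-byte marker */
    putchar('\n');
}

static int hex_value(int c)
{
    return isdigit(c) ? c - '0' : toupper(c) - 'A' + 10;
}

int main(void)
{
    int hi, lo;
    while ((hi = getchar()) != EOF && (lo = getchar()) != EOF) {
        if (!isxdigit(hi) || !isxdigit(lo))
            break;                              /* stop on anything non-hex */
        emit_byte_as_hl((unsigned)(hex_value(hi) << 4 | hex_value(lo)));
    }
    return 0;
}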