In a recent article on The Chip Letter, [Babbage] looks at the Intel iAPX 432 computer architecture. This ambitious, hyper-CISC design was Intel’s first 32-bit architecture. As a stack-based architecture, it exposed no registers to the software developer, while providing high-level support for object-oriented programming, multitasking, and garbage collection in hardware.
At the time the iAPX 432 (originally the 8800) project was proposed, Gordon Moore was CEO of Intel and thus ultimately signed off on it. Intended as an indirect successor to the successful 8080 (which was followed up by the equally successful 8086), this new architecture was a ‘micro-mainframe’ targeting high-end users running Ada and similar modern languages of the early 1980s.
Unfortunately, upon its release in 1981, the iAPX 432 turned out to be excruciatingly slow and poorly optimized, the provided Ada compiler included. The immense complexity of the new architecture meant that the processor was split across two ASICs, with instruction decoding alone being hugely complex, as [Babbage] describes in the article. The features that made the architecture so flexible also required a lot of transistors to implement, making for an exceedingly bloated design, not unlike the Intel Itanium (IA-64) disaster a few decades later.
Although the iAPX 432 was a bridge too far by most metrics, it did mean that Intel performed a lot of R&D on advanced features that would later be used in its i960 and x86 processors. Intel was hardly a struggling company in 1985 when the iAPX 432 was retired, so despite being a commercial failure, the architecture still provides an interesting glimpse into an alternate reality where the iAPX 432, rather than x86, took the computing world by storm.
You have to crack a few eggs to cook an omelette. And Intel’s later CPUs could even cook the omelette.
Second-sourcing was part of the problem.
https://youtu.be/kZ9ntfjytTI?t=410
There has been no progress at all. We have been writing the exact same bugs into our code for over 40 years now and there is still no sign of any advancement or any improvement. Software development is totally dead as a field for advancement, there is more intelligence and more advancement in shoe tying and gum chewing than in software.
A lot of the reason is that it costs money to design software correctly, and to write it. Companies want it developed cheap, so they hire new programmers who don’t know how to avoid the mistakes of the past.
And the work needed to review designs is more expensive than developing the software without reviews, and hard to get right.
Any software engineer has been somewhere that getting the software released was far more important than getting it to work right. I know I have – and in several places, including where the VP said a device would ship before the end of the quarter, even though we found a bug that would brick it if the configuration had certain errors. We had a fix in 2 weeks, and the device had its firmware in a removable ROM module, and the bug would only show up if the customer was using TCP/IP instead of the company’s proprietary networking, but still. Get it done fast, because we have time to fix it later, is the industry motto.
That’s simply not true. Do you know nothing about LLVM, the C++ STL, or clangd? How about all the various types of bugs and exploits we have identified? You may not appreciate it but real progress has been made.
Not to mention advances in testing, linting, static analysis, language features, IDEs…
What confuses the matter is that all these improvements allow us to be vastly more productive, so we write more code and features faster and then the number of bugs can increase again purely because of the quantity of stuff.
But per line and per programmer and per day it is certainly a lot better than 40 years ago. Much easier and faster to write software and our software can do a heck of a lot more too.
Look at the current crop of CVE reports. They could all be from 30 years ago: the EXACT SAME buffer overruns and use-after-free bugs that we have seen for decades.
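For anyone who hasn’t stared at one recently, those two bug classes boil down to something like the following; a deliberately broken, minimal C sketch (not taken from any particular CVE), with the bugs called out in the comments:

#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Classic buffer overrun: 16 bytes of storage, far more bytes copied in. */
    char name[16];
    const char *input = "a string that is much longer than sixteen bytes";
    strcpy(name, input);   /* writes past the end of 'name' */

    /* Classic use-after-free: the pointer is still dereferenced after the memory is released. */
    char *buf = malloc(32);
    if (!buf) return 1;
    free(buf);
    buf[0] = 'x';          /* dangling pointer write */

    return 0;
}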
Nostalgia driven development.
There are zero bugs like that in various other languages. Few programs have to be written in C. The vast majority of code out there isn’t, and never gets a CVE report. The reason you still see them is that there is so much more code out there that even a teeny tiny relative proportion of bugs results in a large absolute number.
Progress in testing and analysis does nothing to stop people from writing the bugs in the first place. Developers don’t actually use any of these tools in practice. Look at projects on GitHub or npm: no tests, no analysis, no nothing. None of the other stuff you mention has any effect on bug production other than making it easier and easier to create more bugs.
If developers are 10x more efficient, then they are writing 10x as many bugs.
The more complex a system becomes, the more vulnerable it gets.
That’s why all in all, Windows 3.1 might be more reliable than Windows 11, despite its humble cooperative multi-tasking origin. 😉
That seems cynical, or are you speaking from personal experience? Writing code more quickly is not in itself a productivity improvement unless your metrics are so messed up they hide the true error “rate”. Finding bugs requires active testing for bugs — they won’t find themselves.
For each project you first have to assess whether it matters. Not every abandoned hello world test is being run by every mega corporation on the public internet.
That’s like arguing that kids learning to write these days still make spelling errors just like they did 40 years ago.
There’s the object-oriented programming (OOP) paradigm that was being forcefully pushed through, though. Or worse, object-based programming.
While it surely has its place, it caused good procedural programming to go out of fashion.
Secondly, structured programming was being hyped over unstructured programming.
Sounds reasonable at first, but there are occasions where it is helpful if application code executes in a specific way/order that can be mentally followed by a human programmer.
Not infrequently, bad “spaghetti” code can also be highly efficient.
Anyway, these are just my two cents.
This is the paper that sank the 432, clearly demonstrating how slow it was.
https://archive.org/details/PerformanceEvaluationOfTheIntelAPX432
“As a stack-based architecture, it exposed no registers to the software developer, while providing high-level support for object-oriented programming, multitasking, and garbage collection in hardware.”
Era of all those languages from Forth to Smalltalk. History could have been so different.
History could not have been very different. The x86 architecture won on merit. A lot of ideas that people thought were better didn’t make it, because these ideas weren’t actually any better. Stack based languages/architectures are just a really bad idea.
I was working for Pertec at the time; they had purchased MITS (originators of the Altair 8800), and were looking for a more powerful follow-on for the 8080 for their small business systems.
It was not “merit”. The Motorola 68000 was a better architecture, and the National series of (IIRC) 16016, 16032, and 32032 was even better. Intel won specifically because IBM chose it for their PC — it won because it got popular. And maybe because Intel told a bunch of lies about how 8080 code could be “migrated” to the 8086 (what they didn’t say was that the code could only be ported if it happened to have been written in PL/M) — assembly code was absolutely NOT portable.
We took a look at the IAPX 432 also, and concluded that it was a hot mess.
Why is the 68000 better? Sure, I agree it looks nicer and is more pleasant to program assembly code in. But does it give more performance in the end? For a while there was a race between the 68000 series CPUs and the x86 line. After the 68060, Motorola abandoned the design in favor of PowerPC. If it was such a great architecture, why didn’t they keep working on it?
For one thing, the 68k was designed from the ground up as a more ‘modern’ architecture. Although it had a 16-bit data bus and ALUs, it was designed to be a 32-bit architecture and allowed unsegmented access to 24 (later 32) bits of address space, something Intel really didn’t have until the i386 (released over 5 years later). IBM actually *wanted* to use the 68k instead of the 8088 (an 8086 with an 8-bit data bus), but it wasn’t production ready at the time.
Short version: x86 won because it was first to market and IBM was desperate, not because it was a particularly well designed architecture.
PowerPC was *also* a more advanced architecture than x86, and for many tasks it could outperform x86. It saw heavy use in automotive and aerospace applications (including both the Curiosity and Perseverance rovers). It was also used in game consoles and in some servers (where it competed with the likes of SPARC and MIPS).
People ultimately don’t care about CPU architectures, so the fact that it was “modern” is totally irrelevant. What people care about is performance/dollar, and the Intel Pentium was winning against the 68060. Motorola was getting stuck making more improvements, while Intel was still steaming along.
The beauty of the x86 architecture was that it provided a relatively smooth upgrade path. The 386 was offering full 32 bit address space, while keeping the segment registers for cool new purposes. Another advantage of the x86 was the relatively compact code size, which becomes important at higher clock rates, when memory can’t keep up, and you need to put the code in a cache. Smaller code means more efficient use of small caches.
It is true that IBM gave Intel a big boost, but IBM could have switched to a better alternative if one had been available. Sun Microsystems was trying to use the 68k family, but switched to their own SPARC architecture instead. NeXT was using the 68030, Apple was using 68k, Silicon Graphics was using 68k. There was plenty of opportunity for Motorola to demonstrate that they could provide better value for money. I still remember working at the university in the ’90s surrounded by Sun workstations, originally all much bigger and better than PCs, but a few years later they were all getting kicked out and replaced by PCs, as PCs were clearly starting to win the battle for pure performance, at a much lower price.
Motorola dropped the 68060 mainly due to a lack of customers, and the PowerPC alliance allowed it to share development costs with IBM.
The 68k was doomed; it cannot handle any of the optimizations we use today. Can’t use caches, can’t be pipelined, can’t reorder instructions. It is a toy architecture compared to ARM or x86_64.
The 68000 was a far better processor – it took until 400 MHz x86 machines, almost a decade later, to be as responsive and fast as the 7.2 MHz 68000 in the Amiga. The 68000 was used by Apple. But when IBM was defining their computer, they wanted a moderately cheap slug of a machine to act as a feeder to their mainframe sales, not compete with them. Everything about the original IBM PC was cheap and readily available, to cut development costs.
After that it was company direction that made a larger difference. Intel was willing to keep really bad early implementation ideas in later processors in order to stay 100% backward compatible, while Motorola thought incremental, incompatible improvements were the way to go. New processor? Patch the OS and programs to work around the changes.
Between the corporate blessing and guaranteed sales of Intel processors into PCs and cheap compatibles and the willingness of Motorola to undermine their customers, it was no contest in the marketplace, but don’t confuse that for technical superiority. Even then it might have been saved by Motorola if Bill Gates hadn’t undercut all other OS makers by essentially giving away his version of CP/M to the clone makers and cementing a money flood into Intel to survive bad calls like Itanium.
Life is truly unjust, look at JavaScript. #1 language in 2023 🤮🤮🤮
Yes indeed something comes along that really does increase development productivity and really does decrease the number of bugs written, and it’s dismissed by “real” developers.
Ironically, ActionScript used by Macromedia Flash was quite similar to JavaScript.
But for some reason: JavaScript=Yaaay!, Flash/ActionScript=Booooh!
People are silly sometimes. 😒
Then there was JScript, Microsoft’s version of JavaScript.. 😇
x86 won due to the momentum it gained in the early PC market thanks to IBM’s adoption, and then by Intel’s silicon being the most advanced in the world for nearly two decades. Intel could outperform everyone because their process was constantly several generations ahead, but the x86 architecture itself is actually very inefficient to implement for a wide variety of reasons. Even by CISC standards the instruction set can be considered sub-par as it is nowhere near orthogonal (x86 was originally based on the 8080, which itself took inspiration from the PDP minicomputers). Today, it is perpetuated by the massive software ecosystem built around it.
Orthogonal sounds nice, but it comes with a code size penalty. Suppose you have a CPU with 16 identical registers: then there exist 16! equivalent permutations of your code, but you only need one of those permutations to exist. All the others are just wasted encoding space.
If you have a few dedicated registers for certain operations, you can often rewrite the code so that the operands end up in the proper registers, and you can benefit from more compact instruction encoding.
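To make the compact-encoding point concrete: on the 8086, adding an immediate to the dedicated accumulator AX has its own short opcode, while the fully general register form needs an extra ModRM byte. A minimal sketch of the two encodings as raw bytes (from my reading of the 8086 opcode map, so treat it as illustrative rather than authoritative):

#include <stdio.h>

int main(void) {
    /* ADD AX, 0x1234 -- accumulator-specific short form: opcode 0x05 + 16-bit immediate */
    unsigned char add_ax_imm[] = { 0x05, 0x34, 0x12 };

    /* ADD CX, 0x1234 -- general form: opcode 0x81, ModRM 0xC1 (reg field /0, r/m = CX), 16-bit immediate */
    unsigned char add_cx_imm[] = { 0x81, 0xC1, 0x34, 0x12 };

    printf("accumulator form: %zu bytes\n", sizeof add_ax_imm); /* 3 bytes */
    printf("general form:     %zu bytes\n", sizeof add_cx_imm); /* 4 bytes */
    return 0;
}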
With 16 identical registers the OS program loader does late register assignment dynamically so that up to 16 separate programs can be running simultaneously and not waiting for “the” register they need to become available.
x86 is dead, dead, dead. Has been for decades now. Nobody cares. It has nothing to do with x86_64.
x86_64 wins because it can be optimized like crazy. It has a good instruction size and good addressing modes. The competition, not so much. SPARC and Power are very, very old designs that don’t mesh with modern hardware.
This is the paper that sank it. Some graduate students at Berkeley tried some basic benchmarks comparing the i432 to other microprocessors, and found it hopelessly slow.
https://archive.org/details/PerformanceEvaluationOfTheIntelAPX432
Intel’s solution in the noughties was a series of benchmark shenanigans. Through a shell company, they bought a benchmarking company and nobbled the results:
https://www.extremetech.com/computing/193480-intel-finally-agrees-to-pay-15-to-pentium-4-owners-over-amd-athlon-benchmarking-shenanigans
They are still at the benchmark shenanigans, but now they show results from the latest generation of Intel processors and how well they compare to AMD’s processors (from the previous generation!)
I wonder how much of the poor performance was due to the state of Ada compilers at the time. I remember taking an Ada class in ’83 running on an IBM 3090. If more than 2 or 3 of us tried to compile/run at the same time it would take the machine down along with the other 1K users. We were not popular.
The compiler was part of it. But the iAPX432 took the ‘C’ in CISC very, very seriously. The instruction length was from 6 to 321 bits. That’s right, instructions were not word aligned or even byte aligned, but bit aligned. If you thought the VAX polynomial instruction was a beast, the iAPX432 put it to shame. And it gets worse. The memory was segmented, and each segment had a set of access rights and other housekeeping information for the object in that segment. All of which was kept in memory. There were over a dozen object types recognized in hardware. So there were an awful lot of balls in the air at any given time. Then you factor in that you could just plug in additional processing units to increase performance…
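To give a feel for what bit-aligned, variable-length instructions do to a decoder, here is a minimal sketch in plain C; it is not the 432’s actual decode logic, and the 6-bit and 11-bit fields are made up for illustration. The point is that every field fetch has to shift and mask at an arbitrary bit offset instead of just bumping a byte or word pointer:

#include <stdint.h>
#include <stdio.h>

/* Fetch 'width' bits (1..32) starting at absolute bit offset 'bitpos' in a byte stream.
   A byte- or word-aligned ISA never has to do this on every single field. */
static uint32_t fetch_bits(const uint8_t *stream, uint32_t bitpos, unsigned width) {
    uint32_t value = 0;
    for (unsigned i = 0; i < width; i++) {
        uint32_t bit = (stream[(bitpos + i) / 8] >> ((bitpos + i) % 8)) & 1u;
        value |= bit << i;   /* assume little-endian bit order within the stream */
    }
    return value;
}

int main(void) {
    const uint8_t stream[] = { 0xB5, 0x3C, 0x7E, 0x01 };
    uint32_t bitpos = 0;

    uint32_t opclass = fetch_bits(stream, bitpos, 6);  bitpos += 6;   /* hypothetical 6-bit field  */
    uint32_t operand = fetch_bits(stream, bitpos, 11); bitpos += 11;  /* hypothetical 11-bit field */

    printf("opclass=%u operand=%u, next instruction starts at bit %u\n",
           (unsigned)opclass, (unsigned)operand, (unsigned)bitpos);
    return 0;
}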
It is amazing that they ever got it debugged. Realistically, it was probably still crawling with bugs. But that is a different issue.
Capability-based systems, an idea for that time.
https://en.wikipedia.org/wiki/Rekursiv
Ada was written for the military, for military projects that used military “grunts” to “design and build code”. It was doomed from the start, although there are many adherents since it’s “so much better now”. Ada had other issues, namely that it had to be recertified each year, for each environment, and for each processor. It even had to have a real-time kernel for its multitasking, but I don’t know if it ever got extended to multiprocessing. Change one transistor or even the IC process and it had to be recertified (which admittedly is the best paranoid practice). Change any tool (e.g. bug fixes) and it had to be recertified. Some of that may have changed since I was last involved.

There was even a 432 instruction that did the range checking for indexed accesses (a macro really), which essentially got ported over to the next generation of x86s, the 80186 and subsequent issues.

It’s strange how we expect we can make the ultimate HW but are hopeless when it comes to the ultimate SW, when HW uses so much SW. There really is a difference between good, bad, and mediocre coders, and maybe more importantly, managers. Seriously! ALL of that needs to be accepted before we get better at SW and realize we need better ways to screen for good coders and managers. But sales, marketing, and the bean counters will always get in the way, so we’re screwed.
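For reference, the range check being described (Ada mandates it on indexed array accesses, and the 80186 added a BOUND instruction that does the compare-and-trap in hardware) amounts to roughly the following; a minimal C sketch with made-up names, not Intel’s or anyone’s actual code:

#include <stdio.h>
#include <stdlib.h>

/* Roughly what an Ada-style range check (or the 80186 BOUND instruction) does before an
   indexed access: verify lower <= index <= upper, and trap instead of touching memory. */
static int checked_read(const int *table, long index, long lower, long upper) {
    if (index < lower || index > upper) {
        fprintf(stderr, "range check failed: index %ld not in %ld..%ld\n", index, lower, upper);
        abort();   /* BOUND raises interrupt 5; Ada raises Constraint_Error */
    }
    return table[index - lower];
}

int main(void) {
    int table[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    printf("%d\n", checked_read(table, 3, 0, 7));   /* fine */
    printf("%d\n", checked_read(table, 42, 0, 7));  /* trips the check and aborts */
    return 0;
}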
Ada is 100% alive and well, with modern compilers it is just as fast as any other compiled language.
None of that other stuff you said is true. It runs just great on your Linux box as is.
Every modern language has support for threading etc built in. C is badly flawed precisely because it lacks support for it.
Ada was not written for the military, it was written for general purpose use by any developer. I sat in the designer’s class and listened to her lectures. Prof. Liskov would laugh at your words.
You can’t say “none of that stuff you said is true” when you also say “Ada is 100% alive and well, with modern compilers it is just as fast as any other compiled language”. “..just as fast”? I never said Ada was dead, nor did I say it was inactive. My experience with Ada was at the company that owned the original compiler maker, Alsys, so I had some insight (see: https://en.wikipedia.org/wiki/Alsys where it says: “In 1991 Alsys was acquired by Thomson-CSF”), since I worked at a Thomson-CSF subsidiary back then. There’s a link in the Wiki page to: https://web.archive.org/web/20070928175911/http://www.ada-europe.org/Jean_Ichbiah_Obituary.html for the death of “the creator of Ada and founder of Alsys”. I’m unsure why you invoked “Prof. Liskov would laugh at your words”, as her Wiki article makes no reference to Ada (but that could be an oversight). I acknowledge that Liskov is unknown to me, as I suspect is true of most people of interest on this planet. Jean Ichbiah helped Honeywell win the US military contract for a new language designed for the military (see: https://en.wikipedia.org/wiki/Ada_(programming_language) and https://www.adacore.com/about-ada/timeline-of-ada).
BTW, does anybody know if there is even a single surviving copy of this CPU? It may qualify as one of the rarest commercially manufactured processors in existence.
eBay is going to be one’s best bet. Scrap sales, if one looks carefully. Auctions are another.
I have an i432/100 SBC, but have not run it yet. It’s on my list of projects :)
I’ve got a High Integrity Systems ‘Multibox Computer’ sitting behind me, with 3 iAPX 432 GDPs (2 ICs each) and 1 IP processor.
Unfortunately, I cannot get the IP to initialise, and I suspect one or more of the RAM ICs are bad, but they are all soldered in place (I have counted 160+ 4164s on one board alone!).
I have a website with some photos at eight6[dot]net.
Hey, can you please contact me via me[at]mark.engineer? I am currently developing a system based on the iAPX 432 (first revision), and I am very interested in rev3. Thanks.
Does anyone know where to find documentation of the instruction set? To people like me who collect strange instruction sets, this one should be fun to read. Though writing an emulator for it is probably a task too far for all but the most dedicated …
Bitsavers has quite a bit.
http://www.bitsavers.org/components/intel/iAPX_432/
Everyone was doing benchmark shenanigans one way or another. Sun managed to artificially boost their SPEC scores through compiler heroics that targeted one specific benchmark, in a rule-skirting way. That lasted a couple of years, until other compilers added similar optimizations and SPEC was eventually revised.
Wow, rare find. Do you have the software for it?
My ask for ‘do you have the software’ was a reply to Mark who claims to have an iSBC 432/100. I guess if anyone else out there has it or has this hardware please contact me. This machine is extremely rare and I know of only one confirmed functional machine. Everything else seems to be lost, most notably the software to compile or develop for these systems.
Hey, I don’t have any software, but I am not that interested in the software side of the iAPX 432. My plan is to develop my own board with the iAPX 432 CPU (43201 and 43202) and an FPGA to emulate the 43203 (I don’t have the I/O chip) and memory.
So the software would also be developed on my own. It looks complex to build the proper hierarchy of objects needed to run even a simple program, but it sounds possible and like a good challenge.