Over Apple’s decades-long history, they have been quick to adapt to new processor technology when they see an opportunity. Their switch from PowerPC to Intel in the early 2000s made Apple machines more accessible to the wider PC world, which was already accustomed to x86 processors, and a decade earlier they moved from Motorola 68000 processors to take advantage of the scalability, power-per-watt, and performance of the PowerPC platform. They’ve recently made the switch to their own in-house silicon, but, as reported by [The Chip Letter], this wasn’t the first time they attempted to design their own chips from the ground up rather than using chips from other companies like Motorola or Intel.
In the mid-1980s, Apple was already looking to move away from the Motorola 68000 for performance reasons, and part of the reason it took so long to make the switch is that in the intervening years they launched Project Aquarius to attempt to design their own silicon. As the article linked above explains, they needed a large amount of computing power to get this done and purchased a Cray X-MP/48 supercomputer to help, as well as assigning a large number of engineers and designers to see the project through to the finish. A critical error was made, though, when they decided to build their design around a stack architecture rather than a RISC. The team did eventually switch to a RISC design, but the project still struggled to ever get a working prototype. The whole effort was ultimately scrapped and the company moved on to PowerPC, but not without a tremendous loss of time and money.
Interestingly enough, another team was designing their own architecture at about the same time and ended up creating what would eventually become the modern-day ARM architecture, which Apple was involved with and currently licenses to build their M1 and M2 chips as well as their mobile processors. It was only by accident that Apple didn’t settle on a RISC design in time for their personal computers. The computing world might look a lot different today if Apple hadn’t languished in the early 00s as the ultimate result of their failure to develop a competitive system in the mid-80s. Apple’s distance from PowerPC now doesn’t mean that architecture has been completely abandoned, though.
Thanks to [Stephen] for the tip!
Didn’t realise that Apple went into ARM having failed to develop their own processor. Seems like they’ve a long history of over-rating their own ability.
Power-per-watt?
Processing power per watt (of electrical power), I assume. An 80 W bulb isn’t great at processing, but a CPU using 80 W of power can do a lot more (infinitely more) processing for the same power. Using power as a description of processing capability and then relating it back to a unit of power is a mistake, admittedly. 🤣
The world is only getting more confusing, because a lightbulb might do a few dozen MIPS these days.
I think if it’s got an ESP32 in it, it blows away the integer processing power of anything Apple made until late in the 90s.
“A critical error was made, though, when they decided to build their design around a stack architecture rather than a RISC.”
Ah Harris, where art thou?
How did they end up with a loaded Cray when it’s not any use for silicon design?
Back then Spice simulations were quite an onerous task. A Cray could speed things up.
The design group was a huge user of time on the Cray. From my knowledge, its primary use was for shell design renderings and plastic molds. Ironically, at the same time that Apple was using the Cray for plastics CAD, Cray was using Macintoshes for electrical CAD on the Cray 3 at CCC. When I visited Cray in 1987, the only two non-mainframe, non-supercomputer machines in the entire building were Suns and Mac IIs. No PCs in sight, and I was looking.
RISC and stack architecture aren’t opposing options. The alternative to RISC (small instruction set that usually executes in a single clock cycle) is CISC (large instruction set with different execution times). The alternative to a stack architecture is a register architecture.
Register machines are certainly more common, but register allocation is a major problem in code execution. For a RISC machine with N registers, an instruction like Logical Shift Left (LSL) needs N variants, each LSL[n] operating on one of the registers. To reduce instruction variants you have to limit the number of registers that can be shifted, which forces data copying from one register to another to get things done.
Deciding which registers to use for a function call and its parameters, keeping track of the appropriate instructions, and shuffling data between the registers and RAM are all major problems in compiler/interpreter design.
Stack machines map more directly to compiled code, and in theory require fewer wasted operations shuffling information from one location to another. It’s harder to make stack architectures work in practice though.
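To make that contrast concrete, here’s a toy Python sketch (purely illustrative, not any real ISA; all names and opcodes are made up) of the same expression compiled two ways: as stack code, which needs no register names at all, and as register-style three-address code, where the compiler has to pick and track destination registers.

```python
# Toy sketch: (a + b) * c once as stack-machine code, once as
# register-style three-address code, to show where register naming comes in.

def run_stack(program, env):
    """Stack-machine style: operands are pushed, operators pop their
    arguments and push the result -- no register names anywhere."""
    stack = []
    for op in program:
        if op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:  # anything else is a push of a named value
            stack.append(env[op])
    return stack.pop()

def run_registers(program, env, result_reg):
    """Register style: every instruction names its destination and
    source registers explicitly, so the compiler must allocate them."""
    regs = dict(env)  # pretend the variables already live in registers
    for dst, op, src1, src2 in program:
        if op == "ADD":
            regs[dst] = regs[src1] + regs[src2]
        elif op == "MUL":
            regs[dst] = regs[src1] * regs[src2]
    return regs[result_reg]

env = {"a": 2, "b": 3, "c": 4}

# (a + b) * c as stack code: operand order alone encodes the data flow.
stack_code = ["a", "b", "ADD", "c", "MUL"]

# The same expression as register code: r0 and r1 are choices the
# compiler has to make and keep track of.
register_code = [
    ("r0", "ADD", "a", "b"),
    ("r1", "MUL", "r0", "c"),
]

print(run_stack(stack_code, env))               # 20
print(run_registers(register_code, env, "r1"))  # 20
```

Both interpreters give the same answer; the difference is only in who has to do the bookkeeping of where intermediate values live.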
Ahh, the stack machine fad. They had stack CPUs designed around Forth, Pascal, Modula-2, and I think Ada. When Apple started on the Lisa and Mac it was supposed to use Pascal for development, so a stack machine made sense. You also have the Burroughs large systems, the HP-3000 classic, and the Inmos Transputer. Some of them were successful, so it wasn’t a blatantly dumb choice.
One of the main tenets of RISC is that it uses a load-store register-register architecture rather than a stack. It’s in the fundamental principles of RISC.
To get pedantic, RISC usually *dispatches* an instruction per cycle, but the execution time is longer. The MIPS R3000 pipe is 5 cycles long, for instance. The ARM Cortex-M3’s is 3 cycles long. The PowerPC G4 has a 7-cycle-long pipeline. In the PowerPC 620, the pipe is variable, with different execution times ranging from 5 to 7 cycles in most cases but with exceptions like divide which go for much longer. I don’t know who told you register allocation was some kind of problem, lol. Sounds like first world problems. Try writing 6502 code. When you have bits available to define additional registers there’s no reason not to use as many bits as possible, but I know how to do the job with 2 index registers and an accumulator.
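To put rough numbers on that dispatch-versus-latency distinction, here’s a back-of-the-envelope Python sketch of an idealized in-order pipeline with no stalls or hazards (real parts like the ones above don’t behave this neatly, so treat it as a toy model only): throughput stays close to one instruction per cycle even though each individual instruction takes the full pipeline depth to complete.

```python
# Idealized in-order pipeline with no stalls or hazards: one instruction
# dispatched per cycle, each taking `depth` cycles to finish, so N
# instructions complete in N + depth - 1 cycles.
def pipeline_cycles(n_instructions, depth):
    return n_instructions + depth - 1

n = 1000
for name, depth in [("3-stage (Cortex-M3-like)", 3),
                    ("5-stage (R3000-like)", 5),
                    ("7-stage", 7)]:
    cycles = pipeline_cycles(n, depth)
    print(f"{name}: {n} instructions in {cycles} cycles "
          f"-> {n / cycles:.3f} per cycle")
```

The pipeline depth only shows up as a small fixed overhead (and, in real hardware, as a bigger penalty on branches and dependent loads), which is why dispatch rate and execution latency are worth keeping separate.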
I was working at Cray Research Inc. at this time and supposedly Seymour Cray said “Apple is using a Cray to design their next machine and I’m using an Apple to design my next machine!”
That’s the difference between talented engineers and a genius engineer.
I still say the PowerPC architecture was better than both x86 and Arm (and yes I have done kernel development for all 3). x86 is just a mess and Arm was cobbled together.
Apple wouldn’t be the same company if Commodore management had had one brain cell :). Millions of colors while Macs were still black and white, Dark Castle abominations being emulated faster in software than on the equivalent Mac with the same processor thanks to clever hacks and the BIOS in RAM. Great times and memories… up until they (Commodore) failed miserably and lost their five-year advantage in a slow, languishing, painful death.
People don’t realize this today, but Microsoft helped Apple a lot early on to prevent being treated as a monopoly. I never liked Steve Jobs, but gotta hand him credit where it’s due: the innovation cycle in the mid-2000s was impressive, and I would be retired today if my fanboyism hadn’t clouded my judgment :).
I remember Mac emulation on the Amiga using Shapeshifter. I was only a teenager at the time so I’m sure I missed some subtleties, but the impression I got at the time was that sure, you could emulate a Mac very nicely, but in that era the AmigaOS had better usability than the contemporary MacOS (two mouse buttons, for starters!), and the Amiga was where all the exciting work (3D design, video editing, music production) was being done. Emulating a Mac at the time was kinda pointless :-)
The emulation was there just to give the ability to run mainstream office productivity software.
Fair enough… I was happy enough with Pagesetter/Pro Page for desktop publishing and a spreadsheet (whose name escapes me for now) to draw graphs because I was too lazy to do it with a pen and paper like the rest of the physics class. But of course, a teenager’s time is usually worth less than a grown-up’s :-)
I never used it myself, but I was under the impression that Final Writer was a decent word processor and compatible with a range of contemporary PC/Apple file formats. In fact, there was even a new release of Final Writer this year! https://www.amiga-news.de/en/news/AN-2024-08-00096-EN.html
Hi Bryan, Thanks for summarising my post on the Apple Aquarius project. Just a polite request that you please cite the source – The Chip Letter Substack – https://thechipletter.substack.com – rather than just putting a link without credit in your text. Other Hackaday authors do this.
Many thanks
Babbage
Apple used a Cray to design their processor. Acorn mainly used their in-house BBC Micro home computers to simulate the first generation ARM processor… (And the first prototypes worked!)
“In due course, the team found that the Sun workstations that they attached to the supercomputer could run their software just as quickly as the Cray itself.”
the beginning of the end
Is it really a loss of money?
I’m sure the time spent on this taught them which pitfalls to avoid when they tried again. Apple in the 80s and early 90s wasn’t doing too hot in general, since the way they provided bonuses and commended individual teams meant it was more lucrative to discourage cooperation between them and (some say, anyway) encourage internal sabotage within the company.
Yes, performance per watt is very impressive for Apple silicon, whether M1 through M3. But I think it’s a better fit in MacBooks than Mac desktops. I do not like everything requiring configuration at time of purchase. If you under-buy hardware, you’re buying another Mac sooner than you hoped. Not that bad in a notebook, very understandable. But that’s not the way I purchase desktops, because I want more flexibility in a desktop computer to upgrade and grow with my needs. Besides now having to pay Apple whatever they want for upgrades. Very happy with my MacBook Air M2, but I won’t be buying any sort of Mac desktop in the future.