Itanium: The Great X86 Replacement That Never Was

Itanium was once meant to be the next step in computing: a way for Intel to compete with the likes of IBM, Sun, and DEC, but also to own an architecture that couldn't be taken from it the way the PC was taken from IBM by its clones. Today, however, Itanium is a relic of the past. [Asianometry] tells us the story of Itanium.

By the ’90s, servers were an established market dominated by RISC architectures and Unix-like operating systems. Intel wanted into this market, due in part to worries about losing control over x86. So when Hewlett-Packard approached Intel in late ’93, Intel eventually agreed to collaborate on a new project based on EPIC (Explicitly Parallel Instruction Computing).

The project, initially called PA-WW (later IA-64 and finally Itanium), was a radical approach to ILP (Instruction-Level Parallelism). HP’s engineers saw RISC architectures potentially hitting performance limits in the future, so the idea was a compromise between fully compiler-driven VLIW designs and fully hardware-driven superscalar, out-of-order designs.

The collaboration between Intel and HP did not go without problems, however. Internal politics were an early sign of trouble: HP and Intel disagreed over design choices, while inside Intel the Itanium and x86 teams competed over which would deliver the next big product. The x86 team’s work eventually became the Pentium Pro, which by then was catching up with the fastest RISC architectures.

Meanwhile, Itanium slipped again and again, as Intel underestimated both the true scale of the project and the fabrication technology it required. The mounting delays pushed the first chips out to 2001, years behind schedule. And the competition wasn’t waiting in the meantime: new RISC chips kept being released year after year, eating into what would have been Itanium’s performance advantage.

In an ironic twist, Itanium’s attempt to dislodge x86 actually solidified it. AMD realized that Intel had made a mistake: software developers would not want to recompile for a completely different architecture. And so yet more competition arrived in the form of AMD’s 64-bit extension to x86, with the specification written by the legendary Jim Keller. While Itanium’s sales numbers came in far below projections, AMD’s bet paid off: AMD64 chips were soon outselling Itanium ones.

In the end, Itanium died a slow death of delays and mounting competition. Along the way, AMD had made a major change to x86, the first time Intel found itself on the back foot in its own architecture’s race, eventually leading to Intel’s adoption of AMD64 (now called x86-64) with some minor changes. By the time Itanium 2 launched, the writing was on the wall: Itanium had failed to capture the market.

History often rhymes, and the story of Itanium rhymes with that of earlier VLIW machines: an architecture perhaps too ambitious for its own good.

Die shots of an Intel Itanium processor courtesy of [der8auer].

14 thoughts on “Itanium: The Great X86 Replacement That Never Was”

    1. Honestly these are preferable to an article that doesn’t even tell me what’s in the video but instead just alludes to how interesting it is.

      If I want to watch the video I can, but if I just want the key points I can read the article – I don’t see this as a problem, given the world has shifted to video whether we like it or not.

      1. I think the complaint was that there have been too many articles that just recap someone else’s story, one that isn’t even coming from the hacker/maker community (i.e. “not a hack”) but from YouTube content farmers.

  1. Interesting that the POWER architecture survives, albeit in much smaller quantities – IBM i on POWER AKA AS400, and others.

  2. Itanic was a faulty design. Its concept of executing multiple things in parallel per instruction did not work out; it was a slow and energy-wasting behemoth, broken by design beyond repair. Intel did not listen to the warnings of experts and had to face a disaster. It is a pity that this bad decision also killed funding for the Alpha platform, with HP joining Intel’s mistake.

    1. The theory was attractive, and Itanium had some clever engineering going on; sadly, the reality is that fixed instruction scheduling (or semi-fixed, I guess, in Itanium) can’t realistically handle normal code well.
      For a subset of scientific workloads, Itanium was nice.

  3. the problem with VLIW is not that the compilers are too complicated, but actually the opposite…the compilers are too simple, and therefore you can put them inside the CPU. Since the CPU will keep changing over time, that’s the best place to put extremely specific optimizations. So obvious in hindsight.
