Intel Suggests Dropping Everything But 64-Bit From X86 With Its X86-S Proposal

In a move that has a significant part of the internet flashing back to the innocent days of 2001 – when Intel launched its Itanium architecture as a replacement for the then 32-bit-only x86 architecture, before it got bludgeoned by AMD’s competing x86_64 architecture – Intel has now released a whitepaper with an associated X86-S specification that seeks to probe the community’s thoughts on essentially removing all pre-x86_64 features from x86 CPUs.

While today you can still, with some caveats, install your copy of MS-DOS 6.11 on a brand-new Intel Core i7 system, it’s undeniable that most PC users would never notice the removal of 16- and 32-bit modes, nor the suggested removal of rings 1 and 2 and a range of other low-level (I/O) features. Rather than the boot process going from 16-bit real mode to protected mode, and from 32- to 64-bit mode, the system would boot straight into the 64-bit mode that Intel figures everyone uses anyway.

Where things get a bit hazy is that on this theoretical X86-S you cannot just install and boot your current 64-bit operating systems, as they have no concept of this new boot procedure, or the other low-level features that got dropped. This is where the Itanium comparison seems most apt, as it was Intel’s attempt at a clean cut with its x86 legacy, only for literally everything about the concept (VLIW) and ‘legacy software’ support to go horribly wrong.

Although X86-S seems much less ambitious than Itanium, it would nevertheless be interesting to hear AMD’s thoughts on the matter.

136 thoughts on “Intel Suggests Dropping Everything But 64-Bit From X86 With Its X86-S Proposal”

  1. I “get” it in a sense. So maybe the compromise is to leave one or a few cores with the full compatibility features, and make the rest of the cores (such as the “economy cores”) 64-bit-only?

    1. Easy solution is to have a single Atom or Quark core to handle 16 and 32 bit. The main cores are 64 bit only. Once the machine has booted up, that legacy core gets reused to perform simple tasks like power management or audio DSP.

      1. Honestly, there isn’t a good reason on a modern PC to have a 16 bit real mode capable BSP. A modern PC firmware will try to get to 64 bit mode ASAP and will spend most of the time in that mode while doing its thing (as long as CSM support isn’t needed).
        IMO, a better use case is to enable the scheduler to target it via special legacy threads or processes.

      2. I would be a bit sad – I run VMware Workstation and have lots of interesting VMs, both 32-bit and 64-bit. Having just a single Atom core capable of 32-bit would make it a real pain when that single core may have to run multiple 32-bit VMs.

        And some development tools are 32-bit-only Windows applications – currently possible to run on a native 64-bit Windows installation, but they would suddenly need emulation or be relegated to that single Atom core.

        Better to look at a 20-core CPU with 16 64-bit and 4 “compatibility-bit” cores. 4x Atom isn’t much silicon.

    2. You can cut out a lot of validation and testing if you cut out the functionality entirely. Given that 16-bit code doesn’t really work in a 64-bit operating system, we’re already using emulation to handle legacy stuff on Windows and Linux.

      1. Works fine. You can install NTVDM on Windows 64-bit and run 16-bit code with no caveats. Windows 3.1 and 95 apps work completely fine on Windows 10 Pro 64-bit.

    3. I think a better solution would be OS developers creating shims to get to an environment they can use rather than changing hardware.

      Similarly people worrying about being forced off windows XP.

      I appreciate that some systems are stuck in XP but there is plenty of hardware out there that can run XP. Why should all future hardware support it too!?

      If dropping 8/16/32 bit modes means faster, cheaper more efficient CPUs I’m all for it.

      1. “If dropping 8/16/32 bit modes means faster, cheaper more efficient CPUs I’m all for it.”

        I don’t think that’s the case. It would be more efficient to remove unimportant SIMDs like SSE1/2/3 and AVX/AVX2 instead.

        The forced removal of 16/32-bit is nothing but sabotage to me.
        We’re heading towards a future in which x86 CPUs can run WinTel software only.

        What disturbs me the most is that no one seems to care. In the end, we get what we deserve, maybe. Back in the 90s we played Lemmings, now we’re the lemmings. Mindless consumers who don’t stand up for their freedom anymore. Depressing. 😔

          1. I’d happily see them all gone.

            To be frank I wouldn’t want to lumber everyone else with AVX2 indefinitely despite owning a coral TPU myself.

            For the most part improving emulation might be all that’s needed (playing Lemmings shouldn’t determine my CPU specs in 2025 and beyond that’s bonkers).

        1. The way you answer gives me the impression you don’t understand which parts will be removed. The CPU will still be able to compute in 8-bit, 16-bit or 32-bit units. What would be removed are the parts unusable in a 64-bit environment. For instance, 16-bit addressing like [bx], [bp], [si], [di] is unusable for addressing a 64-bit physical space, so you will never see a compiler generate instructions with 16-bit addressing, for good reason. Those are the kinds of instructions they plan to get rid of, because they make no sense – unlike SSE/AVX instructions (note that these are the FPU in 64-bit mode). Same for real mode and protected mode: they are useless for 64-bit applications. Just have a look at their whitepaper if you are fully familiar with the IA32 architecture.

      2. > I appreciate that some systems are stuck in XP but there is plenty of hardware out there
        > that can run XP. Why should all future hardware support it too!?

        I am thinking that the people who still use XP now are not people who use the computer for general-purpose stuff anymore. The XP machines that are still left out there are most likely old control systems, or systems used for one task only. So there is no performance issue. The software was designed, built and verified for that particular system, and hardware upgrades are not necessary. Only hardware replacements in case of failure.

        And like you said: if the software was verified on one brand and model of hardware, even now in case of failure, the software will need to be re-verified. Because that one brand and model of hardware doesn’t exist anymore. So as long as there are still alternatives that can run XP, it seems to be no problem to me.

        If future Intel CPUs won’t support it anymore, none of these people really cares. As long as Intel still keeps making some legacy CPUs that support 16/32/64 bit. But for sure Intel will do that, if there is a market. Maybe they’ll even make a ‘final version’ with all patches and fixes, and possibly even a full SoC solution (or AMD might provide a remake of the Geode, which was already quite complete, but not fully SoC). As long as there is a market.

        Of course, one issue with that is that the older chip designs might not be so easily translatable to the current-nanometer IC production processes. The AMD Geode was 350 nm. We’re now at 3 nm.

        But it would make verification of hardware replacement of XP systems even more simple, because everyone now has just one target to (re-)verify for.

        I really see no problem. Nobody will really be hurt, and we will all benefit.

        1. I think keeping 1 core for 8/16/32/64-bit x86 while having the others do only 64-bit will make boot-up easier. The main reason is that most boot code only expects 1 core and 1 thread, so have 1 core with 1 thread for that. It will also keep compatibility with the 1990s, when we only had single-core CPUs.

      3. Sorry, but “8/16/32 bit modes” is leading to misconceptions. First, there is no such thing as an 8-bit mode, and 8-bit computation is still available. Second, 16-bit mode is a little more complex: no 16-bit address mode (the 0x67 prefix) and no real or 16-bit protected (à la 286) modes. You can still use 16-bit computation (the 0x66 prefix). So I understand where they want to go, and it makes sense to remove the parts that are never used in a 64-bit environment. For instance, using 16-bit addressing even in a 32-bit environment was useless and even stupid, because there would be very few cases where it would be possible to apply it, and even then with no benefit.
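        To make the 0x66/0x67 distinction concrete: 16-bit *computation* survives in 64-bit code (the compiler just emits the 0x66 operand-size prefix), while 16-bit addressing and the 16-bit CPU modes are what X86-S drops. A minimal C sketch, assuming a standard x86_64 toolchain:

        ```c
        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            /* 16-bit arithmetic still works in a 64-bit binary: for uint16_t math
               the compiler can emit the 0x66 operand-size prefix
               (e.g. 66 01 d0 = add ax, dx). X86-S removes 16-bit *addressing*
               and the 16-bit modes, not this. */
            uint16_t a = 0xFFFF, b = 1;
            uint16_t sum = (uint16_t)(a + b);  /* wraps modulo 2^16 */
            printf("%u\n", sum);               /* prints 0 */
            return 0;
        }
        ```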

    4. It would be easier and cheaper to implement that on the mainboard side. An option-ROM socket for a programmed FPGA on your mainboard could provide an emulated 386 CPU (or higher), which people could install if they have compatibility issues. It would be enough for most old software that actually uses 8/16-bit code, and it would be enough to boot incompatible operating systems, given that pretty much the first thing a modern OS does is switch to protected mode. My guess is Intel is trying to save die space here, and thus cut cost. Moving legacy x86 to a separate optional chip would allow just that, at minimal cost and without sacrificing compatibility.

        1. I think I recall reading about an Apple Lisa II that had an expansion card to behave as an older model with a different processor. If possible with a modern architecture, it would be a way to satisfy the needs of retrogamers and retroindustrialists: slot a 32/16-bit expansion card into your 64-bit machine. I guess the problem is it’s not cost-effective to produce dedicated old processors; that’s why x86 is such a mess.

    5. One of the points listed on the linked page is “Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use”. I don’t think they are removing support for 32-bit applications, only real-mode and 32-bit operating systems, and legacy features required to support them.

      Intel are well aware of the tons of business-critical 32-bit applications still in use, and how harmful it would be for their own business to kill them.

    6. At first glance this seems to make a lot of sense. Why carry around the baggage from decrepit older generations? However…

      1. All software that wants to run/boot needs to change. This costs money and time for no appreciable user benefit.
      2. It actually costs MORE to remove these features in the design than to leave them in. The cost of removal does not go away completely even after 2 future generations.
      3. All validation suites have to be rewritten to no longer test for the absent features. However, they do need to test that attempts to use the removed legacy features are handled in the correct way. This is a very large cost and likely to miss bugs.
      4. Why would Intel be interested in allowing all their competition to continue unimpeded in the market, requiring no changes, while they have to wait for the correct version of Windows, Linux, etc. to become available without bugs?

      This is one of those things that sounds really nice but breaks badly upon implementation. It is head-scratching why this is being considered as it’s a computer architect visible thing only. Sure, it makes the processor LOOK cleaner, but does that matter to anyone after the machine boots?

      The comparison to the Itanic processor is apt. Intel could have done this 20 years ago when they held undisputed leadership, but they can’t now. The most damning part is that it requires MORE effort to remove this from the design than to keep it the same. Seems weird, but feel free to ask me how I know.

  2. Wouldn’t systems that use 64-bit UEFI binaries continue to work? Another note I’ve seen elsewhere is that 32-bit userland applications should continue to work under the new, theoretical processors, too. Interesting stuff.

    1. How would 32 bit userland work without 32 bit instructions? It might be possible to create an emulation layer for 32 bit instructions but it truly would be processor emulation with everything that entails.

      1. This is where the distinction gets a bit messy. There aren’t actually separate “64 bit” and “32 bit” instructions in x86_64. An instruction prefix is used to denote when a particular instruction should use the 64-bit registers.
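        A sketch of that prefixing, assuming an x86-64 Linux machine that allows a writable and executable anonymous mapping (hardened W^X kernels may refuse it): the only difference between the two hand-assembled routines below is the REX.W byte (0x48), and in 64-bit mode the 32-bit form zero-extends its result into the full register.

        ```c
        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>
        #include <sys/mman.h>

        int main(void) {
            /* Same MOV instruction with and without the REX.W prefix (0x48):
               48 89 f8 = mov rax, rdi (64-bit operand size)
                  89 f8 = mov eax, edi (32-bit, zero-extended into rax) */
            unsigned char mov64[] = {0x48, 0x89, 0xF8, 0xC3};  /* mov rax,rdi; ret */
            unsigned char mov32[] = {0x89, 0xF8, 0xC3};        /* mov eax,edi; ret */

            unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) return 1;

            memcpy(buf, mov64, sizeof mov64);
            memcpy(buf + 64, mov32, sizeof mov32);

            uint64_t (*f64)(uint64_t) = (uint64_t (*)(uint64_t))buf;
            uint64_t (*f32)(uint64_t) = (uint64_t (*)(uint64_t))(buf + 64);

            printf("with REX.W:    %#llx\n", (unsigned long long)f64(0xAABBCCDD11223344ULL));
            printf("without REX.W: %#llx\n", (unsigned long long)f32(0xAABBCCDD11223344ULL));
            return 0;
        }
        ```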

      2. I think squashing selectors/segment registers for 32-bit in a way similar to 64-bit would be one way to “remove” legacy 32-bit support. You would no longer be able to run a traditional 32-bit kernel, or support mostly obsolete interfaces like call gates (CALL FAR / RET FAR). But I think 32-bit user mode on a 64-bit kernel would still be possible in a backwards compatible way.

  3. i wouldn’t mind a de-crufting of x86. my question is what is going to happen to the fpu. as far as i know it’s still the same x87 instructions inherited from the coprocessor, from back before the DX chips started integrating it. that’s pretty legacy. wouldn’t mind if it gets swapped out for a newer one with fp128 support.

    1. there’s a dizzying evolution of MMX, SSE, SSE2 sort of things in intel that have added a de facto reasonable FPU to the ISA. they bring their own cruft, but we’re no longer beholden to the awkward stack of fp87. i don’t remember which one of those acronyms is the big step forward, but these days we get named registers.

      for example “double f(double x, double y) { return x/y; }”. run that through gcc -S -O -m32 and you get fldl, fdivl (fp87 instructions), but with gcc -S -O -m64 you get just “divsd %xmm1, %xmm0” – it uses named registers instead of pushing onto a stack. it doesn’t seem that bad for this small example, but named registers for FP operations are way easier for a compiler to work with, or anyway, they’re more compatible with how most compilers are already set up to work with the general-purpose registers.
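      That example can be made self-contained: compiling the file below with gcc -S -O -m32 shows the fldl/fdivl x87 stack code, while -m64 shows the divsd register form; the runtime result is of course identical either way.

      ```c
      #include <stdio.h>

      /* -m32: x87 stack code (fldl / fdivl)
         -m64: SSE2 register code (divsd %xmm1, %xmm0) */
      double f(double x, double y) { return x / y; }

      int main(void) {
          printf("%g\n", f(10.0, 4.0));  /* prints 2.5 */
          return 0;
      }
      ```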

  4. Maybe a push to get everyone off of Windows XP to 10? I’m not even sure what happens on a PC when it encounters a 16-bit instruction while operating in 64-bit mode. But so much of the world is still using XP or older versions of Windows, since they “rely” on old PCs and their old SW. I also guess it’s a “whitepaper” to avoid accepting the blame if this blows up, right?

    1. Not all cores are the same in modern Intel CPUs. Even Windows 10 struggles to utilize efficiency cores and performance cores properly with the CPU scheduler update Microsoft threw into Windows 10 21H2. Installing Windows XP at all on newer hardware requires a lot of driver BS and injecting drivers that are compatible (there are not many that are still compatible) into the installer. The steady march of time has made nearly everything new incompatible with Windows XP. Less than 0.59% of Windows users are running XP still as of 2021. If you want XP still, you can either virtualize it or run it on older hardware, preferably airgapped.

      1. i don’t run anything older than 7, and i don’t think i have 7 currently installed on anything. it was formerly on the system i used to demo windows 11. but it will probably go back, given how little i like windows 11, if i don’t install a linux distro. all i do with that machine is run my 3d printer, and the slicer i use is cross-platform.

      2. I meant it more generically — namely, that there are so many systems out there globally that still rely on super old outdated systems that run on old hardware. They have mostly “locked” in to that older stuff that makes them more vulnerable to various Internet based attacks (which is why they should never be on the web!). And it’s not only in “third world” countries: https://www.usnews.com/news/health-news/articles/2023-01-04/ransomware-attacks-on-u-s-hospitals-have-doubled-since-2016

        The issue raised by the possible Intel changes affect binary compatibility that is already present to some extent in current SW distributions. My direct experience is outdated, so I suspect it’s worse now than when I had any involvement.

        1. seems like you answered your own question. “still rely on super old outdated systems that run on old hardware.”

          for truly legacy loads, they will use legacy hardware. they will go through growing pains of maintaining old hardware, buying new hardware that is exceptionally backwards compatible, running under an emulator of some sort, and/or upgrading their software to work with newer computers.

          but in the short term, they will continue to run their 2003 OS on their 2003 computer, and whatever fads are in the world of new production 2025 chips will not really concern them much.

          1. Some of that “legacy hardware” is multimillion dollar kit like MRI machines which have a 10-15 year expected lifespan

            It’s not that the control hardware/os can’t be updated, but that the vendor refuses to do so

          2. “Refuses to do so” isn’t entirely accurate. Making any changes to medical equipment is a very costly process, because it means that the entire device has to go through the approval process again. It’s more cost effective to pay high prices for legacy hardware for repairs than it is to pay for reapproval, not to mention faster because re-approval takes time.

      3. “Less than 0.59% of Windows users are running XP still as of 2021”

        That’s probably true for internet connected systems, but I bet there’s a boatload of other industrial and control systems offline that are using this.

        1. 32 bit userland applications don’t use 32-bit protected mode, which is being removed. They use the 32 bit submode of 64 bit mode (“compatibility mode”) which is being retained.

          Indeed for 64 bit kernels the OS should need minimal adaptation for this mode. If it wants to use 5 level page tables it needs to use the new method for switching to those. The kernel needs to start running in 64 bit mode already. And of course you need to have a 64 bit UEFI bootloader that loads your kernel.

      1. i am usually saying this from the other perspective because the TV-box ARM SoC in raspberry pi is a pig — delivering almost no performance while consuming quite a bit of electricity. (hi foldi, no i don’t want to hear what you remember of looking up wattage figures years and years ago but thanks anyways)

        but ARM is not just “a low power architecture.” and x86 isn’t “a vastly more powerful architecture.” there are x86 implementations that are decently low power, and ARM implementations that aren’t. they each cover large ranges of the performance, wattage, and performance / watt space.

        i think in the rare situation there is an apples-to-apples comparison, ARM chips will generally be somewhat more power efficient than intel chips…but they absolutely are not limited to low-power roles, and x86 isn’t limited to power hog or high-performance roles either.

        at the moment, i think if you look at the top of the performance curve you will find very few ARM choices and a lot of intel/amd choices but that is just a quirk of today and could change very quickly.

        and most uses simply don’t care about the top. thanks to a stubborn limit in performance per core, even on x86, people have come to realize that the thing that matters for truly large loads is scalability across multiple cores. and when it comes to that, ARM can be very powerful indeed.

      2. It’s not the architecture that’s more powerful, it’s the actual cores, the microarchitecture choice. Nothing stops you from applying the similar michroarchitecture design to an ARM or RISC-V or anything else to get the same level of performance.

    1. Excellent point. What is the benefit of being “mostly” Intel compatible, in light of proven (and generally more cost-effective) ARM and RISC-V chips? Not to mention those from AMD, which would suddenly become “more Intel compatible than Intel.”

  5. This seems like a terrible idea. Businesses need certainty in their IT systems: certainty that their existing software will work, certainty that their existing hardware will work, certainty that their existing documents will remain readable, etc.
    If you start changing user-mode operation of the CPU you get regression problems. As a result of this, I know some companies that are still using Windows XP on internet-facing computers (a huge security risk) because the existing software/hardware that they need to operate is not compatible with anything newer. Yes, they keep getting hacked, but all they can do is reload a known-good system image and continue as normal!

    1. I’ll tell you a terrible idea: running XP non-airgapped, in an internet-facing context. They could run XP and not be doing something so stupid, I guarantee it. Just because a CNC machine from 2003 needs to run on XP does not mean it needs to be internet-facing. It should never have been internet-facing in the first place!

    2. That sounds like a dysfunctional business that doesn’t perform Strategic Portfolio Management and Application Rationalization. If they did, then the outdated systems would have been identified and replaced before they fell out of support.

      Why should everyone else continue to support their immaturity?

      1. “If they did, then the outdated systems would have been identified and replaced before they fell out of support.”

        You’re naively assuming that things always progress and that newer equals progress.

        But that’s not necessarily true. Some in-house solutions created by ingenious people might be still good despite their age. Or let’s say, they fit the business excellently.

        Another thing to consider:
        Is it really wise to replace an old system based on, say CP/M or DOS, by a mediocre Windows system that’s semi-outdated, too?

        Everything is outdated at some point, anyway. Thus, there’s no reason to keep playing the upgrade game. Not as long as the old solution is performing well: it’s perhaps better to break out, keep using an outdated system for decades, and learn to fix it. Doing so keeps business consistent and running, too.

        It’s the same with outdated router modems. Installing new firmware revisions closes old security holes, but simultaneously adds a dozen new security holes that didn’t exist before. So why upgrade? It’s one step forwards and three backwards. It’s just silly.

      2. I have been writing software since I was 10, in 1978. In the real world, when it works you don’t touch it. While there may be very few users of 16/32-bit code in the general population, just about every machine you interact with on a daily basis is one:
        ATMs
        Cars
        Planes
        Trains
        Calculators
        POS
        Every robot everywhere (mostly)
        Every device in a hospital (and most of them run VxWorks, QNX or Windows CE)

      1. Still really stupid. They could have simply allocated funds to have a company like CodeWeavers hack it to work on Linux. Whatever custom hardware they are using is also stupid because it WILL fail. They needed to migrate decades ago.

        1. This is a theoretical point of view, but industry does not work like that.

          There are machines that one cannot migrate. Custom-built machines that are essential in a production chain are still way cheaper to maintain than to spend millions designing anew.
          Those will stick around for a long, long time.

          1. “Cheaper to maintain” – yes, but that’s what’s being discussed, no one is suggesting that the custom machine being controlled should be chucked, it’s the machine doing the controlling that needs to be chucked. If you’re spending millions to design and build something you should be allocating budget to maintain it, neglecting it for 20 years and being surprised when it breaks is just foolishness.

      1. Having the source code does not solve the cost of maintaining the software. Sometimes system changes are significant (move from 16 to 32 bit or 64 bit — assumptions about data types changed) and can require review of large parts of the code base, while giving up years of testing (not all testing is automated nor planned).

        In an ideal world all of this is without issue, and you have a perfect set of tests, enough time and money to do the system change, an overall abstract design that guides your implementation in a detailed way.

        In the real world you will have system specific non-abstract code, tailored code, and all of that may break while upgrading.

    3. I can see this being really popular in the server and embedded space. For a lot of embedded you just don’t need 16 or 32 bit code because you are often building all the software from source. Doing this for Atom first would make a lot of sense. For servers again you would probably never use code that old for most tasks.
      The simple solution is keep the legacy arch in production.

  6. In terms of user space code, dropping support of 16 bit code wouldn’t have much effect on anybody other than some retrocomputing fans. Dropping support of 32 bit code would have a much bigger impact, as there are plenty of legacy applications (and even an occasional new one) that are 32 bit only, and users of applications that are no longer under development would be SOL.

    In the OS space it’s a much bigger deal. It means that no existing OS would boot. Every version of Windows before 11 (or maybe 12 depending on how long it takes for Intel to do this) would be out the window. So would all existing Linux distributions and all the OSes and single-purpose builds based on one of the BSDs; ones that are still under active development would likely get updates that could support the new boot process but older ones might not. So would all existing builds of macOS for x86, and it’s not likely that any of those would get updated.

    As for AMD, a big question is whether this stripped-down architecture would be covered under the existing cross-licensing agreements between Intel and AMD, or whether those would have to be renegotiated.

    1. You would be surprised. Last night I tried to put a modern Linux distro on my Eee PC 701. Most distros dropped 32-bit support long ago; of those that didn’t, literally the only one that would boot is Debian 11. (Likely due to PAE, but still.)

      Like it or not 32 bit *is* already dead.

          1. For user space software, using 32 bit can actually be beneficial, which is why there is the x32 ABI under Linux. It can have memory-use advantages, due to pointer and integer size. Computations can also be faster on 32-bit registers, and compiled 64-bit binaries have a larger image size.

          But most importantly, a lot of software does not need to support 64 bit, and gain nothing from it. A text editor is a good example.

          Making it compatible with 64 bit is one thing, requiring all software to be exclusively 64 bit however is unnecessary, given possible performance and size differences.
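          The pointer-size point is easy to see directly. A small sketch, assuming a typical GCC/Clang toolchain: a default LP64 x86-64 build prints 8 for both values, while an x32 or -m32 build prints 4 for both, which is where the memory savings come from.

          ```c
          #include <stdio.h>

          int main(void) {
              /* LP64 (default x86-64): pointers and long are 8 bytes each.
                 x32 ABI (or -m32): both are 4 bytes, shrinking pointer-heavy data. */
              printf("sizeof(void*)=%zu sizeof(long)=%zu\n",
                     sizeof(void *), sizeof(long));
              return 0;
          }
          ```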

      1. I was wondering how far I would have to scroll before someone mentioned that an awful lot of games that people play are 32-bit.

        Remember the fuss over Ubuntu wanting to drop their 32-bit layer? That quickly became “.. apart from the bits that Steam needs.”

      1. Bad idea. A lot of modern and common software is 32 bit, because there is no need for 64 bit. Solving that by emulation would be a waste of energy. It’s not hard to support 32 bit user mode apps on a 64 bit system.

    1. Anything that decreases the size of the processor die helps with power efficiency. I can’t see how it would directly affect IPC except as it would be affected by better thermals, I guess it would let them cram some more cores in though.

      1. I beg to differ. The 80286 had about 134,000 transistors.
        In comparison to a current Intel processor, this is almost nothing.
        Partial removal of the remaining 16-bit features won’t even save 150k transistors. That’s how insignificant it is. It’s in no relation to the loss of compatibility. It’s a sacrifice that does nothing but hurt.

        1. What’s the point of comparing it to a 286? Modern x86 doesn’t just fetch, execute, fetch, execute, etc. A modern CPU with just the 286 instructions would still have many times more transistors than a 80286 from 1982.

      2. the entirety of instruction decoding is nothing compared to the L1 cache in a modern processor. I am convinced this isn’t about trimming transistors but in lowering development and manufacturing costs associated with verification, testing and characterization (shmoo plot, etc).

  7. I think they should leave a method of emulating 16/32 bit code in hardware. As for boot, this should be configurable, with some way of returning to 16-bit boot mode available without running code, such as a special reset sequence or something. So the processor can still boot and run legacy OS’s and code, but not necessarily in a performant manner, and without using a lot of gates to do so.

    1. Keep in mind that very little of the x86 architecture actually exists in hardware. A vast majority of the instruction decoding and execution is performed in microcode. Even when I was working as a firmware engineer at Intel it wasn’t entirely clear what was going on in hardware. Long story short, most of the x86 instruction set is already emulated.

    2. Sorry, apparently “in hardware” is overloaded language. I just mean internal to the CPU such that you don’t need to load code into RAM to achieve it. I had hoped that the term “emulate” would be enough to suggest using minimal actual hardware.

  8. I see literally everyone, including the article poster, didn’t read the linked article.

    What they’re suggesting is getting rid of real mode and finishing getting rid of most (but not quite all) 16-bit functionality.

    They’re very clear and explicit that 32-bit userland under a 64-bit kernel is still going to be supported just fine.

    1. To be fair, I tried to read the first “sentence” that was one long paragraph in length with several nested thoughts and a couple of syntax/use/basic grammar errors and gave up.

    2. “What they’re suggesting is getting rid of real mode and finishing getting rid of most (but not quite all)16-bit functionality.”

      How does this affect virtual machines and hardware-assisted virtualization (Intel VT) in the long run?

      Also, 16-Bit Protected Mode code execution used to be completely valid in x86_64 Long Mode. Is that ability gone, too?

      Then, what about ArcaOS, which is based on OS/2 Warp 4? It is quite modern and can boot via UEFI, but OS/2 made heavy use of the x86 ring scheme. Doesn’t this change break modern OS/2?

      Even if not directly, what about 16-Bit Protected Mode applications for Presentation Manager running on 32-Bit OS/2?

      Would they still work after the change that x86-S is going to make?

  9. I am only buying Intel to run my MS-DOS 1.0 applications natively.
    As it is absolutely impossible to emulate a 4.77 MHz 8086.
    Therefore, sorry Intel, this means goodbye.

    1. What are the hardware and performance requirements of your use case? A Vortex86 at up to a GHz, selectably lower in the BIOS, is quite feasible. There are some thin clients available for peanuts on the bay if no ISA slots are needed.

    2. I would’ve thought an emulator like PCem can do that. There is also MCL86 if you want to go via the FPGA route. AFAIK, both options are 100% instruction set and timing compatible with i808x series.

      I once wrote an i808x-compatible system emulator for the i8052 microcontroller, but that one is not cycle-exact, only cycle-approximate ;-)

    1. Just do it in pure software. Let qemu or whatever do the dirty work. You can even JIT the emulation these days. The CPU part of emulation was always the more straightforward part; it was the fiddly little hardware engines, and synchronizing them accurately enough, that made some game console emulators very slow and difficult to get right.
      The cynic in me reads this as Intel asking the industry: would you pay us the same amount for CPUs if we left this feature out?
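The pure-software approach mentioned above boils down to a fetch-decode-execute loop. As a toy illustration (nothing like a real emulator or JIT, though the opcode bytes are genuine 8086 encodings), a few-instruction interpreter might look like:

```python
# Toy sketch of the CPU side of software emulation: a fetch-decode-execute
# loop for three real 8086 opcodes (0xB0 = MOV AL,imm8; 0x40 = INC AX;
# 0xF4 = HLT). Real emulators handle hundreds of opcodes plus flags,
# segmentation, and device timing -- this only shows the skeleton.

def run(code: bytes) -> dict:
    regs = {"AX": 0}
    ip = 0
    while ip < len(code):
        op = code[ip]
        if op == 0xB0:                        # MOV AL, imm8 (low byte of AX)
            regs["AX"] = (regs["AX"] & 0xFF00) | code[ip + 1]
            ip += 2
        elif op == 0x40:                      # INC AX, with 16-bit wraparound
            regs["AX"] = (regs["AX"] + 1) & 0xFFFF
            ip += 1
        elif op == 0xF4:                      # HLT: stop the loop
            break
        else:
            raise ValueError(f"unhandled opcode {op:#04x}")
    return regs

# MOV AL,0x41; INC AX; HLT
print(run(bytes([0xB0, 0x41, 0x40, 0xF4])))   # {'AX': 66}
```

The fiddly part the comment alludes to is everything outside this loop: interrupt controllers, timers, and video hardware all have to stay in lock-step with the CPU.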

  10. I wonder what the intention behind this is.
    Quite some years ago I read that the old x86 support is not going away because keeping it is just a few percent of the silicon.

    I’ve got a suspicion the main reason is patented instruction sets and yet another attempt to push AMD out of the market. Intel has quite a long history of illegal and anti-competitive behavior in attempts to destroy AMD. So from that point of view, it would indeed be interesting to hear what AMD’s thoughts about this are.

    For most users it probably won’t matter much. Software writers will have to re-compile (and debug) a bunch of software. I have no idea how much legacy code there is for which the difference would be significant. As far as I know, BIOS update utilities still run on a bootable version of FreeDOS these days. Not being able to boot your PC because of such a software incompatibility would be quite a nuisance.

    1. Let’s assume in 2 years we will see the first 64-bit-only cpu.

      Comments read like 8/16/32-bit will be forbidden and the CPUs will vanish from the earth… How many “old” boxes with “old” CPUs will still be available then? I’m sitting in front of an i7-2600; it is from 2011 and still rocking my boat.

      I’m sure there will be a FORTRAN/COBOL compiler to rescue the world. (Kidding, it already exists.)

      1. I just put my 2600k based system in a new case, and lapped the CPU heat spreader for fun. I replaced it in 2021 with a Zen 3 system, but that was out of extravagance rather than need. It is likely to become my 4 year old’s first PC. Her knowledge of PC gaming is currently Tux Racer, BeamNG.drive and Kerbal Space Program.

    2. “For most users it probably won’t matter much.”

      Not directly, but indirectly maybe. Let’s imagine crucial computer parts used in infrastructure are built on legacy technology. Water works, subway trains, power plants.

      Some of them rely on electronics that do their jobs dutifully for decades without being noticed. What happens if one day they need repair or replacement parts? x86 used to be future-proof, so no one thought about hoarding replacement parts. Until now, I mean.

      If x86 continued being faithful to itself, we wouldn’t need to worry about these things. If IT were wise, it would keep the lowest common denominator intact (i386) and instead remove outdated extensions of the past years (SIMD extensions like SSE1/2/3 or AVX).

      Alas, that’s not the case. We don’t value what we have until it’s gone. 😔

      1. Sorry, no.

        If you run decades-old infrastructure, don’t invest in the future, keep no inventory of the parts in use, and (as expected when unprepared) have no spare parts as replacements, then, well, welcome to the fail dome. I don’t understand why we have to think/organize/take care for those who don’t do their homework.

        1. Sorry, yes. What about when the homework has been done but the people responsible are unwilling to fund the investment in maintaining/transitioning the old infrastructure or build anew? Take, for example, the ASCE’s low grades for U.S. infrastructure. We know the infrastructure is in poor shape but we are unwilling to make the necessary investment. Saying “Tough luck” to the people driving across a bridge when it suddenly collapses seems to be our current choice.

          Joshua’s examples are things that cannot just be pushed into a “fail dome” followed by brushing them off one’s hands.

        2. There is no point in rebuilding, at great expense, a system that works perfectly, has lots of headroom, and runs a proven, dependable program. As soon as you start shifting architecture you effectively start again from square zero, and will likely end up building around hardware that is overly complex for the task at hand, introducing far more failure points along with the inevitable new software/feature bugs. The same sort of bugs might well have existed in the ’80s when the original software was created, but it has already gone through however much patching it required and has most likely been left largely alone for decades since. There is no need to break what isn’t broken.

          If you are starting at step zero on a brand-new project, the sensible thing is not to rely on hardware features that aren’t in common use now, as that makes finding the right designers, programmers, and hardware challenging even today, let alone in 10 years when the system you build will hopefully still be working away. But that is a very different thing from keeping legacy programs and compatible hardware for as long as they are up to the task and viable.

      2. Those places using outdated hardware? They do exist, and they already can’t buy replacement parts. When was the last time you saw a new motherboard with an ISA slot? SCSI? Or even a PCI slot?
        Intel removing some chip instructions won’t help, but modern PCs are already incompatible with older equipment.

    3. Most users, if you look at the numbers, are now using the ARM architecture.
      Apple switched from Intel to ARM on the Macintosh, and this is its fourth processor switch.
      It worked largely due to the tight CPU, hardware, and operating system integration that exists in the Apple ecosystem. The Intel ecosystem is very diverse and not so vertical. Most users could switch to a Mac and do the same stuff they’re doing on a Windows system, but not all.

      On a side note, Intel has tried at least four times to kill the 80x86 architecture, without success, due to legacy software and backward compatibility demands. The last attempt was Itanium, and AMD killed it with the backward compatible x86-64 architecture. AMD nowadays would have absolutely nothing to do with a possible Itanium II.

  11. Speaking as somebody who’s written bare-metal protected mode stuff, and might like to dust it off at some point for old-times’ sake: how good is the MiSTer project’s ‘486 emulation?

  12. I think going through all this hassle just for this is not a good plan. They need to make a bigger, more radical move, so that the hassle brings more benefits and stays future-proof for longer than maybe two years, at which point this would already start to look dated.

  13. All for it, if it helps make faster CPUs with simpler firmware and fewer security flaws. At some point you have to cut the old stuff. Everything 16/32-bit can run through emulation. Maybe not cycle-exact, but if you care about that, use FPGA emulation or whatnot.
    But if Intel is just doing this for patent shenanigans, screw them!

  14. I like my Apple //e and 6502.
    I use AppleWin and JACE to emulate a //e, all because of a little program called Diversi-Dial. Who knew a 1 MHz machine could do so much?

    So a true 64-bit machine would have no 8/16/32-bit instructions at all? I think that would be a disaster, because there are still plenty of systems out there that only have 16/32-bit versions of their control software. There are systems still running XP because it just works. If it isn’t broke, why mess with it? That’s the school of thought for a lot of systems still running XP. How many companies are still using Windows 95? How many are using Windows 98?

    When Windows 7 came out, a lot of people didn’t upgrade at first, but after it gained widespread adoption it was pretty much Microsoft’s “sweet spot”. Windows 7 was simple to operate. Windows 10? Why did MS come out with that? And after decades of selling their Windows software, why did MS make Windows 10 free? What do they gain? The conspiracy theorists say that Windows 10 sends every keystroke, every program, every website, etc. back to Microsoft. Do I like 10? No, I don’t. It doesn’t have the simplicity of Windows 7. Windows 10 does a lot of unwanted (in my opinion) stuff in the background, kind of like the old TSR programs that ate up memory “just in case” you wanted to run that program again, the school of thought there being to reduce load times and make the system appear faster. It was a good idea for its time.

    But I think the real problems that need solving are speed and heat/cooling. Looking at AMD’s 7950X, I’ve seen literature that says it’s supposed to run at 90 C. It took a while for CPUs to break the 4 GHz speed limit, but for a lot of people, myself included, the old system I have running 10, an AMD FX-8150 with 8 GB of RAM (top of the line when I bought it), is plenty fast enough. Do I want a faster CPU? Of course, but do I NEED a faster CPU? No. If you came up with a 10 GHz CPU that used a quarter of the power my current AMD FX-8150 uses and ran cooler and more efficiently, then I’m on board with it, but going out and buying a new system just to say I have the fastest CPU on the planet? Not needed. Do I miss Win 7? Yes. Simple is best.

    And for you D-Dialers missing the simplicity of text chat, check out https://magviz.ca

    1. The AMD FX CPU series was akin to Nvidia’s FX series GPUs: highly disappointing, because they provided little performance improvement over the previous generation and in some functions were slightly worse. Unlike with the FX GPUs, there were no driver updates to squeeze a bit more performance out of the FX CPUs.

      On my previous PC I upgraded from a quad core Phenom II to a 6 core FX. I ran a whole bunch of benchmarks and performance improvements were at most 5% in some categories. Some were slightly worse. Software HEVC video encoding improved by one frame per second.

      So I saved my $ and bought a mostly all new Ryzen 5 six core system. Same GPU, hard drives, and optical drives, Same Windows 10, everything else new. I’ve since upgraded GPUs to get hardware HEVC encoding with B frame support. I’m pretty happy with the new box over the past couple of years.

      Get a motherboard with a good chipset, 32 gig DDR4, an 8 core Ryzen 5 without video. It will burn your old FX system to the ground and do a sacrificial fairy dance around the ashes.

    2. >There are systems that are still running XP because it just works. If it isn’t broke,
      > why mess with it? That’s the school of thought for a lot of systems still running XP.

      I would put it a bit more fairly: if you invested money in something and it is making money back for you, and it continues to do so far past the time you thought it would, it’s all profit (minus the power bill): many hundreds of percent return on investment, and growing.

      Why would you change the system or upgrade, if it works? Just for a more fancy user-interface? That’s for desktop users, people who use the computer for general-purpose things.

  15. Way back when the 80386 was introduced I wondered why Intel, Microsoft, and IBM didn’t get together and eliminate 16 bit. They finally had a hardware system fully capable of protected mode operation, but it took much more time to get the software there.

    In 1985 the PC market was a mere 4 years old. The installed base of PCs and clones was far smaller, so it should have been possible to transition software to 32-bit relatively painlessly and cut loose from the old BIOS way of booting and operating hardware.

    PCs, XTs, and ATs (and clones) would have soon become worthless scrap, but the process would have been far less painful, with a much shorter time spent keeping old stuff on life support, since there wasn’t all that much CNC, industrial, medical, etc. equipment running on 8088 and 80286 systems by 1985.

    1. Because the 286 proved that people still needed real-mode apps. In fact, one of the main additions in the 386 was Virtual 8086 mode, which allowed a protected-mode operating system to emulate several 8086 processors (thus, in real mode). The 286 couldn’t do that, and OS/2 had to use tricks to let users run their DOS programs.

  16. I read the document, and one line jumped off the page at me:
    “After INIT, NMIs are blocked until explicitly unblocked by ERETS/ERETU/IRET.”
    That change suggests that NMIs have been abused to “own” machines, probably with local access.

  17. As long as there are also models available with the full instruction set I don’t see the problem.
    If they also make the simplified CPU open source it would be even better!
    Operating systems will have plenty of time to adapt and there would be software emulation for older applications, but most people will not need to use that. The result would be cheaper and more efficient processors.

  18. I say just leave it be. We have gone too far to start dropping 8-bit, 16-bit and 32-bit. This is why the PC has reigned over the market: nothing has ever been pulled out from under the x86 structure. I know a lot of Apple fanboys who ditched Apple when it swapped over to ARM, and they had already had to ditch PPC once before for x86 — and that is just a computer company with 10% of the computing industry. Asking the x86 world to let Intel neuter x86 will hurt a lot of infrastructure. Heck, the oldest part of the computing world is banking and finance, and they use COBOL on the back end for everything. I doubt they would have time to convert their code over to 64-bit, if that’s even possible.

    If Intel really wants to neuter x86 then release a consumer CPU that has that option. Leave the commercial server grade stuff alone.

    1. If you are talking about consumer CPUs: those ‘Apple fan boys’ proved that ditching hardware architectures is not a problem related to hardware at all.

      As long as the manufacturers can easily port all their software to the new architecture and the user does not lose performance in their applications (and preferably gains some), it’s going to be a win.

      So, it’s up to the Operating System and Compiler writers to make that transition go smooth and low-cost. Up to the hardware guys to come up with a stable and well-performing hardware platform.

      Apple has been able to do that at least 5 times. For Microsoft, it’s all quite a bit harder, because they chose to write a monolithic operating system that’s very much tied to the architecture of the PC. Windows is much more tied to the hardware architecture than macOS.

      Also, all hardware manufacturers will need to write new drivers, which they obviously don’t want. This is also putting a brake on Windows’ capability of transitioning to a new hardware platform.

  19. After reading it, I think this change wouldn’t pose a big problem for current software. First, it only removes operating-system capabilities (so when it “removes 16 and 32 bits”, it means it doesn’t support 16- or 32-bit operating systems; 32-bit user-space programs ARE still supported, and 16-bit user-space programs, I believe, were already unsupported in long mode).

    Also, remember that UEFI works in protected/long mode, and that drivers/loaders can also work in either protected or long mode, so at most a change in the UEFI firmware would be needed, and maybe in GRUB’s UEFI build to make it work in long mode if it doesn’t already. The same goes for any Windows version that works with UEFI.

    Also, they say that those capabilities won’t be removed from inside virtualized environments (probably because they already trigger an exception to be emulated by the virtualizer).

    So I don’t think that this would be a problem at the software side.

    1. The UEFI stuff is already there and has been for at least a decade. I don’t think they’d even need to change the platform initialization spec as that covers x64 among the list of architectures covered in the spec.
      This is Intel pitching this so I’m suspicious of their motives, but from a technical standpoint, cutting 16 and 32 bit support from the ISA seems pretty low impact on the surface.

  20. I seem to recall years ago that Intel said it wasn’t that many transistors in the grand scheme of things to maintain reverse compatibility. That seems to imply it isn’t much die area or impact to the overall architecture. I wonder what’s changed.

  21. Somehow I was under the impression that AMD’s move to openSIL was meant to coincide with removing the 16-bit and 32-bit modes from the boot process too. But when looking at the articles on that, I see nothing of the sort.

    What I find curious is that so many people here are outraged by this when it doesn’t even prevent running 32bit applications. It just prevents booting a 32bit OS and might have some implications for pre-Windows Vista drivers. 32bit applications will still work fine.

    For those needing 16-bit support, just use old hardware. You are limited to 640 KB of memory addressing, so 10-year-old hardware will have more last-level cache than your application can use.
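For context on that 640 KB figure: real-mode addresses are formed as segment × 16 + offset, giving just over 1 MB of address space on the original 8086; 640 KB is the conventional-memory slice DOS programs could actually use, with video memory and ROM mapped above it. A quick sketch of the arithmetic (the function name is just illustrative):

```python
# Real-mode physical address = (segment << 4) + offset, masked to the
# 8086's 20-bit address bus (the wraparound the A20 gate later controlled).
def real_mode_address(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(real_mode_address(0xFFFF, 0x000F)))  # 0xfffff -- top of the 1 MB space
print(hex(real_mode_address(0xA000, 0x0000)))  # 0xa0000 -- 640 KB, start of video memory
```

Any modern CPU cache larger than a megabyte can therefore hold the entire real-mode address space, which is the commenter's point about last-level cache.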

  22. It sounds like Intel sees the benefits, but what are they in real terms?

    Intel just says “With this evolution, Intel believes there are opportunities for simplification in our hardware and software ecosystem.”

    Does it boot faster?
    Save a material number of transistors?
    Use less energy somehow?
    Make it cheaper to design/build/test?
    Make it more secure?

    If any or all of these are improved, I think this is a good move. The OS bootstrap should get easier, and the changes wouldn’t take long to make.

    Maybe we will see a prototype … or AMD will.

  23. This post repeats errors I’ve seen numerous times on this proposal. Specifically 32-bit support is not dropped for user-mode code, only ring 0. Also, OSes are already booting in long mode because that’s how UEFI works. I’m not sure an existing OS would even notice the difference.

  24. That’s a very bad solution. If you have to produce extra chips, the price of those chips will be determined by the demand for the hardware.
    I like the solution with one small Atom core inside a modern 64-bit CPU much better, ideally with the ability to freely change the clock rate from 4 MHz up to the maximum the rest of the 64-bit CPU offers for this small Atom core. And if it provides virtualization, even better.

  25. DO IT. Boot in long mode, run in long mode, ditch the baggage. The hardware is fast enough that you can virtualize 32-bit software in trap-and-emulate mode and it’ll still run faster than it did when 32-bit hardware was state of the art.

    And if Intel doesn’t do it, AMD will do it first.
