Although the world of the x86 instruction set architecture (ISA) and related ecosystem is often accused of being ‘stale’ and ‘bloated’, we have seen a flurry of recent activity that looks to shake up and set the future course for what is still the main player for desktop, laptop and server systems. Via Tom’s Hardware comes the news that the controversial X86S initiative is now dead and buried. We reported on this proposal when it was first announced and a whitepaper released. The X86S proposal involved stripping 16- and 32-bit features as well as rings 1 and 2, along with a host of other ‘legacy’ features.
This comes after the creation of a new x86 advisory group that brings together Intel and AMD, as well as a gaggle of industry giants ranging from HP and Lenovo to Microsoft and Meta. The goal here appears to be to cooperate on any changes and new features in the ISA, which is where the unilateral X86S proposal would clearly have been a poor fit. This means that while X86S is dead, some of the proposed changes may still make it into future x86 processors, much like how AMD’s 64-bit extensions made it into the ISA, except this time it’d be done in cooperation.
In an industry where competition, especially from ARM, is getting much stronger these days, it seems logical that x86-oriented companies would seek to cooperate rather than compete. It should also mean that for end users things will get less chaotic, as a new Intel or AMD CPU will not suddenly sneak in incompatible extensions. Those of us who remember the fun of the 1990s, when x86 CPUs were constantly trying to snipe each other with exclusive features (and unfortunate bugs), will probably appreciate this.
Just leave the 16-bit compatibility to simulators/emulators.
I agree, and support the argument with an observation about the “qemu” VM emulator as one example. I loaded the latest qemu system emulation package in a freshly updated workstation Linux installation yesterday. It includes 40 ISA models (some are “derivations” of others, there are 4 MIPS variations, 5 ARM, etc.).
Qemu uses hardware-assisted virtualization where possible (the Linux Kernel-based Virtual Machine, for example). It also does just-in-time translation (its TCG backend) where that isn’t available, though I have not researched that topic recently.
For really ancient systems, there are Bochs- and SimH-based emulators. Obsolescence Guaranteed has several projects, https://obsolescence.wixsite.com/obsolescence/pidp-11 which is a replica of a PDP-11/70 front panel. Similar for https://obsolescence.wixsite.com/obsolescence/pidp10 which is a replica of a PDP-10. Both projects use a Raspberry Pi to host the emulated systems. I have no relationship with Obsolescence Guaranteed other than wishing that I could justify building the PiDP-11/70.
If you need to somehow get that old Win3.11 program to work it makes way more sense nowadays to just use Dosbox-X or 86Box/PCem.
Can QEMU emulate in 16-bit mode and switch to virtualization once the guest switches to 64-bit mode?
That would hurt Windows 3.1 application performance on WINE, for example.
16-Bit Protected Mode compatible code used to be perfectly legal in x86-64 long mode (native mode, not compatibility mode).
As long as no segment arithmetic is performed by 16-Bit code, it can run in either Real-Mode or 16-Bit Protected-Mode (no V86 required).
That’s how Concurrent DOS 286 and FlexOS 286 managed to run “well behaved” DOS programs in 16-Bit Protected Mode on an iAPX286 CPU.
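The distinction this comment relies on can be sketched in a toy model: in Real Mode the segment value participates directly in the address calculation, while in Protected Mode it is an opaque selector into a descriptor table, so only code that never does arithmetic on segment values works under both translations. The descriptor table below is invented for illustration, not real CPU state:

```python
# Toy model of x86 16-bit addressing, showing why code that treats
# segment values as opaque handles runs in either mode, while code
# that does arithmetic on them only works in Real Mode.

def real_mode_linear(segment: int, offset: int) -> int:
    # Real Mode: linear address = segment * 16 + offset
    return (segment << 4) + (offset & 0xFFFF)

# Hypothetical descriptor table: selector -> base address of the segment.
DESCRIPTOR_TABLE = {0x0008: 0x20000, 0x0010: 0x80000}

def protected_mode_linear(selector: int, offset: int) -> int:
    # Protected Mode: the "segment" is just an index; the MMU looks up
    # the base (plus limit and access rights) in a descriptor table.
    base = DESCRIPTOR_TABLE[selector]
    return base + (offset & 0xFFFF)

# Well-behaved code only ever passes segment values around unchanged,
# so the same program logic works under either translation:
print(hex(real_mode_linear(0x1000, 0x0010)))       # 0x10010
print(hex(protected_mode_linear(0x0008, 0x0010)))  # 0x20010

# Segment arithmetic like `segment + 0x1000` to reach the next 64 KB
# assumes the Real Mode formula, and breaks under Protected Mode.
```

That "don't touch the segment value" discipline is what made the "well behaved" DOS programs mentioned above portable to 16-Bit Protected Mode.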
In short, removing Real-Mode support is one thing.
Removing the whole classic x86 registers such as AX, BX and so on is a different matter.
The original 80286 Protected-Mode instructions should be kept supported, at the very least.
They were historically independent of 8086/Real-Mode.
Same goes for the Ring Scheme and for the Segmentation Unit, which is part of the MMU next to the Paging Unit.
There were OSes like 16-Bit OS/2 that indirectly used segmentation for security.
Segments can be marked as executable and non-executable.
Things like buffer overflows don’t cause exploits as easily in a segmented memory model.
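The mechanism behind that claim can be sketched: every access through a segment is checked against the descriptor’s limit, so an overflowing write faults instead of silently trampling adjacent memory. A toy Python model, with invented descriptor values for illustration:

```python
# Toy model of segment-limit checking, as done by the x86 segmentation
# unit: any offset beyond the segment's limit raises a fault (think #GP)
# instead of silently overwriting whatever lies after the buffer.

class SegmentFault(Exception):
    pass

class Segment:
    def __init__(self, base: int, limit: int, executable: bool):
        self.base = base
        self.limit = limit          # highest valid offset in this segment
        self.executable = executable

    def check(self, offset: int) -> int:
        # Every memory access is limit-checked before translation.
        if offset > self.limit:
            raise SegmentFault(f"offset {offset:#x} exceeds limit {self.limit:#x}")
        return self.base + offset

# A 4 KB data segment, marked non-executable.
data_seg = Segment(base=0x10000, limit=0x0FFF, executable=False)

print(hex(data_seg.check(0x0800)))   # a valid access inside the segment
try:
    data_seg.check(0x1000)           # one byte past the limit -> fault
except SegmentFault as e:
    print("fault:", e)
```

This is of course only part of the picture; an overflow that stays inside one segment can still corrupt that segment's data.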
Could you illuminate what kind of practical real-world software this would adversely affect?
Probably software in the special fields of chip design, chemistry and architecture etc.
The ancient Auto Sketch 2 for Windows was used to sketch car accidents, for example. It was required because of the macro language only found in that version.
Intel had internal software written for Windows 3.1x, as I read some 10 years ago.
There’s a huge number of Windows 3.x software no one talks about.
Unless it fails and goes viral in the media (software at airports, subway railways etc).
It’s not consumer software, after all.
I mean, sure, there’s DOSBox that can run Windows 3.x.
And complete PC emulators such as PCem/86Box, Bochs, Qemu etc.
There are also OSes with a DOS VM built-in. OS/2, Windows NT, DESQView/X..
But the problem is performance here. That’s what I was concerned about from the very start.
Games and little tools work fine in emulation, no problem.
But what if real killer applications are being used?
Programs that do control things or process information?
Windows 3.x programs that used to work with huge amounts of data?
Early SQL, PCB or raytracing software for Windows 3.1 could handle quite a lot of data, which may end up in the hundreds of megabytes if not gigabytes.
A 486DX-66 couldn’t do that in real time even back in the day.
Raytracing a scene at 1600×1200 pixels in 16 million colors took ages (hours, days) on contemporary hardware.
In an emulator on a modern hot-rod “gamer PC” it’s no better, because performance is limited by the emulation (full CPU core emulation in software).
If the same Windows 3.1 software were run natively on a modern PC, it could do the same thing in 3 seconds, but at 4K resolution.
And DOSBox+WfW 3.11 simply can’t provide the same power.
Emulators such as 86Box top out with Pentium II emulation.
That’s notable, but a joke compared to a native Windows 3.1 installation on a Pentium IV from 20 years ago. Or Windows 9x on that same machine.
So why not use Windows 9x or XP? Remember, soon modern PCs won’t be able to natively boot 32-Bit OSes (with their 16-Bit compatibility) anymore.
They have to be virtualized on modern PCs. By contrast, x86 or x86-64 code on 64-Bit Windows can be executed directly by the processor.
The same could still be done with 16-Bit code, in principle, if Windows had a modernized WoW that doesn’t need NTVDM/V86.
WineVDM/OTVDM, essentially, but without an 80286 CPU emulator.
– If 16-Bit compatibility is kept, maybe this day would come eventually?
Really, for such demanding applications, it would be nice to have Win16 programs able to run their code natively on real silicon.
Somehow, by using certain software environments.
Be it through Windows NT or ancient Wabi (the Windows 3.1 compatibility layer for *nix), WINE or a VM.
(Wabi runs the 386 Protected-Mode kernel on the Unix/Linux kernel, without DOS. The Windows applications are mostly 16-Bit code, still.
They use 64 KB blocks, so they work in both Real- and Protected-Mode.
They are mode-less, so to say. They merely make API calls.
The Windows 386 kernel then translates them to 4 KB blocks transparently.)
The latter (a VM) can be supported by hardware-assisted virtualization.
You know, AMD-V and Intel VT. They’ve been expanded in the meantime and have an I/O MMU and whatnot.
So I hope that all these virtualization features aren’t being removed by the next incarnation of x86S.
But if they can be kept, then why not support for 16-Bit Protected-Mode code, too?
Normal Win16 applications don’t mess with segments. They use pointers. The OS has to handle segment things.
On a 32-Bit or 64-Bit OS, things can be handled differently than on a real 80286. A software MMU can be used, in the form of a DLL.
What’s important is to keep compatibility for 16-Bit instructions in x86-64 long mode, I think.
So that open source projects or commercial VM software can continue to make use of it in the future.
It would make things easier for modern VM software or projects like WINE (yes, I know that WINE has incorporated DOSBox code for a while now).
Long story short: It would be nice, performance wise.
Especially since it barely requires die area. It’s rather a microcode thing.
Removing 16-Bit code support entirely would save the processor die 0.1% in resources or so?
Best regards,
Joshua
PS: Sorry for long comment and poor English. It’s night here and I’m tired. It could have been worded differently. :(
I love the idea of a critical, multibillion-dollar industry, driven by top engineers, that secures its future with the hope that things will remain like in 1993. I hope that the core element of that software was written and is thanklessly maintained by someone from Nebraska, to fulfill that XKCD feeling :-)
It’s not the die resources. It’s a security and support nightmare. It’s also a forward-thinking point: 16 bit software does not benefit from modern processor speeds, because single threaded 16 bit performance hasn’t been an optimization target for 40 years. At some point soon you’re either faster virtualizing/emulating, or just using the older processors.
I’ll just point out that with dynamic recompilation techniques the overhead of running under emulation is roughly 50%. So that render using Win3.1-era software that takes 3 seconds would maybe take 6. It wouldn’t go back to Pentium II level performance or anything.
(That said, your point is valid: if maintaining this mode is a matter of some microcode, then why not keep it?)
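As a sanity check on those numbers (the 3-second native render is the earlier comment’s assumption, not a measurement, and “roughly 50% overhead” is read here as the emulated code running at about half native speed):

```python
# Back-of-the-envelope slowdown for dynamic recompilation, using the
# comment's assumed numbers: a 3 s native render, with emulated code
# running at roughly half native speed.
native_seconds = 3.0
relative_speed = 0.5                     # ~50% of native speed

emulated_seconds = native_seconds / relative_speed
print(emulated_seconds)                  # 6.0

# Compare with the "thrown back to the 90s" worry: a 486-class machine,
# hundreds of times slower than a modern core, would instead take
# native_seconds multiplied by several hundred, i.e. tens of minutes.
```

Even a pessimistic 3x emulation penalty would still land at seconds, not hours, which is the point being made.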
The cost of all of these legacy features isn’t just “some microcode”, it’s the added complexity, and effort needed to validate the correctness of the processor. They add an absolute truckload of corner cases to an already hideously complex architecture that is suffering from a steady stream of security problems, some of which have been caused by corner cases in these rarely-used features.
This completely misses that you’re better off running Win 3.x and older things under Bochs, DOSBox, etc., which won’t be affected by these changes at all, and will be even more portable for the weird legacy things you find in the corporate basement.
“Could you illuminate what kind of practical real-world software this would adversely affect?”
Yes. WINE used to require 16-Bit support in the Linux kernel.
https://news.softpedia.com/news/Linux-Kernel-3-14-Breaks-Wine-for-16-bit-Windows-Applications-447273.shtml
Since this is all open source software, forks of older versions could still happen.
Sooo CPU manufacturers should be forced to deal with all the added legacy junk just on the off chance that someone might want to fork a decade-old version of some software, rather than use the current, supported, and working versions? That argument just doesn’t hold water.
Most of the listed “use cases” can run perfectly well in fully virtualized environments with performance far exceeding the hardware it was originally designed for, or can’t run on modern machines anyway due to hardware dependencies – it’s getting difficult to find PCI slots, let alone ISA, outside industrial computers.
I think that you mixed things up: “Removing the whole classic x86 registers” is NOT what protected mode or x86_64 was about. Also, “pure” 64-bit long mode was never fully compatible with 16-bit protected mode, because it fully removed segmentation. That’s why you need the “compatibility mode” inside “long mode” (check https://en.wikipedia.org/wiki/X86-64#OPMODES ). As for the bug you mention in your other comment, it seems unrelated to 16-bit support itself; it looks more like 16-bit applications were taking advantage of a security problem, or doing something in an insecure way.
I was under the impression that on x64 systems Wine uses emulation to run 16-bit programs.
Thing is, many virtual machine providers still use legacy BIOS boot, so the compatibility mode is required to get a VM off the ground.
We need more support for rings 1 and 2, to better isolate drivers and the malware packages from game and security companies. More security, not less!
Which operating systems use more than ring 0 and 3?
OS/2 and eComStation did. And ArcaOS is still alive.
And being improved for working with newer technologies, such as UEFI, USB, SATA.
Well… in other words: OS/2. Period :-P
They all should, but because the big ones didn’t back in the day, the modern x86 extensions only really support rings 0 and 3 correctly.
Well, a lot didn’t back in the day, because literally every CPU architecture except x86 had a user mode and (depending on what they called it) a system, supervisor, or protected mode. So OS designers were used to using only ring 0 and ring 3.
VAX had four rings, and AFAIK Intel added four rings to the i386 specifically in the hopes that DEC would port VMS, which they never did. In fact, it wasn’t until 2020 that OpenVMS V9.0 was released with support for x86-64.
Just (1) take the whole, hideous, horrendous ISA out and shoot it, and (2) stop trying to distribute software as machine code, so you don’t have to care about backward compatibility with the 4004.
Good. Old, perfectly functional, and incredibly expensive industrial and lab equipment needs 32- and sometimes 16-bit compatibility. If you personally prefer replacing equipment and code every year, use Apple products.
It feels like there could be a compatibility layer for those. I don’t expect my PC to directly control my printer’s motors, only to speak to something onboard that can, or a middleman that can, like Google Cloud Print. I certainly don’t need motor controllers on my PC so badly that I’d be willing to pay extra for them or slow down development of faster chips.
And then the middleman goes down. Why create such dependencies?
“If you personally prefer replacing equipment and code every year, use Apple products.”
There were just three architectures before Apple Silicon, in 40 years.
There was the Motorola 68000 from 1984 to the mid-90s.
Power PC from the early 90s to the mid-2000s.
Intel x86 from the mid-2000s to now (still supported, albeit being phased out).
Also, Mac OS X 10.4 (mid-2000s, on Power PCs) supported vintage Macintosh software all the way back to the early 80s.
It contained the Classic Environment, which ran a copy of Mac OS 9.2.
This Mac OS 9.2 had an internal Motorola 68000 emulator for code that wasn’t Power PC.
And when the switch to Intel happened in the mid-2000s, Mac OS X provided the Rosetta emulator.
It ran Power PC applications as late as Mac OS X 10.6.8, which was supported until the 2010s.
The emulation also supported Mac OS 8/9 applications from the 90s, which used the Carbon API (as opposed to Mac OS X’s Cocoa API).
The Carbon API was a subset of the old MacOS API and could be used on both Mac OS 8/9 and Mac OS X.
In practice, though, this was not so much of an issue.
On Macintosh, applications are usually Fat Binaries (Mac OS 7/8/9) or Universal Binaries (OS X).
That means the application bundle can contain multiple binary files, one for each processor.
Currently made Macintosh applications thus contain both intel x86 and Mx binary files.
Rosetta 2 isn’t even needed all the time.
But (this is why macOS was brought up) Apple removed support for 32-bit x86 binaries entirely. One version supported them, the next deprecated them, the next dropped them. So they were gone within about 18 months. Apple also seems to be cutting back pretty quickly on which Intel models macOS supports, but that’s another kettle of fish.
32-bit compatibility was not removed in the proposed x86s, only 16-bit. Any 16-bit programs can be emulated just fine on the cheapest modern hardware.
Intel/AMD could also continue to produce some older ISA chips indefinitely for industrial purposes since they won’t need performance improvements.
“Any 16-bit programs can be emulated just fine on the cheapest modern hardware.”
But at the cost of performance. They would be thrown back to the 90s, performance-wise.
And 16-Bit Windows applications are a thing of their own, I think.
They had been used seamlessly on 32-Bit OSes for many years via the 16-Bit subsystem.
As if they were native 32-Bit applications, with the same use cases.
Not all people would be happy to go back from Pentium IV to 486 speeds. Let’s think about it.
My x86 Software runs just fine on my ARM Mac.
Maybe no one needs HW-level compatibility with a more than 40-year-old CPU.
Then ARM is obsolete too, because it is also 40 years old.
A lot of the world runs on, “if it isn’t broke”…like Windows 10. ;-)
You misspelled “Windows 7”
B^)
You misspelled “Windows NT”
Can the latest ARM v9.6 or whatever run 40-year-old ARM OS’s unmodified?
these days its all about more cores, faster cores or more efficient cores. the architecture seems to matter a lot less. if it can crunch fp64 for days or run for days, who really cares? the time it will take to get a new architecture, like risc-v, up to snuff is not insignificant. people dont consider that x86 and arm are where they are today because they have had 40 years to work out all the bugs.
“My x86 Software runs just fine on my ARM Mac.”
Windows 11 for ARM, too. Via Parallels Desktop.
And the funny thing is that Windows 11 for ARM can run Win32 (x86), Win64 (x86-64) and Win64 (ARM) applications.
– Maybe older Win32 (32-Bit ARM) or Metro apps, too, haven’t tried yet.
The funny thing is that I’m even able to play 32-Bit Windows 3.1 games that used the Win32s extension and WinG.
I’m just hoping that WineVDM (OTVDM) gets an ARM port eventually.
So I will be able to play classic Windows 3.x games such as WinTrek, MicroMan, Bang Bang or Comet Busters! :D
DING DONG the witch is dead!
i generally think de-crufting x86 is a good idea. its a better idea if we can de-cruft all the x86.
And then what? ARM has no standard bootloader, for example, and is highly vendor-dependent. PowerPC is expensive. RISC-V is still having performance problems (and is not without its own security issues). Who to choose?
ARM and RISC-V support UEFI just fine. RISC-V isn’t having “performance problems” — it’s just that nobody with the time and money to make server-grade CPUs has implemented one with RISC-V yet. The ISA itself has nothing to do with that.
“Potentially support” and “all of them support” are two different things. What I have seen from ARM was a mess of different vendor solutions. Everything but the kitchen sink.
RISC-V – as much as I appreciate a ‘free’ ISA – hasn’t happened yet.
Compatibility is the only thing good about x86.
Same for Windows.
its still a lot easier to stick your program in a virtual machine running a period correct operating system than it is to get all your stuff to run on the latest version (given that updates like to break things). especially when you can just port the image files every time you upgrade. i think thats what im going to do in the post windows world (when 10 reaches end of life). going to arm or whatever you will need to transition to emulation, and thats fine too.
“its still a lot easier to stick your program in a virtual machine running a period correct operating system than it is to get all your stuff to run on the latest version (given that updates like to break things). ”
Often yes, but not necessarily. Windows 3.1+Win32s, for example, runs quite unstably in VMs, because the latter uses lots of hacks (all the thunking stuff).
Hardware-assisted virtualization is required, at the very least.
But even if MS Free Cell finally executes just fine, some Win32s applications that would otherwise run OK on physical hardware still fail mysteriously.
Also, CPU behavior isn’t completely understood by emulator authors yet (and even VM software must fall back to emulation sometimes, due to privileges and other things).
For example, things like Concurrent DOS 286 can’t run in most emulators yet, because 80286 emulation is still incomplete.
Again, I’m not going to disagree with you.
Windows 11 on ARM does surprisingly well at the moment.
I just like to point out that there are exceptions.
In reality, emulation isn’t the magic solution for everything.
Also because popular emulators are being written by dedicated individuals who have to figure out everything through trial and error.
Official documents by Intel and AMD are no big help; they are either incomplete or just plain false.
Especially the old stuff, from the 386 era and before, is not completely understood by the authors of newer technical documents.
So it’s necessary to check ancient 8086/80286 documents and old patents and figure out the correct CPU behavior.
So turn off those hacks in the VM and Qemu, or skip them in Bochs. These complaints of yours are not unsolvable; even if the real solution is getting a military SBC-based NUC with an old Intel ISA inside, they’re still being made for industrial control and you aren’t going to put it online.
The question is, do you really think it is a benefit to you that a Core Ultra 200 can boot Windows 3.1 just fine, at the cost of complexity that can lead to bugs and lower performance/efficiency?
Hi. It’s not about “booting Windows 3.1”. That’s not possible anymore anyway, because the CSM has been removed from UEFI.
It’s rather about 16-Bit instruction compatibility on modern x64 OSes.
If, say, WineVDM/OTVDM (open source) had a virtualization backend, it could be very quick.
Second, what “complexity” exactly?
The 80286 was from 1982 and had a total of 134,000 transistors.
Total. With bus interface unit, ALU, MMU and microcode.
How much die space would this probably require on a recent CPU? A multi core CPU?
The real 286 was made on a 1.5 micrometer (micron) process. That’s 1,500 nanometers (nm)!
On a modern CPU die, at 3 nm, how much would 134,000 transistors take up? By percent? 1%, maybe?
How much would a CPU cache take up, by comparison?
And then we must realize that a whole 80286 circuit isn’t even required for backwards compatibility.
Only a fraction of the complete 80286 would be needed to maintain backwards compatibility.
The segmentation unit, maybe. And parts of the original microcode, maybe.
Seriously, the whole “backwards compatibility takes up die space” thing is questionable, I think.
I’m sure most things could be re-implemented in microcode.
Considering that we have SSDs in the TB range by now, and CPUs with SRAM-based caches that are larger in capacity than the memory expansion of a complete 80486 PC, this excuse is nothing but a bad joke. :(
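The die-budget claim above can be put into rough numbers. The 80286 transistor count comes from the comment; the modern count is an assumed round figure (~10 billion, plausible for a recent desktop CPU), and the comparison deliberately ignores that validation effort, not raw transistor count, is the real cost of legacy features:

```python
# Order-of-magnitude check: what fraction of a modern CPU's transistor
# budget would an entire 80286 represent? (Counts are rough public
# figures; the modern count is an assumption, not a datasheet value.)
i286_transistors = 134_000
modern_cpu_transistors = 10_000_000_000   # assumed ~10 billion

fraction = i286_transistors / modern_cpu_transistors
print(f"{fraction:.5%}")   # 0.00134%
```

So in raw transistor terms a whole 80286 would be noise, which supports the die-area point while leaving the validation-complexity counterargument raised elsewhere in the thread untouched.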
x86 needs improvements if it wants to take on ARM. Also, if people are so angry… they can use current-generation PCs for the next 10-20 years safely.
No! The x86 needs free evolution! That’s what made x86 so wonderful in the past years! So unique.
It outgrew itself over the years, despite being so limited at first.
All the additions, like MMX (SIMDs on top of x86 registers) were cool hacks that were shoehorned on an existing architecture.
To this day, x86 has kept compatibility with its grandfather.
And with it, strange but useful CPU instructions of the past that can come in handy in certain situations (like adjust flag/AF, parity flag/PF etc).
That’s what made x86 so unique! It’s a makeshift solution that stood the test of time!
Something we all could depend on anytime. And Intel does its best to ruin this, sadly. All just because the company fails to compete with ARM.
X86S was (is) meant as a sacrifice, to save Intel’s own skin.
It’s not about improving. It’s about butchering the heart of x86 for a temporary gain in performance/chip wafer yield.
The real reason why Intel wants cooperation is a different one, I think.
And it’s simple. If, say, AMD keeps making full-fledged x86-64 CPUs, then Intel’s X86S would look inferior (which it is).
Thus, AMD would be the new leader of the x86 architecture (which it basically has been since AMD64 aka x86-64 was introduced).
So Intel basically wants all CPU makers to be “partners in crime”.
They should all make inferior X86S processors, so that Intel can remain relevant.
In return, Intel will surely reward its partners through licensing agreements and whatnot.
So that everyone is happy. Except the users.
But that’s just me thinking out loud. I’m sure others have similar thoughts here.
Where can I bet on polymarket that X86 is still going to be what we’re stuck with for Serious™ computering for a long long time
May as well make this interesting
Not a hack…
I still have 20 IBM PCs, 5 different models, made between 1981 and 1985. I run Borland Turbo C regularly and other programs as well. I also have a NEC PC running Windows 98 version 2. I have a Dell laptop with no USB ports, because USB hadn’t been invented yet. They all run fine and I can’t part with any of them. I have new 64-bit computers that have lots of memory and lots of storage; I wish the hardware and software would be 64-bit only. I want them to be lean and mean. I don’t want a lot of old crap on my new computers.
Everybody is talking about performance, but I thought the issue was power consumption; that’s where ARM is ahead.