Although the world of the x86 instruction set architecture (ISA) and its related ecosystem is often accused of being ‘stale’ and ‘bloated’, we have seen a flurry of recent activity that looks to shake up and set the future course for what is still the main player for desktop, laptop, and server systems. Via Tom’s Hardware comes the news that the controversial X86S initiative is now dead and buried. We reported on this proposal when it was first announced and a whitepaper was released. X86S involved stripping out 16- and 32-bit features and rings 1 and 2, along with a host of other ‘legacy’ features.
This comes after the creation of a new x86 advisory group that brings together Intel and AMD, as well as a gaggle of industry giants ranging from HP and Lenovo to Microsoft and Meta. The goal appears to be to cooperate on any changes and new features in the ISA, which is where the unilateral X86S proposal would clearly have been a poor fit. This means that while X86S is dead, some of the proposed changes may still make it into future x86 processors, much like how AMD’s 64-bit extensions eventually became part of the shared ISA, except this time it’d be done in cooperation from the start.
In an industry where competition, especially from ARM, is getting much stronger these days, it seems logical that x86-oriented companies would seek to cooperate rather than compete. It should also mean less chaos for end users, as a new Intel or AMD CPU will not suddenly sneak in incompatible extensions. Those of us who remember the fun of the 1990s, when x86 CPUs were constantly trying to snipe each other with exclusive features (and unfortunate bugs), will probably appreciate this.
Just leave the 16-bit compatibility to simulators/emulators.
I agree, and I’d support the argument with an observation about the “qemu” VM emulator as one example. I loaded the latest qemu system emulation package on a freshly updated workstation Linux installation yesterday. It includes 40 ISA models (some are “derivations” of others; there are 4 MIPS variations, 5 ARM, etc.).
Qemu uses hardware-assisted VMs where possible (the Linux Kernel Virtual Machine, for example). I think it also does some just-in-time translation, though I have not researched that topic recently.
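For the curious, “hardware-assisted” here means QEMU (when run with -enable-kvm) drives the kernel’s KVM module through ioctl() calls on /dev/kvm. Below is a minimal C sketch of that path, assuming a Linux box with KVM enabled; it only creates an empty VM and one vCPU, where a real VMM would go on to map guest memory and loop on KVM_RUN:

```c
/* Minimal sketch of the hardware-assisted path QEMU can use on Linux:
 * talking to the KVM kernel module directly via /dev/kvm. This only
 * creates an (empty) VM and one vCPU; error handling is kept terse. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) { perror("open /dev/kvm"); return EXIT_FAILURE; }

    /* Sanity check: the stable KVM API reports version 12. */
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);

    /* Create a VM file descriptor, then one virtual CPU inside it. */
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);
    if (vm < 0) { perror("KVM_CREATE_VM"); return EXIT_FAILURE; }
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    if (vcpu < 0) { perror("KVM_CREATE_VCPU"); return EXIT_FAILURE; }

    puts("Created a VM and vCPU; guest memory and KVM_RUN would come next.");
    close(vcpu); close(vm); close(kvm);
    return EXIT_SUCCESS;
}
```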
For really ancient systems, there are Bochs- and SimH-based emulators. Obsolescence Guaranteed has several projects, such as https://obsolescence.wixsite.com/obsolescence/pidp-11, a replica of the PDP-11/70 front panel, and https://obsolescence.wixsite.com/obsolescence/pidp10, a similar replica of a PDP-10. Both projects use a Raspberry Pi to host the emulated system. I have no relationship with Obsolescence Guaranteed other than wishing that I could justify building the PiDP-11/70.
That would hurt Windows 3.1 application performance on WINE, for example.
16-bit Protected-Mode code is still perfectly legal in x86-64 long mode (via compatibility mode; it’s Real Mode and virtual-8086 mode that long mode drops).
As long as the 16-bit code performs no segment arithmetic, it can run in either Real Mode or 16-bit Protected Mode (no V86 required).
That’s how Concurrent DOS 286 and FlexOS 286 managed to run “well-behaved” DOS programs in 16-bit Protected Mode on an iAPX286 CPU.
In short, removing Real-Mode support is one thing.
Removing the classic x86 registers such as AX, BX and so on is a different matter entirely.
The original 80286 Protected-Mode instructions should be kept supported, at the very least.
They were historically independent of the 8086 and Real Mode.
Same goes for the ring scheme and for the segmentation unit, which is the part of the MMU that sits next to the paging unit.
There were OSes like 16-bit OS/2 that indirectly used segmentation for security.
Segments can be marked as executable or non-executable.
Things like buffer overflows are much harder to turn into exploits in a segmented memory model.
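To make those descriptor bits concrete, here’s a minimal C sketch (my own illustration, not taken from any of the systems named above) of how a protected-mode GDT entry encodes both the “executable” property and the 16-bit default operand size:

```c
/* Rough sketch of an x86 protected-mode segment descriptor. The access
 * byte's "executable" bit is what let segmented OSes like 16-bit OS/2
 * keep code and data apart long before paging gained an NX bit. */
#include <stdint.h>

struct gdt_entry {                 /* one 8-byte GDT descriptor */
    uint16_t limit_low;            /* limit bits 0..15  */
    uint16_t base_low;             /* base  bits 0..15  */
    uint8_t  base_mid;             /* base  bits 16..23 */
    uint8_t  access;               /* P | DPL | S | type (bit 3 = executable) */
    uint8_t  limit_high_flags;     /* flags (G, D/B, L, AVL) | limit bits 16..19 */
    uint8_t  base_high;            /* base  bits 24..31 */
} __attribute__((packed));

/* 16-bit code segment: access 0x9A = present, ring 0, code, readable;
 * D/B = 0 in the flags nibble selects a 16-bit default operand size,
 * like the segments a Concurrent DOS 286-style system would hand to CS. */
static const struct gdt_entry code16 = {
    .limit_low = 0xFFFF, .access = 0x9A, .limit_high_flags = 0x00,
};

/* Data segment: access 0x92 clears the executable bit, so jumping into
 * it faults -- a buffer overflow can't simply run its payload here. */
static const struct gdt_entry data16 = {
    .limit_low = 0xFFFF, .access = 0x92, .limit_high_flags = 0x00,
};
```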
We need rings 1 and 2 to get more support, so that drivers, and the malware-like packages from game and security companies, can be isolated better. More security, not less!
Which operating systems use more than rings 0 and 3?
OS/2 and eComStation did. And ArcaOS is still alive.
And it’s still being improved to work with newer technologies, such as UEFI, USB, and SATA.
Just (1) take the whole, hideous, horrendous ISA out and shoot it, and (2) stop trying to distribute software as machine code, so you don’t have to care about backward compatibility with the 4004.
Good. Old, perfectly functional, and incredibly expensive industrial and lab equipment needs 32-bit and sometimes 16-bit compatibility. If you personally prefer replacing equipment and code every year, use Apple products.
It feels like there could be a compatibility layer for those. I don’t expect my PC to directly control my printer’s motors, only to speak to something onboard that can, or to a middleman that can, like Google Cloud Print. I certainly don’t need motor controllers on my PC so badly that I’d be willing to pay extra for them or slow down the development of faster chips.
“If you personally prefer replacing equipment and code every year, use Apple products.”
There were just three architectures before Apple Silicon, in 40 years.
There was the Motorola 68000 from 1984 to the mid ’90s.
PowerPC from the early ’90s to the mid-2000s.
Intel x86 from the mid-2000s to now (still supported, albeit being phased out).
Also, Mac OS X 10.4 (mid-2000s, on PowerPC) supported vintage Macintosh software all the way back to the ’80s.
It contained the Classic Environment, which ran a copy of Mac OS 9.2.
That Mac OS 9.2 had an internal Motorola 68000 emulator for code that wasn’t PowerPC.
And when the switch to Intel happened in the mid-2000s, Mac OS X provided the Rosetta emulator.
It ran PowerPC applications as late as Mac OS X 10.6.8, which was supported into the 2010s.
The emulation also covered Mac OS 8/9 applications from the ’90s that used the Carbon API (as opposed to Mac OS X’s Cocoa API).
The Carbon API was a subset of the old Mac OS API and could be used on both Mac OS 8/9 and Mac OS X.
In practice, though, this was not so much of an issue.
On the Macintosh, applications are usually Fat Binaries (Mac OS 7/8/9) or Universal Binaries (OS X).
That means the application’s executable can contain code for multiple processors, one slice per architecture.
Currently made Macintosh applications thus contain both Intel x86-64 and Apple Silicon (arm64) code.
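As a sketch of what that looks like on disk (assuming macOS and its <mach-o/fat.h> header; `lipo -archs` is the proper tool for this), a Universal Binary starts with a small big-endian fat header followed by one fat_arch record per architecture slice:

```c
/* Sketch: list the architecture slices in a macOS Universal Binary.
 * All on-disk fields are big-endian regardless of host CPU, hence
 * the ntohl() calls. Error handling is deliberately terse. */
#include <arpa/inet.h>   /* ntohl() */
#include <mach-o/fat.h>  /* struct fat_header, struct fat_arch, FAT_MAGIC */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    struct fat_header fh;
    fread(&fh, sizeof fh, 1, f);
    if (ntohl(fh.magic) != FAT_MAGIC) { puts("not a fat binary"); return 0; }

    uint32_t n = ntohl(fh.nfat_arch);
    printf("%u architecture slice(s):\n", n);
    for (uint32_t i = 0; i < n; i++) {
        struct fat_arch fa;
        fread(&fa, sizeof fa, 1, f);
        /* 0x01000007 = x86_64, 0x0100000C = arm64 (CPU_TYPE_* values) */
        printf("  cputype 0x%08x, %u bytes at offset %u\n",
               ntohl(fa.cputype), ntohl(fa.size), ntohl(fa.offset));
    }
    fclose(f);
    return 0;
}
```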
Rosetta 2 isn’t even needed all the time.
My x86 software runs just fine on my ARM Mac.
Maybe no one needs hardware-level compatibility with a more than 40-year-old CPU.
Then ARM is obsolete too, because it is also 40 years old.
A lot of the world runs on, “if it isn’t broke”…like Windows 10. ;-)
These days it’s all about more cores, faster cores, or more efficient cores. The architecture seems to matter a lot less. If it can crunch fp64 for days or run for days, who really cares? The time it will take to get a new architecture like RISC-V up to snuff is not insignificant. People don’t consider that x86 and ARM are where they are today because they have had 40 years to work out all the bugs.
“My x86 Software runs just fine on my ARM Mac.”
Windows 11 for ARM, too. Via Parallels Desktop.
And the funny thing is that Windows 11 for ARM can run Win32 (x86), Win64 (x86-64), and Win64 (ARM64) applications.
– Maybe older Win32 (32-bit ARM) or Metro apps, too; I haven’t tried yet.
The funny thing is that I’m even able to play 32-bit Windows 3.1 games that used the Win32s extension and WinG.
I’m just hoping that WineVDM (OTVDM) gets an ARM port eventually.
So I will be able to play classic Windows 3.x games such as WinTrek, MicroMan, Bang Bang or Comet Busters! :D
DING DONG the witch is dead!
I generally think de-crufting x86 is a good idea. It’s an even better idea if we can de-cruft all of x86.
And then what? ARM has no standard bootloader, for example, and is highly vendor-dependent. PowerPC is expensive. RISC-V is still having performance problems (and is not without its own security issues). Who to choose?
Compatibility is the only thing good about x86.
Same for Windows.