The 80386 was — arguably — Intel’s first modern CPU. The 8086 was commercially successful, but the paged memory model was stifling. The 80286 also had a protected mode, which differed from the 386’s. [Ken Shirriff] takes the 386 apart for us in a recent blog post.
The 286’s protected mode was less successful than the 386’s because of several key limitations: it was a 16-bit processor with a 24-bit address bus, it still required segment changes to access larger amounts of memory, and it had no good way to switch back into real mode for compatibility. The 386 fixed all that. You could adopt a segment strategy if you wanted to. But you could also load the segment registers once to point to a 4 GB linear address space and then essentially forget them. You also had a virtual 86 mode that could simulate real mode with some work.
The CPU used a 1-micron process, compared to the 1.5-micron process used earlier. The chip had 285,000 transistors (although the 80386SL had many more). That was ten times the number of devices on the 8086. The cheaper 386SX did use the 1.5 micron process for a while, but with a 16-bit external bus, this was feasible. While 285,000 sounds like a lot, a Core i9 has around 4.2 billion transistors. Times have changed.
The smaller process also enabled chips like the 386SL for laptops. On that chip, the CPU core took up only about a fourth of the die. The rest held bus controllers and cache interfaces to cut costs in laptop designs. That’s why it had so many more transistors.
[Ken] does his usual in-depth analysis of both the die and the history behind this historic device. We spent a lot of time writing protected mode 386 code, and it was nice to see the details of a very old friend. These days, you can get a pretty capable CPU system on a solderless breadboard, but designing a working 386 system took a few extra parts. The 80286 was a stepping stone between the 8086 and 80386, but even it had some secrets to give up.
I can’t work out if AI in “AI Williams” is a not-so-subtle way of saying that this summary was written by Artificial Intelligence, as the short sentence structure makes it super difficult to read. There is no flow; sentences often don’t link to the previous one.
Some fonts make it hard to read the L in Al. Al’s been a writer on HaD since 2015 I think, as well as a published author and editor.
Maybe Al is Ai and has been all along. Maybe this note has been written by Sarcasto Bot.
The article’s sentence length reads pretty well for me. On the other hand, using semicolons in sentences seems a little unusual compared to using commas.
Sorry, but if Al is a real person AND an editor then he needs to take a bit more care. The third paragraph, for example, looks like it was written by a child. But, looking back at his previous work, this seems to be his style. Let’s try a rewrite and see if we can make it a little more elegant:
“The CPU used a 1-micron process, compared to the 1.5-micron process used earlier. The chip had 285,000 transistors (although the 80386SL had many more). That was ten times the number of devices on the 8086. The cheaper 386SX did use the 1.5 micron process for a while, but with a 16-bit external bus, this was feasible. While 285,000 sounds like a lot, a Core i9 has around 4.2 billion transistors. Times have changed.”
“The full 386 used a 1-micron process which compared to the 1.5 micron process used on earlier CPUs and the earlier revisions of the cheaper 386SX. The chip had 285,000 transistors (and the 80386SL had many more) which was ten times the number on the 8086. While 285,000 sounds like a lot, a Core i9 has around 4.2 billion transistors which goes to show how much times have changed.”
Honestly, tell me the original is better…
OK, I’m being overly critical and I apologise. I get easily irked by bad grammar but everyone is entitled to their own style.
Maybe someone needs to write AI web filters that tailor to people’s preferred sentence structure and vocabulary. Highlight and right-click to expand the meaning, summarize, and/or fix the grammar to your particular flavor of authoritarianism.
The original is better.
Really, this doesn’t even rise to the level of a style guideline. It’s just personal style. I happen to find the short sentences MORE readable than yours. I wish I found it easier to write in that style myself. (One of my standard editing steps is “now go back and change nearly all those semicolons to full stops”.)
Holy crap, I thought this was Hack-a-day not Criticize-grammar-a-day 🙄 The article read perfectly fine to me, nothing about it made me think “wow, I sure wish this author had better grammar so I could actually comprehend the article”.
You had to be there in the ’80s. I still have my first 286 motherboard. I saved for months to get it and could not wait to save up for a 386.
Wonder if AI could help translate a high-res image of the die into RTL for synthesis?
Resistor Transistor Logic? :)
Register transfer level. It’s the abstraction used by hardware description languages like VHDL and Verilog to describe systems at a very low level for synthesis into silicon.
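To make that concrete, here’s a toy sketch in plain Python (my own illustration, not from the thread) of what “register transfer” means: state lives in registers, and on each clock edge every register is loaded from a combinational function of the current state.

```python
# Toy register-transfer model: a 4-bit counter with an enable input.
# In real RTL (Verilog/VHDL) this would be a clocked always/process block;
# each call to clock_edge() below plays the role of one rising clock edge.

def clock_edge(state: dict, enable: bool) -> dict:
    """Compute the next state from the current state: the 'register transfer'."""
    nxt = dict(state)
    if enable:
        nxt["count"] = (state["count"] + 1) & 0xF  # wrap at 4 bits
    return nxt

state = {"count": 0}
for _ in range(20):
    state = clock_edge(state, enable=True)
print(state["count"])  # 20 mod 16 == 4
```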
I guess someone could add AI to degate ( https://www.degate.org ), but my guess is that it would make things worse long before it made them better.
I guess the 80186 (and its AMD equivalents) have long since faded into obscurity. I used one in an embedded design around 2000, after Intel had quit making them. It was a weird feeling being able to edit, compile, and debug your embedded application ON your embedded application, something we take for granted now.
Might be an IP block for someone doing a design.
That reminds me of the HERO robot series.
One model had essentially an x86 PC on board that ran BASIC, at least.
There was also MS-DOS software available, I vaguely remember.
Not sure if it ran on the unit itself, though.
https://en.wikipedia.org/wiki/HERO_(robot)#HERO_2000_(ET-19)
What also comes to mind is the Intel 8052 AH BASIC platform.
The embedded system could be controlled via a serial port while in-circuit; it even had an auto-baud detection feature.
Technically, the Intel i960 (32-bit RISC) beat the 80386 by a year. A 386 was around 275,000 transistors, and the early i960 models (not exactly the first one, because I don’t have that data) were 250,000 transistors. These are chips without cache, so while the numbers seem small, there is no bloat from cache and the numbers reflect mostly random logic.
Intel having “random logic” in their chips could explain a lot!
B^)
24-bit? Or 20-bit, with 64K standing on the top rung, a 16-byte click below 1MB?
Pretty sure it was 24 bits for a 16MB physical address space. It is true that the segment register and the offset formed an effective 20-bit address in real mode (the segment is shifted left four bits and added to the offset, so consecutive segment values start just 16 bytes apart). But with a descriptor you could get to 16MB, which is 24 bits.
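For anyone who wants to see the arithmetic behind that, here’s a quick sketch (plain Python, just illustrating the math described above):

```python
# Real-mode x86 addressing: physical = (segment << 4) + offset.
# Segment and offset are both 16-bit values, so the result is an
# effective 20-bit address (21 bits at the very top, hence the HMA).

def real_mode_address(segment: int, offset: int) -> int:
    """Physical address for a real-mode segment:offset pair."""
    return ((segment & 0xFFFF) << 4) + (offset & 0xFFFF)

print(hex(real_mode_address(0x0000, 0x0000)))  # 0x0: bottom of memory
print(hex(real_mode_address(0x0001, 0x0000)))  # 0x10: next segment, 16 bytes up
print(hex(real_mode_address(0xFFFF, 0xFFFF)))  # 0x10ffef: just shy of 64K past 1MB
```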
The original 386 was a DX: it had all 32 address lines wired out, so it could address 4 GB and move 32-bit data. Later versions were cut down to fit into a smaller package and could only address 24 bits (16MB) with 16-bit data. Both models could theoretically have 64 terabytes of virtual memory, but in practice few operating systems used enough segment selectors to offer that much virtual memory.
Likewise, the 80286 could address 1GB of virtual memory, thanks to segmentation.
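Those two figures check out from the selector math alone (a quick sketch of my own, under the usual assumption of two full descriptor tables):

```python
# Virtual address space = number of segments x maximum segment size.
# A selector has 13 index bits and picks one of two descriptor tables
# (GDT or LDT), so up to 2 * 8192 = 16384 segments are addressable.

SEGMENTS = 2 * 2**13   # 16384 descriptors across GDT + LDT

seg_max_286 = 2**16    # 64KB maximum segment size on the 286
seg_max_386 = 2**32    # 4GB maximum segment size on the 386

print(SEGMENTS * seg_max_286 // 2**30, "GB")  # 1 GB: the 286 figure
print(SEGMENTS * seg_max_386 // 2**40, "TB")  # 64 TB: the 386 figure
```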
It also had memory protection based on segmentation.
Something that “flat mode” undermined. It took about two decades until technologies like DEP and the NX/XD bit fixed the issue.
If segmentation had continued to be used back in the ’90s, data and program code could always have been separated in a clean fashion. Buffer overrun exploits wouldn’t have been possible so easily.
Unfortunately, OS/2 1.3 was about the only advanced PC operating system that took full advantage of the x86 ring scheme, segmentation and virtual memory.
Makes me wonder how powerful it could have been if it had taken advantage of the 80386 feature set, while keeping its original design (vs. OS/2 2.x+). Working with 64TB of virtual memory would have been fascinating, for example.
I toyed with a proof-of-concept kernel using segmented memory. The idea was that most segments are backed by disk: you don’t need an explicit file system, but instead allocate segments that hold persistent memory-mapped objects. Sadly, the swap-partition-like structure of the disk was something I struggled to implement at the time, and the proof of concept didn’t quite live up to my grand ambitions. I’d try it again, but x86-64 “long mode” did away with most of the functionality of segmented memory. Very large page tables holding some 256 TB (a 48-bit virtual address) are not as interesting to me as a segmented virtual memory model.
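Purely to illustrate the flavor of that idea (a user-space toy of my own, nothing from the actual kernel described above; the file layout and sizes are made up), a segment-as-persistent-object can be faked with a memory-mapped file:

```python
import mmap
import os

SEG_SIZE = 4096  # arbitrary toy segment size

def open_segment(name: str, size: int = SEG_SIZE) -> mmap.mmap:
    """Map a named 'segment' backed by its own file, created on first use."""
    fd = os.open(f"{name}.seg", os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)      # ensure the backing store is big enough
    m = mmap.mmap(fd, size)     # shared mapping: stores persist to disk
    os.close(fd)                # the mapping keeps the file alive
    return m

seg = open_segment("counter")
seg[0] = (seg[0] + 1) % 256     # a persistent byte: no read()/write() calls
print("runs so far:", seg[0])
seg.flush()
```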
I don’t disagree, but we were talking about the 286 and its smaller address bus.
“The 8086 was commercially successful, but the paged memory model was stifling.” – we can all be pretty sure that the non-existent paged memory model must have been very stifling – almost as stifling as the segmented model actually used by the 86, 186 and 286 – paged memory came with the 386 ;)