[Ken] Looks At The 386

The 80386 was — arguably — Intel’s first modern CPU. The 8086 was commercially successful, but the paged memory model was stifling. The 80286 also had a protected mode, which differed from the 386’s. [Ken Shirriff] takes the 386 apart for us in a recent blog post.

The 286’s protected mode was less successful than the 386’s because of several key limitations: it was a 16-bit processor with a 24-bit address bus, it still required segment changes to access larger amounts of memory, and it had no good way to switch back into real mode for compatibility. The 386 fixed all that. You could adopt a segment strategy if you wanted to. But you could also load the segment registers once to point to a 4 GB linear address space and then essentially forget them. You also had a virtual 8086 mode that could simulate real mode with some work.
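
To make the contrast concrete: in real mode the physical address is the segment shifted left four bits plus the offset, which tops out around 1 MB, while the 386’s flat model just adds a 32-bit offset to a base of zero. Here’s a minimal sketch of that arithmetic in C (just the math, not tied to any particular toolchain):

    #include <stdio.h>
    #include <stdint.h>

    /* Real mode: physical = (segment << 4) + offset, about 1 MB reachable. */
    static uint32_t real_mode_addr(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + (uint32_t)off;
    }

    int main(void)
    {
        printf("real mode F000:FFFF -> 0x%05X\n",
               (unsigned)real_mode_addr(0xF000, 0xFFFF));

        /* 386 flat model: load the segment bases with 0 once, then a
         * plain 32-bit offset reaches the whole 4 GB linear space. */
        uint32_t flat_base = 0;
        uint32_t flat_top  = flat_base + 0xFFFFFFFFu;
        printf("flat model top      -> 0x%08X\n", (unsigned)flat_top);
        return 0;
    }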

The CPU used a 1-micron process, compared to the 1.5-micron process used earlier. The chip had 285,000 transistors (although the 80386SL had many more). That was ten times the number of devices on the 8086. The cheaper 386SX did use the 1.5 micron process for a while, but with a 16-bit external bus, this was feasible. While 285,000 sounds like a lot, a Core i9 has around 4.2 billion transistors. Times have changed.

A smaller process also allowed chips like the 386SL for laptops. The CPU core took up only about a fourth of the 386SL die; the rest held bus controllers and cache interfaces that cut the cost of building a laptop. That’s why it had so many more transistors.

[Ken] does his usual in-depth analysis of both the die and the history behind this historic device. We spent a lot of time writing protected mode 386 code, and it was nice to see the details of a very old friend. These days, you can get a pretty capable CPU system on a solderless breadboard, but designing a working 386 system took a few extra parts. The 80286 was a stepping stone between the 8086 and 80386, but even it had some secrets to give up.

27 thoughts on “[Ken] Looks At The 386”

  1. I can’t work out if AI in “AI Williams” is a not-so-subtle way of saying that this summary was written by Artificial Intelligence, as the short sentence structure makes it super difficult to read. There is no flow; sentences often don’t link to the previous one.

      1. Sorry, but if Al is a real person AND an editor then he needs to take a bit more care. The third paragraph, for example, looks like it was written by a child. But, looking back at his previous work, this seems to be his style. Let’s try a rewrite and see if we can make it a little more elegant:

        “The CPU used a 1-micron process, compared to the 1.5-micron process used earlier. The chip had 285,000 transistors (although the 80386SL had many more). That was ten times the number of devices on the 8086. The cheaper 386SX did use the 1.5 micron process for a while, but with a 16-bit external bus, this was feasible. While 285,000 sounds like a lot, a Core i9 has around 4.2 billion transistors. Times have changed.”

        “The full 386 used a 1-micron process which compared to the 1.5 micron process used on earlier CPUs and the earlier revisions of the cheaper 386SX. The chip had 285,000 transistors (and the 80386SL had many more) which was ten times the number on the 8086. While 285,000 sounds like a lot, a Core i9 has around 4.2 billion transistors which goes to show how much times have changed.”

        Honestly, tell me the original is better…

        1. Maybe someone needs to write AI web filters that tailor to people’s preferred sentence structure and vocabulary. Highlight and right-click to expand the meaning, summarize, and/or fix the grammar to your particular flavor of authoritarianism.

        2. The original is better.

          Really, this doesn’t even rise to the level of a style guideline. It’s just personal style. I happen to find the short sentences MORE readable than yours. I wish I found it easier to write in that style myself. (One of my standard editing steps is “now go back and change nearly all those semicolons to full stops”.)

    1. Holy crap, I thought this was Hack-a-day not Criticize-grammar-a-day 🙄 The article read perfectly fine to me, nothing about it made me think “wow, I sure wish this author had better grammar so I could actually comprehend the article”.

  2. I guess the 80186 (and its AMD equivalents) have long since faded into obscurity. I used one in an embedded design around 2000, after Intel had quit making them. It was a weird feeling being able to edit, compile, and debug your embedded application ON your embedded application, something we take for granted now.

    1. What also comes to mind is the Intel 8052AH-BASIC platform.
      The embedded system could be controlled via serial port while in-circuit; it even had an auto-baud detection feature.
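
      The auto-baud trick is simple arithmetic: have the user press Enter, and since a carriage return (0x0D) goes out LSB-first, its start bit is a lone low pulse exactly one bit time wide. A minimal sketch in C with hypothetical numbers (a 1 MHz timer and a 104-tick pulse; BASIC-52’s actual routine may differ):

          #include <stdio.h>
          #include <stdint.h>

          /* Timing one start-bit-wide low pulse gives the bit rate
           * directly: baud = timer frequency / pulse width in ticks. */
          static uint32_t baud_from_pulse(uint32_t timer_hz, uint32_t pulse_ticks)
          {
              return timer_hz / pulse_ticks;
          }

          int main(void)
          {
              /* Hypothetical measurement: 1 MHz timer, 104-tick start bit. */
              printf("detected baud: %u\n",
                     (unsigned)baud_from_pulse(1000000u, 104u));
              return 0;   /* prints 9615, i.e. ~9600 baud */
          }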

  3. Technically, the Intel i960 (32-bit RISC) beat the 80386 by one year. A 386 was around 275,000 transistors, and the early i960 models (not exactly the first one, because I don’t have that data) were around 250,000 transistors. These are chips without cache, so while the numbers seem small, there is no bloat from cache and they reflect mostly random logic.

    1. Pretty sure it was 24 bits for a 16MB physical address space. It is true that the segment registers and the offset formed an effective 20-bit address in real mode (each segment base starts just 16 bytes past the previous one). But with a descriptor you could get to 16MB, which is 24 bits.
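
      The arithmetic is easy to check. A quick sketch in C (just the math):

          #include <stdio.h>
          #include <stdint.h>

          int main(void)
          {
              /* Real mode: physical = (segment << 4) + offset. */
              uint32_t real_top = ((uint32_t)0xFFFF << 4) + 0xFFFF; /* 0x10FFEF */

              /* 286 protected mode: the descriptor carries a 24-bit base,
               * so physical addresses run up to 2^24 - 1, i.e. 16 MB. */
              uint32_t prot_top = (1u << 24) - 1;

              printf("real-mode top:     0x%06X (~1 MB)\n", (unsigned)real_top);
              printf("286 protected top: 0x%06X (16 MB)\n", (unsigned)prot_top);
              return 0;
          }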

      1. The original 386 was a DX and could address 4 GB, with all 32 address lines wired out, and could move 32-bit data. Later versions were cut down to fit a smaller package and could only address 24 bits (16MB) with a 16-bit data bus. Both models could theoretically have 64 terabytes of virtual memory, but in practice few operating systems used enough segment selectors to offer that much.
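
        The 64 TB figure falls out of the selector math: 13 index bits each in the GDT and the LDT give 16,384 segments, and each 386 segment can span 4 GB. A quick check in C:

            #include <stdio.h>
            #include <stdint.h>

            int main(void)
            {
                /* 13 index bits per table, two tables (GDT + LDT) per task. */
                uint64_t selectors = 2u * 8192u;
                /* A 386 segment limit can cover the full 4 GB linear space. */
                uint64_t seg_bytes = 1ull << 32;
                uint64_t total     = selectors * seg_bytes;   /* 2^46 bytes */

                printf("%llu selectors x 4 GB = %llu TB virtual\n",
                       (unsigned long long)selectors,
                       (unsigned long long)(total >> 40));
                return 0;   /* prints: 16384 selectors x 4 GB = 64 TB virtual */
            }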

        1. Likewise, the 80286 could address 1GB of virtual memory, thanks to segmentation.
          It also had memory protection based on segmentation.

          Something that “flat mode” had undermined. It took about two decades until technologies like DEP and the NX/XD bit fixed the issue.

          If segmentation had continued to be used back in the 90s, data and program code could always have been kept separate in a clean fashion. Buffer overrun exploits wouldn’t have been possible so easily (see the sketch at the end of this comment).

          Unfortunately, OS/2 1.3 was about the only advanced PC operating system that took full advantage of the x86 ring scheme, segmentation and virtual memory.

          Makes me wonder how powerful it could have been if it had taken advantage of the 80386 feature set, while keeping its original design (vs. OS/2 2.x+). Working with 64TB of virtual memory would have been fascinating, for example.
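
          Here is the classic unsafe C pattern behind that buffer-overrun point, as a deliberately minimal sketch; separate code/data descriptors, or later NX/DEP, are what keep the smashed stack from being executed:

              #include <stdio.h>
              #include <string.h>

              /* Classic stack smash: strcpy() has no bounds check, so a long
               * argument overwrites the saved return address just past buf.
               * In a flat model the CPU will happily execute whatever landed
               * there; NX/XD (or code vs. data segments) forbids it. */
              static void greet(const char *name)
              {
                  char buf[16];
                  strcpy(buf, name);          /* overruns buf past 15 chars */
                  printf("Hello, %s\n", buf);
              }

              int main(int argc, char **argv)
              {
                  greet(argc > 1 ? argv[1] : "world");
                  return 0;
              }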

          1. I toyed with a proof-of-concept kernel using segmented memory. The idea was that most segments are backed by disk, so you don’t need an explicit file system; instead you allocate segments that hold persistent memory-mapped objects. Sadly, the swap-partition-like structure of the disk was something I struggled to implement at the time, and the proof of concept didn’t quite live up to my grand ambitions. I’d try it again, but x86-64 “long mode” did away with most of the functionality of segmented memory. Very large page tables covering some 256 TB (a 48-bit virtual address) are not as interesting to me as a segmented virtual memory model.
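
            The flavor of that idea can be shown as an interface sketch. Every name here is hypothetical rather than any real kernel API: a persistent object is just a named segment, and the file system collapses into a name-to-selector map.

                #include <stddef.h>
                #include <stdint.h>

                /* Hypothetical interface for a segment-backed persistent
                 * object store -- none of these calls exist in any real kernel. */
                typedef uint16_t selector_t;      /* an x86 segment selector */

                /* Create or open a persistent segment by name; the kernel hands
                 * back a descriptor whose pages are lazily loaded from swap. */
                selector_t seg_open(const char *name, size_t initial_len);

                /* Grow or shrink the object; the descriptor limit tracks it. */
                int seg_resize(selector_t seg, size_t new_len);

                /* Make the contents addressable (on real segmented x86 this
                 * amounts to loading the selector into a segment register). */
                void *seg_map(selector_t seg);

                /* Flush dirty pages back to the segment's slot on disk. */
                int seg_sync(selector_t seg);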

  4. “The 8086 was commercially successful, but the paged memory model was stifling.” – we can all be pretty sure that the non-existent paged memory model must have been very stifling – almost as stifling as the segmented model actually used by the 86, 186, and 286 – paged memory came with the 386 ;)
