The Intel 8088 And 8086 Processors’ Instruction Prefetch Circuitry

The 8088 die under a microscope, with main functional blocks labeled. This photo shows the chip’s single metal layer; the polysilicon and silicon are underneath. (Credit: Ken Shirriff)

Cache prefetching is what allows processors to have data and/or instructions ready for use in a fast local cache, rather than having to wait for a fetch request to trickle through to system RAM and back again. The Intel 8088 (and its big brother, the 8086) was among the first microprocessors to implement instruction prefetching in hardware, which [Ken Shirriff] has analyzed based on die images. This follows last year’s deep-dive into the 8086’s prefetching hardware, with (unsurprisingly) many similarities between the two microprocessors, as well as a few differences that are mostly due to the 8088’s cut-down 8-bit data bus.

While the 8086 has three 16-bit slots in its instruction prefetcher, the 8088 gets four slots, each 8 bits wide. The prefetching hardware is part of the Bus Interface Unit (BIU), which effectively decouples the actual processor (the Execution Unit, or EU) from the system RAM. Previous MPUs were fully deterministic, with instructions being loaded from RAM and subsequently executed, but the 8086 and 8088’s prefetching meant that such assumptions no longer held true. The added features in the BIU also meant that the instruction pointer (IP) and related registers moved to the BIU, while the ring-buffer logic around the queue had to keep the queue slots and the pointer offsets into RAM consistent.
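
To get a feel for the bookkeeping involved, here is a minimal C sketch of a four-byte prefetch queue managed as a ring buffer, including the flush that a jump forces. The names and the explicit count field are inventions for this sketch; the real hardware implements the queue pointers in dedicated circuitry on the die.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the 8088's four-byte prefetch queue as a
 * ring buffer. Field names are invented for this sketch. */
typedef struct {
    uint8_t  slot[4];  /* four 8-bit slots (the 8086 has three 16-bit ones) */
    unsigned read;     /* next byte the Execution Unit will consume */
    unsigned write;    /* next slot the Bus Interface Unit will fill */
    unsigned count;    /* bytes currently queued */
} prefetch_queue;

/* BIU side: fetch ahead whenever there is room. */
static bool queue_push(prefetch_queue *q, uint8_t byte_from_bus) {
    if (q->count == 4)
        return false;              /* queue full: the BIU idles */
    q->slot[q->write] = byte_from_bus;
    q->write = (q->write + 1) & 3; /* wrap around: the ring buffer */
    q->count++;
    return true;
}

/* EU side: consume the next instruction byte, stalling when empty. */
static bool queue_pop(prefetch_queue *q, uint8_t *out) {
    if (q->count == 0)
        return false;              /* queue empty: the EU must wait */
    *out = q->slot[q->read];
    q->read = (q->read + 1) & 3;
    q->count--;
    return true;
}

/* A jump invalidates everything fetched ahead, so the queue is flushed. */
static void queue_flush(prefetch_queue *q) {
    q->read = q->write = q->count = 0;
}
```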

Even though these days CPUs have much more complicated, multi-level caches measured in kilobytes and megabytes, it’s fascinating to see where it all began: just a few bytes and relatively straightforward hardware logic that you can easily follow under a microscope.

Breaking Through The 1 MB Barrier In DOS With Unreal Mode And More

The memory map of the original 8086 computer, with its base and extended memory, made the original PC rather straightforward, but it also posed countless issues for DOS-based applications as they tried to make use of memory beyond the legacy 1 MB address space. The initial ways to deal with this, like EMS, XMS, and UMBs, were rather cumbersome and often impractical, but with the arrival of the 80286 and 80386 processors more options opened up, including protected mode. More interestingly, this led to unreal mode, DOS extenders, and the somewhat more obscure LOADALL instruction, as covered by [Julio Merino] in a new article.
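
As a quick refresher on where that 1 MB limit comes from: real mode forms a 20-bit physical address from a 16-bit segment and a 16-bit offset. A tiny, purely illustrative C sketch:

```c
#include <stdint.h>
#include <stdio.h>

/* Real-mode address arithmetic: segment * 16 + offset yields a
 * 20-bit physical address, hence the 1 MB limit. */
static uint32_t phys(uint16_t seg, uint16_t off) {
    return ((uint32_t)seg << 4) + off;
}

int main(void) {
    printf("F000:FFFF -> %05X\n", phys(0xF000, 0xFFFF)); /* FFFFF, top of 1 MB */
    /* FFFF:FFFF reaches 10FFEF, just past 1 MB -- the sliver that became
     * the HMA once the A20 address line could be enabled. */
    printf("FFFF:FFFF -> %05X\n", phys(0xFFFF, 0xFFFF));
    return 0;
}
```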

This article builds on an earlier one covering the older methods and the basics of protected mode. Where protected mode improves on real mode is that memory accesses go via the MMU, allowing access to 16 MB on the 80286 and 4 GB on the 80386. The segment descriptors that make this possible can be (ab)used on the 80286 and up, since the processor’s cached segment descriptors are also consulted in real mode. The trick is thus to switch to protected mode, load arbitrary segment descriptors, and switch back to real mode. As this is outside the original processor spec, it is commonly called ‘unreal mode’.
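
To make the descriptor part concrete, here is a hedged C sketch (the helper name and constants are our own, not from the article) that packs an 80386-style segment descriptor. A flat descriptor like the one built below, with base 0 and limit 0xFFFFF with the granularity bit set for a 4 GB span, is the kind of thing unreal mode leaves cached in a segment register before dropping back to real mode:

```c
#include <stdint.h>
#include <stdio.h>

/* Pack an 80386 segment descriptor from its fields. Unreal mode loads a
 * data segment register with a descriptor like this while in protected
 * mode, then returns to real mode; the CPU keeps the cached 4 GB limit. */
static uint64_t make_descriptor(uint32_t base, uint32_t limit,
                                uint8_t access, uint8_t flags) {
    uint64_t d;
    d  = (uint64_t)(limit & 0xFFFF);              /* limit bits 15:0    */
    d |= (uint64_t)(base & 0xFFFFFF)     << 16;   /* base bits 23:0     */
    d |= (uint64_t)access                << 40;   /* type, DPL, present */
    d |= (uint64_t)((limit >> 16) & 0xF) << 48;   /* limit bits 19:16   */
    d |= (uint64_t)(flags & 0xF)         << 52;   /* G and D/B bits     */
    d |= (uint64_t)((base >> 24) & 0xFF) << 56;   /* base bits 31:24    */
    return d;
}

int main(void) {
    /* 0x92 = present, ring 0, writable data segment; flags 0x8 sets the
     * granularity bit, so limit 0xFFFFF counts 4 KB pages: 4 GB - 1. */
    uint64_t flat = make_descriptor(0x00000000, 0xFFFFF, 0x92, 0x8);
    printf("flat data descriptor: %016llx\n", (unsigned long long)flat);
    return 0;
}
```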

Continue reading “Breaking Through The 1 MB Barrier In DOS With Unreal Mode And More”

NEC V20 - Konstantin Lanzet, CC BY-SA 3.0 via Wikimedia Commons

Intel V. NEC: The Case Of The V20’s Microcode

Back in the last century, Intel found itself in need of ‘second source’ suppliers for its 8088 and 8086 processors, and NEC was roped in as one of those alternative suppliers, keeping Intel’s customers happy with its μPD8086 and μPD8088 offerings. Yet rather than using the Intel-provided design files, NEC reverse-engineered the Intel CPUs, which led to Intel suing NEC over copying the microcode that forms an integral part of the x86 architecture. In a recent The Chip Letter entry, [Babbage] covers this case in detail.

Although this lawsuit was cleared up, with NEC licensing the microcode from Intel, it didn’t stop NEC from creating its own 8086- and 8088-compatible CPUs in the form of the V30 and V20, respectively. Although these were pin- and ISA-compatible, their internal microcode was distinct from Intel’s due to the different internal microarchitecture. The V20 and V30 also had a special 8080 mode that provided partial compatibility with Z80 software.

Long story short, Intel accused NEC of copyright infringement of this microcode as well, leading to years of legal battle that both set many precedents about what is copyrightable in microcode and ultimately cleared NEC to keep selling the V20 and V30. Unfortunately, by then the 1990s had arrived; sales of the NEC chips had not been brisk due to the legal issues, while Intel’s new 80386 CPU had taken the market by storm. This left the legacy of NEC’s x86-compatible CPUs mostly in the form of legal precedents rather than the technological achievements the company had hoped for, and set the tone for the computer market of the 1990s.

Thanks to [Stephen Walters] for the tip.

Teardown Of An Aircraft Video Symbol Generator

[Adrian Smith] recently scored an avionics module taken from a British Aerospace 146 airliner and ripped it open for our viewing pleasure. This particular aircraft was designed in the early 1980s, when the electronics used to feed the various displays in the cockpit were very different from modern designs. The box in question is called a ‘symbol generator’ and is used to generate the real-time video feeds sent to the cockpit display units. Various instruments, the weather radar for example, feed into it, and it then reformats the video if needed, mixing in any additional display elements required.

Top view of the symbol generator instrument rack

There are many gold-plated chips on these boards, which indicates these may be radiation-hardened versions of familiar devices, most of them 54xx-series logic. The 54xx series is functionally the same as the corresponding 74xx series, except for the much wider operating temperature range mandated by military and, by extension, commercial aviation needs. The main CPU board appears to be based around the Intel 8086, with some Zilog Z180-compatible processors used on the two video display controller boards. We noted the Zilog Z0853604, which is their counter/timer/GPIO chip. Obviously, there are also many custom ASICs produced by Honeywell, as well as other special-order items that you’ll never find a datasheet for. Now there’s a challenge!

Finally, we note the 400 Hz power supply; 400 Hz is, as some may know, the standard operating frequency for the AC power systems used aboard modern aircraft. The higher frequency (compared to 50 or 60 Hz) means the magnetic components can be physically smaller and, therefore, lighter for a given power handling capability.
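
The scaling is easy to sanity-check with the standard transformer EMF equation, V_rms = 4.44 · f · N · A · B_max: for a fixed voltage, turns count, and flux density, the required core cross-section goes as 1/f. A back-of-the-envelope calculation in C, with purely illustrative numbers:

```c
#include <stdio.h>

/* Transformer EMF equation: V_rms = 4.44 * f * N * A * B_max.
 * Solve for the core cross-section A (in square meters). */
static double core_area(double v_rms, double f, double turns, double b_max) {
    return v_rms / (4.44 * f * turns * b_max);
}

int main(void) {
    double v = 115.0, n = 200.0, b = 1.2; /* 115 V, 200 turns, 1.2 T */
    double a60  = core_area(v, 60.0,  n, b);
    double a400 = core_area(v, 400.0, n, b);
    printf("core area at  60 Hz: %.1f cm^2\n", a60  * 1e4);
    printf("core area at 400 Hz: %.1f cm^2\n", a400 * 1e4);
    printf("400 Hz core is %.1fx smaller\n", a60 / a400);
    return 0;
}
```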

We see a lot of avionics teardowns, likely because they’re fascinating. Here’s some more British military gear, an interesting RF distance measuring box from the 1970s, and finally, some brave soul building their own avionics gear. What could possibly go wrong?

Continue reading “Teardown Of An Aircraft Video Symbol Generator”

8086 Multiply Algorithm Gets Reverse Engineered

The 8086 has been around since 1978, so it’s pretty well understood. As the namesake of the prevalent x86 architecture, it’s often studied by those looking to learn more about microprocessors in general. To this end, [Ken Shirriff] set about reverse engineering the 8086’s multiplication algorithm.

[Ken] worked from die photos of the 8086 chip. Taken under a microscope, they can be used to map out the various functional blocks of the microprocessor. The multiplication algorithm can be nutted out by looking at the arithmetic/logic unit, or ALU. However, it’s also important to understand the role that microcode plays. Even as far back as 1978, designers were using microcode to simplify the control logic used in microprocessors.

[Ken] breaks down his investigation into manageable chunks, exploring how the chip achieves both 8-bit and 16-bit multiplication in detail. He covers how the numbers make their way through various instructions and registers to come out with the right result in the end.
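
For a flavor of what that microcode loop is doing, here is a minimal C model of the classic shift-and-add technique behind an unsigned 8-bit MUL. This is a sketch of the general algorithm, not a cycle-accurate rendition of [Ken]’s findings, and the variable names are ours rather than the chip’s internal register names:

```c
#include <stdint.h>
#include <stdio.h>

/* Shift-and-add multiply: each of eight iterations inspects one
 * multiplier bit, conditionally adds the multiplicand into the high
 * half, then shifts the combined result right by one. */
static uint16_t mul8(uint8_t multiplicand, uint8_t multiplier) {
    uint8_t hi = 0;           /* running sum; ends up as the high byte */
    uint8_t lo = multiplier;  /* multiplier, gradually replaced by result bits */
    for (int i = 0; i < 8; i++) {
        uint16_t sum = hi;
        if (lo & 1)                 /* test the low multiplier bit */
            sum += multiplicand;    /* conditional add */
        /* shift the {carry, hi, lo} aggregate right by one bit */
        lo = (uint8_t)((lo >> 1) | ((sum & 1) << 7));
        hi = (uint8_t)(sum >> 1);
    }
    return (uint16_t)((hi << 8) | lo);  /* 16-bit product, a la AX = AH:AL */
}

int main(void) {
    printf("13 * 11 = %u\n", mul8(13, 11)); /* prints 143 */
    return 0;
}
```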

It’s a fun look at what’s going on at the ground level in a chip that’s been around since before the personal computer revolution. For any budding chip designers, it’s a great academic exercise to follow along at home. If you’ve been doing your own digging deep into CPU architectures, don’t hesitate to drop us a line!

Reverse-Engineering The Conditional Jump Circuitry In The 8086 Processor

The condition PLA evaluates microcode conditionals.

As simple as a processor’s instruction set may seem, especially in a 1978-era one like the Intel 8086, quite a bit goes on between something like a conditional jump instruction and the set of operations the processor actually performs. For the CISC 8086 CPU this is detailed in a recent article by [Ken Shirriff], which covers exactly how instructions and their parameters are broken down into micro-instructions using microcode, allowing the appropriate registers and flags to be updated.

Where the 8086 is interesting compared to modern x86 CPUs is how its microcode is implemented, using gate logic to reduce the microcode’s complexity through, for example, generic condition testing when processing a jump instruction. Considering the limitations of 1970s VLSI manufacturing, this was very much a necessary step, and an acceptable trade-off.
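
One concrete example of that generic testing: the sixteen conditional jump opcodes (0x70 through 0x7F) need only eight base conditions, selected by opcode bits 3:1, because the opcode’s low bit simply inverts the result, halving the condition logic. A C sketch of the idea, in our own rendering rather than the PLA’s actual equations:

```c
#include <stdbool.h>
#include <stdint.h>

/* 8086 FLAGS bit positions, per Intel's documentation. */
enum { CF = 1 << 0, PF = 1 << 2, ZF = 1 << 6, SF = 1 << 7, OF = 1 << 11 };

/* Decide whether a conditional jump (opcodes 0x70-0x7F) is taken:
 * bits 3:1 select one of eight base conditions, bit 0 inverts it. */
static bool jcc_taken(uint8_t opcode, uint16_t flags) {
    bool c = flags & CF, p = flags & PF, z = flags & ZF;
    bool s = flags & SF, o = flags & OF;
    bool cond;
    switch ((opcode >> 1) & 7) {
        case 0:  cond = o;             break; /* JO  / JNO  */
        case 1:  cond = c;             break; /* JB  / JNB  */
        case 2:  cond = z;             break; /* JE  / JNE  */
        case 3:  cond = c || z;        break; /* JBE / JNBE */
        case 4:  cond = s;             break; /* JS  / JNS  */
        case 5:  cond = p;             break; /* JP  / JNP  */
        case 6:  cond = s != o;        break; /* JL  / JNL  */
        default: cond = (s != o) || z; break; /* JLE / JNLE */
    }
    return (opcode & 1) ? !cond : cond; /* low bit flips the sense */
}
```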

Each jump instruction is broken down into a number of micro-instructions that test a range of flags and update (temporary) registers as well as the program counter as needed. All in all, it’s a fascinating look at the efforts put in by Intel engineers over forty years ago on what would become one of the cornerstones of modern-day computing.