Musings On A Good Parallel Computer

Until the late 1990s, the concept of a 3D accelerator card was something generally associated with high-end workstations. Video games and kin would run happily on the CPU in one’s desktop system, with later extensions like MMX, 3DNow!, and SSE providing a significant performance boost for games that supported them. As 3D accelerator cards (soon marketed as graphics processing units, or GPUs) became prevalent, they took over almost all SIMD vector tasks, but one thing they’re still not good at is being a general-purpose parallel computer. This really ticked [Raph Levien] off, and it inspired him to write up his grievances.

Although the interaction between CPUs and GPUs has become tighter over the decades, with PCIe in particular being a big improvement over AGP and PCI, GPUs are still terrible at running arbitrary computing tasks, and even PCIe links remain glacial compared to communication within the GPU and CPU dies. With the introduction of asynchronous graphics APIs, this divide became even more pronounced. [Raph]’s proposal is to invert this relationship.
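To put that "glacial" in perspective, a quick back-of-the-envelope comparison helps. The bandwidth figures below are rough, commonly cited peak numbers, not taken from [Raph]'s article:

```python
# Rough comparison: moving a 256 MiB buffer across a PCIe 4.0 x16 link
# versus reading it from a GPU's on-board memory.
# Bandwidth figures are approximate, commonly cited peak numbers.

def transfer_ms(size_bytes: float, bandwidth_gbps: float) -> float:
    """Time in milliseconds to move size_bytes at bandwidth_gbps (GB/s)."""
    return size_bytes / (bandwidth_gbps * 1e9) * 1e3

buffer_bytes = 256 * 1024 * 1024   # 256 MiB working set
pcie4_x16_gbps = 32.0              # ~32 GB/s peak for PCIe 4.0 x16
gpu_vram_gbps = 1000.0             # ~1 TB/s for modern GDDR6X/HBM

t_pcie = transfer_ms(buffer_bytes, pcie4_x16_gbps)
t_vram = transfer_ms(buffer_bytes, gpu_vram_gbps)
print(f"PCIe 4.0 x16: {t_pcie:.2f} ms, on-board VRAM: {t_vram:.3f} ms "
      f"({t_pcie / t_vram:.0f}x slower)")
```

Milliseconds lost to shuffling a single working set over the bus is exactly the kind of overhead that makes round-tripping between CPU and GPU painful for fine-grained parallel work.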

There’s precedent for this already, with Intel’s Larrabee and IBM’s Cell processor merging CPU and GPU characteristics on a single die, though developers struggled with both of these new kinds of architecture. Sony was even forced to add a discrete GPU to the PlayStation 3 due to these issues. There is also the DirectStorage API in DirectX, which bypasses the CPU when loading assets from storage, effectively adding CPU-like features to GPUs.

As [Raph] notes, so-called AI accelerators also have these characteristics, with often multiple SIMD-capable, CPU-like cores. Maybe the future is Cell after all.

The Fastest MS-DOS Gaming PC Ever

After [Andy] discovered an old ISA soundcard at his parents’ place, one that was once inside the family PC, a wave of nostalgia for those old-school sounds drove him off the deep end. This is how we get [Andy] building the fastest MS-DOS gaming system ever, with an ISA slot and full hardware compatibility. After some digging around, the fastest CPU for an Intel platform that still retains ISA compatibility turned out to be Intel’s 4th-generation Core i7-4790K, paired with an H81 chipset-based Mini-ITX mainboard.

Of note is that ISA slots on these newer boards are basically unheard of outside of niche industrial applications, ergo [Andy] had to tap into the LPC (Low Pin Count) debug port and hunt down the LDRQ signal on the mainboard. LPC is a very compact version of the ISA bus that maps well onto ISA adapter boards, such as the LPC-to-ISA adapter used here: [Andy]’s own dISAppointment board.

A PCIe graphics card (an NVIDIA GeForce 7600 GT with 256 MB of VRAM), the ISA soundcard, a dodgy PSU, and a SATA SSD were added to a period-correct case. After this, Windows 98 was installed from a USB stick within a minute using [Eric Voirin]’s Windows 98 Quick Install. This gave access to MS-DOS and enabled the first tests, followed by benchmarking.

Benchmarking MS-DOS on a system this fast turned out to be somewhat messy, with puzzling results. The reason was that the default BIOS settings limited the CPU to non-turbo speeds under MS-DOS. Once fixed, the system turned out to be really quite fast at MS-DOS (and Windows 98) games, to nobody’s surprise.

If you’d like to run MS-DOS on relatively modern hardware with a little less effort, you could always pick up a second-hand ThinkPad and rip through some Descent.

Continue reading “The Fastest MS-DOS Gaming PC Ever”

Biosynthesis Of Polyester Amides In Engineered Escherichia Coli

Polymers are one of the most important elements of modern-day society, particularly in the form of plastics. Unfortunately, most common polymers are derived from fossil resources, which not only makes them a finite resource but is also problematic from a pollution perspective. A potential alternative being researched is that of biopolymers, in particular those produced by microorganisms such as everyone’s favorite bacterium Escherichia coli (E. coli).

These bacteria were the subject of a recent biopolymer study by [Tong Un Chae] et al., as published in Nature Chemical Biology (paywalled; breakdown on Ars Technica).

By genetically engineering E. coli to divert one of its survival energy-storage pathways into synthesizing polyester amides (PEAs), the researchers were able to make the bacteria produce long chains of mostly pure PEA. A complication here is that this modified pathway is not exactly picky about which amino acid monomers it sticks onto the chain next, metabolic byproducts included.

Although using genetically engineered bacteria for the synthesis of products on an industrial scale isn’t uncommon (see e.g. the synthesis of insulin), it would seem that biosynthesis of plastics using our prokaryotic friends isn’t quite ready yet to graduate from laboratory experiments.

Producing Syngas From CO2 And Sunlight With Direct Air Capture

The prototype DACCU device for producing syngas from air. (Credit: Sayan Kar, University of Cambridge)

There is more carbon dioxide (CO2) in the atmosphere these days than ever before in human history, and while it would be marvelous to use these carbon atoms for something more useful, capturing CO2 directly from the air isn’t that easy. Once captured, it would also be great if you could do something more with the CO2 than stuff it into a big hole, like producing syngas (CO + H2), as demonstrated by researchers at the University of Cambridge.

Among the improvements claimed in the paper, as published in Nature Energy, for this direct air capture and utilization (DACCU) approach is that it does not require a pure CO2 feedstock, but adsorbs CO2 directly from air passed over a bed of solid silica-amine. After adsorption, the CO2 can be released again by exposure to concentrated light. The conversion to syngas is then accomplished by passing it over a second bed, consisting of silica/alumina-titania-cobalt bis(terpyridine), which acts as a photocatalyst.

The envisioned usage scenario would be CO2 adsorption during the night, with concentrated solar power releasing it during the day for the subsequent production of syngas. Inlet air would pass only over the adsorption section, with the inlet switched off during the syngas-generating phase. As a lab proof-of-concept it seems to work well, with the outlet air stripped of virtually all CO2 and a very high conversion ratio from CO2 to syngas.
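For a feel of the numbers, every CO2 molecule converted yields one CO molecule, so a simple mole balance gives the CO output for a given amount of captured CO2. A minimal sketch, where the captured mass and conversion fraction are made-up illustration values rather than figures from the paper:

```python
# Mole balance for CO2 -> CO conversion: each converted CO2 yields one CO.
# The captured mass and conversion fraction are illustrative values only.

M_CO2 = 44.01   # g/mol, molar mass of CO2
M_CO  = 28.01   # g/mol, molar mass of CO

captured_g = 10.0    # grams of CO2 captured overnight (made-up)
conversion = 0.90    # fraction converted to CO (made-up)

mol_co2 = captured_g / M_CO2
mol_co = mol_co2 * conversion
print(f"{captured_g} g CO2 -> {mol_co:.3f} mol CO ({mol_co * M_CO:.2f} g)")
```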

Syngas has historically been used as a replacement for gasoline, but it is also a source of hydrogen (as with steam methane reforming (SMR) of natural gas), which in turn is used for the reduction of iron ore and for the production of methanol, a precursor to many industrial processes. Whether this DACCU approach provides a viable alternative to SMR and other existing technologies will become clear once the technology moves from the lab into the real world.

Thanks to [Dan] for the tip.

So What Is A Supercomputer Anyway?

Over the decades, many terms have been coined to classify computer systems, usually when they found use in new fields or when technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they would soon morph into something we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.

The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5-ton systems mostly found their way to universities and kin, where they’d find welcome use in engineering, architecture, and scientific calculations. This became the focus of new computer systems, effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.

A few decades later, more computing power could be crammed into less space than ever before, including ever-higher-density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper-, and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?

Continue reading “So What Is A Supercomputer Anyway?”

The Capacitor Plague Of The Early 2000s

Somewhere between 1999 and 2007, a plague swept through the world, devastating lives and businesses. Identified by a scourge of electrolytic capacitors violently exploding or spewing their liquid electrolyte guts all over the PCB, it led to a lot of finger-pointing and accusations of stolen electrolyte formulas. A recent video by [Asianometry] summarizes this story.

Blown electrolytic capacitors. (Credit: Jens Both, Wikimedia)

The bad electrolyte in the faulty capacitors lacked a suitable depolarizer, which resulted in more gas being produced, leading to a build-up of pressure and the capacitor eventually failing, rather benignly if the scored top worked as a vent, or violently if not.

Other critical elements in the electrolyte are passivators, which protect the aluminium against the electrolyte’s corrosive effects. Although the plague is often blamed on a single employee stealing an (incomplete) Rubycon electrolyte formula, the video questions this narrative, as the problem was far too widespread.

More likely, it coincided with the introduction of low-ESR electrolytic capacitors just as computers were becoming increasingly power-hungry, stressing the capacitors in a much warmer environment than in the early 1990s. Combine this with the presence of counterfeit capacitors in the market, and the truth of what caused the Capacitor Plague probably involves a bit from each column, a narrative that seems to be the general consensus.
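The thermal argument is easy to put numbers on: ripple current dissipates heat inside the capacitor as P = I²·ESR, and a common rule of thumb says electrolytic capacitor lifetime halves for every 10 °C above (or doubles for every 10 °C below) its rated temperature. A rough sketch, with illustrative component values not taken from the video:

```python
# Self-heating and rule-of-thumb lifetime of an electrolytic capacitor.
# All component values below are illustrative.

def dissipation_w(i_rms_a: float, esr_ohm: float) -> float:
    """Heat generated inside the capacitor by ripple current: P = I^2 * ESR."""
    return i_rms_a ** 2 * esr_ohm

def estimated_life_h(rated_life_h: float, rated_temp_c: float,
                     actual_temp_c: float) -> float:
    """Rule of thumb: life doubles for every 10 degC below the rated temperature."""
    return rated_life_h * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# 2 A of ripple through 50 mOhm of ESR already makes 0.2 W of heat:
print(f"{dissipation_w(2.0, 0.05):.2f} W dissipated")

# The same 2000 h @ 105 degC rated part, at two different board temperatures:
print(f"{estimated_life_h(2000, 105, 65):.0f} h at 65 degC, "
      f"{estimated_life_h(2000, 105, 85):.0f} h at 85 degC")
```

A capacitor sitting next to a hot 2000s-era CPU voltage regulator instead of in a cool early-90s case can thus lose most of its expected service life to temperature alone.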

Continue reading “The Capacitor Plague Of The Early 2000s”

Checking In On The ISA Wars And Its Impact On CPU Architectures

An Instruction Set Architecture (ISA) defines the software interface through which, for example, a central processing unit (CPU) is controlled. Unlike early computer systems, which didn’t define a standard ISA as such, over time the compatibility and portability benefits of having a standard ISA became obvious. But of course, the best thing about standards is that there are so many to choose from, and thus every CPU manufacturer came up with their own.

Throughout the 1980s and 1990s, the number of mainstream ISAs dropped sharply as the computer industry coalesced around a few major ones for each type of application. Intel’s x86 won out on desktops and smaller servers, ARM proclaimed victory in low-power and portable devices, and for Big Iron you always had IBM’s Power ISA. Since we last covered the ISA Wars in 2019, quite a lot has changed, including Apple shifting its desktop systems from x86 to ARM with Apple Silicon, and MIPS experiencing an afterlife in the form of LoongArch.

Meanwhile, six years after the aforementioned ISA Wars article, in which newcomer RISC-V was covered, that ISA seems not to have made the splash some had expected. This raises questions about what we can expect from RISC-V and other ISAs in the future, as well as how relevant different ISAs are when it comes to aspects like CPU performance and microarchitecture.
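As a concrete illustration of what an ISA actually pins down, here is a minimal decoder for one RISC-V instruction format: the I-type encoding used by `addi`, with the field layout as given in the public RV32I specification:

```python
# Decode the fields of a RISC-V I-type instruction (e.g. addi rd, rs1, imm).
# RV32I I-type layout: imm[31:20] rs1[19:15] funct3[14:12] rd[11:7] opcode[6:0]

def decode_itype(word: int) -> dict:
    imm = word >> 20
    if imm & 0x800:            # sign-extend the 12-bit immediate
        imm -= 0x1000
    return {
        "opcode": word & 0x7F,
        "rd":     (word >> 7) & 0x1F,
        "funct3": (word >> 12) & 0x7,
        "rs1":    (word >> 15) & 0x1F,
        "imm":    imm,
    }

# 0x00500093 encodes "addi x1, x0, 5": opcode 0x13 (OP-IMM), funct3 0 (ADDI)
fields = decode_itype(0x00500093)
print(fields)
```

An ISA is exactly this kind of contract, multiplied across hundreds of instructions: any CPU that honors it can run the same binaries, regardless of how wildly the microarchitectures underneath differ.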

Continue reading “Checking In On The ISA Wars And Its Impact On CPU Architectures”