How To Restore Your 19th-Century Lancashire Boiler To Hold 120 PSI

The Industrial Revolution was powered by steam, with boilers being a crucial part of each steam engine, yet also one of the most dangerous elements due to the high pressures involved. The five Lancashire boilers at the Claymills Pumping Station are relatively benign in this regard, as they operate at a mere 80 PSI, unlike e.g. high-pressure steam locomotives that can push 200 to 300 PSI. That doesn’t mean refurbishing one of these boilers is an easy task, or that it doesn’t involve plugging a lot of leaks, as the volunteers at this pumping station found out.

At this Victorian-era pumping station there are a total of five of these twin-flue Lancashire boilers, all about 90 years old after a 1930s-era replacement, and all gradually being brought back into service. The subject of the video is boiler 1, which was last used in 1971 before the pumping station was decommissioned. Boilers 2 and 3 were known to be in pretty bad condition, and a replacement was needed for boiler 5, which was due to go down for maintenance soon.

Although the basic idea behind a Lancashire boiler is still to boil water to create steam, it’s engineered to do this as efficiently as possible to save fuel. This is why it has two flues where the burning coal deposits its thermal energy, which then goes on to heat the surrounding water. The resulting pressure from the steam also means that there are a lot of safeties to ensure that things do not get too spicy.

Continue reading “How To Restore Your 19th-Century Lancashire Boiler To Hold 120 PSI”

Testing The Pressure Limits For Glass In Water Cooling Blocks

Many people who use water cooling in their computer systems like to go full-bore with the ‘aquarium’ aesthetic, which includes adding a window to their cooling blocks so that they can watch the coolant flowing through the block from behind the case’s window. Traditionally PMMA acrylic is used for these windows, as it’s quite durable and easy to handle.

Using glass offers some advantages over acrylic, but it has its own disadvantages: above all it’s hard to process, and it’s known for shattering quite easily if pushed beyond its limits.

This is why [der8auer], as a manufacturer of such water blocks, has now spent a few years investigating the viability of using glass for this purpose. First and foremost is safety, with an early prototype glass water block suddenly shattering without clear cause.

Although normally the water cooling loop is only expected to experience pressures of about 600 mbar, the new glass windows that are now entering mass-production had to be tested to their breaking point. This involves pumping water into a few test blocks until they fail, using the test rig that you can see above.

First the big GPU water block was tested, with the acrylic version breaking at around 8-9 bar, while the glass plate shattered at around 5 bar. The failure mode was also interesting, with the glass plate shattering into fragments, while the two acrylic plates tested failed in a completely different location and manner.

A smaller water block with a glass window failed at about 10 bar, mostly demonstrating that smaller glass windows are a lot sturdier. Effectively, glass windows in water cooling loops are viable, and they also do not suffer from e.g. discoloration, but you do give up a big chunk of your safety margin if your water cooling loop suffers a major pressurization event. That of course should never happen, but we’re definitely looking forward to the upcoming field trials of these new water blocks.
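
To put those numbers in perspective, here is a quick back-of-the-envelope sketch (using the rounded failure pressures quoted above) of how much headroom each window retains over the roughly 600 mbar a loop normally sees:

```python
# Rough safety-factor estimate: the loop normally sees about 0.6 bar,
# while the tested windows failed at the pressures listed below.
NORMAL_PRESSURE_BAR = 0.6

failure_pressure_bar = {
    "large glass window": 5.0,
    "large acrylic window": 8.5,   # midpoint of the 8-9 bar range
    "small glass window": 10.0,
}

for window, p_fail in failure_pressure_bar.items():
    margin = p_fail / NORMAL_PRESSURE_BAR
    print(f"{window}: fails at {p_fail} bar, roughly {margin:.0f}x normal pressure")
```

Even the weakest result, the large glass window, still sits at roughly eight times the expected operating pressure, which is why the conclusion is "viable, but with less margin" rather than "unsafe".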

Continue reading “Testing The Pressure Limits For Glass In Water Cooling Blocks”

Inside A Compact Intel 3000 W Water-Cooled Power Supply

Recently [ElecrArc240] got his paws on an Intel-branded 3 kW power supply that apparently had been designed as a reference PSU for servers. At 3 kW in such a compact package, air cooling would be rather challenging, so it has a big water block sandwiched between the two beefy PCBs. In the full teardown and analysis video of the PSU we can see the many design decisions made to optimize efficiency and minimize losses to hit its 80 Plus Platinum rating.

For the power input you’d obviously need to provide it with 240 VAC at sufficient amperage, which gets converted into 12 VDC at a maximum of 250 A. This also highlights why 48 VDC is becoming more common in server applications, as the same amount of power would take only 62.5 A at that higher voltage.
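
The arithmetic behind that is simple Ohm’s-law territory, and a quick sketch also shows why the lower current matters so much: conduction loss in cables and connectors scales with the square of the current.

```python
# Current needed to deliver 3 kW at different bus voltages: I = P / V.
POWER_W = 3000.0

currents = {v: POWER_W / v for v in (12.0, 48.0)}
for bus_voltage, current in currents.items():
    print(f"{bus_voltage:>4.0f} V bus -> {current:.1f} A")

# For the same conductor resistance R, loss is I^2 * R, so a 12 V bus
# dissipates (250 / 62.5)^2 = 16 times the conduction loss of a 48 V bus.
loss_ratio = (currents[12.0] / currents[48.0]) ** 2
print(f"conduction loss ratio (12 V vs 48 V): {loss_ratio:.0f}x")
```

That factor of sixteen in wasted heat per unit of wiring resistance is the real argument for 48 VDC distribution, not just the thinner cables.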

The reverse-engineered schematic shows it using an interleaved totem-pole PFC design with 600 V-rated TI LMG3422 GaN FETs in the power stages. After the PFC section we find a phase-shifted full-bridge converter with OnSemi’s UF3C065030K4S SiC power N-channel JFETs.

There were a few oddities in the design, such as the Kelvin source of the SiC JFET being tied to the power source, which renders that feature useless. Sadly the performance of the PSU was not characterized before it was torn apart, which might have provided some clues here.

Continue reading “Inside A Compact Intel 3000 W Water-Cooled Power Supply”

Why Chains Are Still Better For Bicycles Than Belts

Theoretically a belt drive makes for a great upgrade to a bicycle, as it replaces the heavier, noisy and relatively maintenance-heavy roller chain with a zero-maintenance, whisper-quiet and extremely reliable belt that’s rated for an amazing 20,000 to 30,000 km before needing replacement. Of course, that’s the glossy marketing brochure version of reality, which differed significantly from what [Tristan Ridley] experienced whilst cycling around the globe.

Initially he was rather happy with his bike, with its sealed, car-like Pinion gearbox and Gates carbon belt drive system, but out in the wilds of Utah he had a breakdown when the belt snapped. When the spare belt that he had carried with him for months also snapped minutes after he fitted it, he decided to switch back to the traditional bush roller chain.

Despite this type of chain drive tracing its roots all the way back to Leonardo da Vinci, it actually offers many advantages over the fancy carbon-fiber-reinforced polyurethane belt. Although with the Pinion gearbox the inability to use a derailleur gearing system is no big deal, [Tristan] found that the ‘zero maintenance’ part of the belt was not true on less hospitable roads.

Continue reading “Why Chains Are Still Better For Bicycles Than Belts”

How The Intel 8087 FPU Knows Which Instructions To Execute

An interesting detail about the Intel 8087 floating point unit (FPU) is that it’s a co-processor that shares a bus with the 8086 or 8088 CPU and system memory, which means that somehow both the CPU and FPU need to know which instructions are intended for the FPU. Key to this are eight so-called ESCAPE opcodes that are assigned to the co-processor, as explained in a recent article by [Ken Shirriff].
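
Those escape opcodes are easy to spot in the instruction stream: they occupy 0xD8 through 0xDF, i.e. every opcode byte whose top five bits are 11011. A minimal sketch of the check that both CPU and FPU effectively perform on each opcode byte:

```python
# The x86 ESC opcodes reserved for the co-processor are 0xD8-0xDF:
# any opcode byte whose top five bits are 11011.
def is_escape_opcode(byte: int) -> bool:
    """True for the eight ESC opcodes handed off to the 8087."""
    return (byte & 0b11111000) == 0b11011000  # mask off the low three bits

escape_opcodes = [b for b in range(256) if is_escape_opcode(b)]
print([hex(b) for b in escape_opcodes])  # eight opcodes, 0xd8 through 0xdf
```

The three masked-off low bits, together with the following ModR/M byte, are what actually select the floating point operation.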

The 8087 thus watches the bus for these opcodes, but since it doesn’t have access to the CPU’s registers, sharing data has to occur via system memory. The address for this is calculated by the CPU, which performs a read from it; the FPU captures this address and stores it for later use in its BIU (bus interface unit). From there the instruction can be fully decoded and executed.

This decoding is mostly done by the microcode engine, with complex instructions like cosine featuring circuitry that sprawls all over the IC. Explained in the article is how the microcode engine even knows how to begin this decoding process, considering the complexity of these instructions. The biggest limitation at the time was that even a 2 kB ROM was already quite large, which resulted in the 8087 using only 22 microcode entry points, with a combination of logic gates and PLAs mapping the many instructions onto the ROM.

Only some instructions are directly implemented in hardware at the bus interface unit (BIU), which means that a lot depends on this microcode engine and the ROM for things to work halfway efficiently. The need to solve problems such as fetching constants resulted in a similarly complex-but-transistor-saving approach for such cases.

Even if the 8087 architecture is convoluted and the ISA not well-regarded today, you absolutely have to respect the sheer engineering skills and out-of-the-box thinking of the 8087 project’s engineers.

Retrotechtacular: Bleeding-Edge Memory Devices Of 1959

Although digital computers are – much like their human computer counterparts – about performing calculations, another crucial element is that of memory. After all, you need to fetch values from somewhere and store them somewhere afterwards. Sometimes values need to be stored for long periods of time, making memory one of the most important elements of a computer, yet also one of the most difficult ones. Back in the 1950s, storage options were especially limited, as a 1959 Bell Labs film reel digitized by [Connections Museum] shows while running through the bleeding edge of 1950s storage technology.

After running through the basics of binary representation and the difference between sequential and random access methods, we first take a look at punch cards, which can be read at a blistering 200 cards per minute, before moving on to punched tape, which comes in a variety of shapes to fit different applications.

Electromechanical storage in the form of relays is popular in e.g. telephone exchanges, as relays are very fast. These use a two-out-of-five code to represent each digit of a phone number in a corresponding pack of five relays, allowing the crossbar switch to be properly configured.
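
The two-out-of-five code is compact and self-checking: exactly two of the five relays are energized per digit, so any single stuck or dropped relay produces an invalid pattern. A sketch of that encoding, assuming the classic 0-1-2-4-7 weight assignment from telephony (the film itself doesn’t spell out the weights):

```python
from itertools import combinations

# Two-out-of-five code: each decimal digit energizes exactly two of five
# relays weighted 0, 1, 2, 4, 7. The digit is the sum of the two active
# weights, except '0', which is encoded as 4 + 7 (sum 11).
WEIGHTS = (0, 1, 2, 4, 7)

def encode_digit(d: int) -> tuple[int, int]:
    target = 11 if d == 0 else d
    for pair in combinations(WEIGHTS, 2):
        if sum(pair) == target:
            return pair
    raise ValueError(f"digit {d} is not encodable")

def decode_relays(pair: tuple[int, int]) -> int:
    if len(set(pair)) != 2:  # exactly two distinct relays must be energized
        raise ValueError("invalid pattern: not exactly two relays active")
    total = sum(pair)
    return 0 if total == 11 else total

for d in range(10):
    assert decode_relays(encode_digit(d)) == d  # every digit round-trips
```

Since the ten valid pair-sums are all distinct, the decoder never has to disambiguate, which is exactly what made the scheme attractive for relay logic.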

Continue reading “Retrotechtacular: Bleeding-Edge Memory Devices Of 1959”

Porting Super Mario 64 To The Original Nintendo DS

Considering that the Nintendo DS already has its own remake of Super Mario 64, one might be tempted to think that porting the original Nintendo 64 version would be a snap. Why you’d want to do this is left as an exercise to the reader, but whether due to nostalgia or out of sheer spite, the question of how easy this would be remains. Correspondingly, [Tobi] figured that he’d give it a shot, with interesting results.

Of note is that someone else already ported SM64 to the DSi, which is a later version of the DS with more processing power, more RAM and other changes. The DSi’s 16 MB of RAM is required there because that port loads the entire game into RAM, rather than doing on-demand reads from the cartridge. On-demand reads from the cartridge are how the N64 itself made do with just 4 MB of RAM, which happens to be exactly as much RAM as the original DS has. Ergo, it can be made to work.

The key here is NitroFS, which allows you to implement a similar kind of segmented loading to what the N64 uses. Using this, [Hydr8gon]’s DSi port could be taken as the basis, with the game data crammed into NitroFS, enabling the game to mostly run smoothly on the original DS.

There are still some ongoing issues to resolve before the project is released, mostly related to sound support and general stability. If you have a flash cartridge for the DS, this means that soon you too should be able to play the original SM64 on real hardware, as though it’s a quaint portable N64.

Continue reading “Porting Super Mario 64 To The Original Nintendo DS”