An 80386 Upgrade Deal And Intel 486 Competitor: The Cyrix Cx486DLC

The x86 CPU landscape of the 1980s and 1990s was competitive in a way that probably seems rather alien to anyone used to the AMD and Intel duopoly that exists today. At one point in time, Cyrix was a major player that mostly sought to provide a good deal by undercutting Intel. One such attempt was the Cx486DLC and the related Tx486DLC by Texas Instruments. These are interesting parts because they fit in a standard 386DX mainboard, are faster than a 386 CPU, and add i486 instructions. Check compatibility first, though, as these parts only work in mainboards that support them.

This is something that [Bits und Bolts] over at YouTube discovered as well when poking at a TX486DLC (TI486DLC) CPU. The TI version of the Cyrix Cx486DLC increases the 1 kB L1 cache to 8 kB but is otherwise essentially the same chip. He found the CPU and the mainboard in the trash and decided to adopt them. After removing the very dead battery from the Jamicon KMC-40A Baby AT mainboard, it turned out to be in good working order, and the system fired right up with the TI CPU, some RAM, and a video card installed.

Continue reading “An 80386 Upgrade Deal And Intel 486 Competitor: The Cyrix Cx486DLC”

Pong In A Petri Dish: Teasing Out How Brains Work

Experimental setup for the EAP hydrogel free energy principle test. (Credit: Vincent Strong et al., Cell, 2024)

Of the many big, unanswered questions in this Universe, those pertaining to the functioning of biological neural networks are probably among the most intriguing. From the lowliest neurally gifted creatures to us brainy mammals, neural networks allow us to learn, to predict and adapt to our environments, and sometimes even to stop and puzzle over how all of this works. Such puzzling has led to a number of theories, and a team of researchers recently put one of them to the test, as published in Cell. The focus is on Bayesian approaches to brain function, specifically the free energy principle, which postulates that neural networks, acting as inference engines, seek to minimize the difference between their inputs (i.e. the world as perceived) and their internal model of that world.
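For the mathematically inclined, the free energy principle can be stated compactly. In the standard textbook formulation (the paper may use different notation), the free energy F of a recognition density q over hidden states x, given sensory input s and a generative model p, is an upper bound on surprise:

```latex
F = \mathbb{E}_{q(x)}\!\left[\,\ln q(x) - \ln p(x, s)\,\right]
  = D_{\mathrm{KL}}\!\left[\,q(x) \,\|\, p(x \mid s)\,\right] - \ln p(s)
  \;\ge\; -\ln p(s)
```

Since the KL divergence is non-negative, minimizing F simultaneously pulls the internal model q towards the true posterior and reduces surprise about the inputs, which is exactly the 'difference' being minimized above.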

This is where electroactive polymer (EAP) hydrogel comes into play, as it features free ions that can migrate through the hydrogel in response to inputs. In the experiment, these inputs encode the ball position in a game of Pong. Much like experiments involving biological neurons, the hydrogel is stimulated via electrodes (in a 2 × 3 grid, matching the 2 × 3 grid of the game world), with other electrodes serving as outputs. The idea is that over time the hydrogel will ‘learn’ to optimize the outputs through ion migration, so that it ‘plays’ the game better, which should be reflected in the score, i.e. the rally length.
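To make the closed loop concrete, here is a loose software sketch of the protocol as described above. All names, the decoding rule, and the stub ‘medium’ are our own illustrations, not details taken from the paper:

```python
import random

ROWS, COLS = 2, 3  # the 2 x 3 electrode grid, matching the game world

class StubMedium:
    """Stand-in for the hydrogel: maps a stimulus to per-row output currents."""
    def respond(self, electrode_index):
        # A real EAP hydrogel responds through ion migration biased by its
        # stimulation history; this stub just returns noise.
        return [random.random() for _ in range(ROWS)]

def stimulation_electrode(ball_row, ball_col):
    """Encode the ball's grid cell as one of the six stimulation electrodes."""
    return ball_row * COLS + ball_col

def decode_paddle_row(outputs):
    """Take the strongest output channel as the paddle's row."""
    return max(range(len(outputs)), key=outputs.__getitem__)

def play_rally(medium, max_steps=100):
    """Play until the paddle misses; return the rally length, the scored metric."""
    rally = 0
    while rally < max_steps:
        ball_row, ball_col = random.randrange(ROWS), random.randrange(COLS)
        outputs = medium.respond(stimulation_electrode(ball_row, ball_col))
        if decode_paddle_row(outputs) != ball_row:
            break  # a miss ends the rally
        rally += 1
    return rally

print(play_rally(StubMedium()))
```

If the medium really does adapt, repeated calls to `play_rally` should trend towards longer rallies; with the noise stub they of course do not.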

Based on the results, some improvement in rally length can be observed, which the researchers present as statistically significant, implying that the hydrogel displays active inference and memory. Additional tests with deliberately incorrect inputs resulted in a marked decrease in performance. All of this raises many questions about whether the hydrogel truly displays emergent memory, and whether it validates the free energy principle as a Bayesian approach to understanding biological neural networks.

To the average Star Trek enthusiast, the concept of hydrogels, plasmas, and the like displaying the inklings of intelligent life will probably seem familiar, and for good reason. At this point, we do not have a complete understanding of the operation of the many billions of neurons in our own brains, so a bit of prodding and poking at some hydrogel and similar substances in a dish might be just the kind of thing we need to get some fundamental answers.

How Photomultipliers Detect Single Photons

If you need to detect photons down to very small numbers of them, you are looking at the use of a photomultiplier, as explained in a recent video by [Huygens Optics] on YouTube. The only way to realistically measure at such sensitivity levels is to amplify the signal with a photomultiplier tube (PMT). Although solid-state alternatives exist, this is still a field where vacuum tube technology remains highly relevant.

Despite being called ‘photomultipliers’, PMTs actually multiply an incoming electron current through a series of dynode stages, creating an output current large enough to be easily quantified by measurement equipment. They find use in everything from Raman spectroscopy to medical diagnostics and night vision sensors.

The specific PMT that [Huygens Optics] uses in the video is the Hamamatsu R928, which has a spectral response from 185 nm to 900 nm. Photons enter the tube through the electrode mesh and strike the photocathode, which ejects electrons. These initial electrons are then captured and amplified by each successive dynode stage, until the anode grid collects most of the electrons. The R928 has a gain of 1.0 × 10⁷ (10 million) at a −1 kV supply voltage, so each of its nine dynode stages multiplies the number of electrons by roughly six, with a response time of 22 ns.
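The quoted numbers hang together nicely: a quick back-of-the-envelope calculation recovers both the per-dynode gain of about six and the scale of a single-photon current pulse, under the rough assumption that the pulse charge is spread over the quoted 22 ns response time:

```python
# Back-of-the-envelope check of the quoted R928 figures.
stages = 9                     # dynode stages in the R928 (per its datasheet)
total_gain = 1.0e7             # at -1 kV supply voltage

per_dynode_gain = total_gain ** (1 / stages)
print(f"gain per dynode: {per_dynode_gain:.1f}")    # ~6.0

# One photoelectron becomes ~1e7 electrons at the anode; spreading that
# charge over the 22 ns response time gives the current scale that the
# readout circuitry has to handle.
e = 1.602e-19                  # elementary charge, coulomb
pulse_current = total_gain * e / 22e-9
print(f"single-photon pulse current: {pulse_current * 1e6:.0f} uA")  # ~73 uA
```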

PMTs are unsurprisingly not cheap, but [Huygens Optics] was lucky to find surplus R928s on Marktplaats (a Dutch online marketplace) for €100, including a cover, optics, and a PCB with the socket, high-voltage supply (a Hamamatsu C4900), and so on. Without documentation, the trick was to reverse-engineer the PCB’s connections before it could be used. In the video, the components and their functions are all briefly covered, as well as the use of opamps like the AD817 to handle the output signal of the R928. Afterwards, the operation of the PMT is demonstrated, which makes clear just how sensitive it is: it requires an extremely dark space to avoid being swamped with photons.

An interesting part of the demonstration is that it also shows the presence of thermionic emission, listed as ‘anode dark current’ in the datasheet. If this becomes an issue, it can be countered by cooling the PMT. In an upcoming video the R928 will be used for more in-depth experiments, to show much more of what these devices are capable of.

Thanks to [cliff claven] for the tip.

Continue reading “How Photomultipliers Detect Single Photons”

Using A Potato As Photographic Recording Surface

Following in the tracks of unconventional science projects, [The Thought Emporium] seeks to answer the question of whether you can use a potato as a photographic recording medium. This is less crazy than it sounds: analog photography (and photograms) is ultimately about inducing a light-based change in some kind of medium. That raises the question of whether anything in a potato is light-sensitive enough to capture an image, or what we can add to make it so.

Unfortunately, a potato by itself cannot record light, as it is just starch and salty water, so it needs a bit of help. Here [The Thought Emporium] takes us through the history of black-and-white photography, starting with a UV-sensitive mixture consisting of turmeric and rubbing alcohol. After filtering this mixture and staining a sheet of paper with it, exposing only part of the paper to strong UV light creates a clear image, which can be intensified using a borax solution. Unfortunately, this method fails to work on a potato slice.

The next attempt was a cyanotype, which involves covering a surface in a solution of 25 g of ferric ammonium oxalate, 10 g of potassium ferricyanide, and 100 mL of water, then exposing it to UV light. This creates the brilliant blue that gave us the term ‘blueprint’. As it turns out, this method works really well on potato slices too, with lots of detail, but the exposure process is very slow.

Cyanotype production can be sped up by spraying the surface with an ammonium oxalate and oxalic acid solution to modify the pH, exposing the surface to UV, and then spraying it with a 10 g per 100 mL potassium ferricyanide solution, leading to fast exposures and good detail.

On paper, this is still not as good as the all-time favorite: silver nitrate. Silver prints are the staple of black-and-white photography, with the silver halide reacting very quickly to light exposure, after which a fixer like sodium thiosulfate makes the changes permanent. Cyanotype or silver nitrate film also works quite well in a 35 mm camera, but of course produces a negative image that has to be inverted, which is done digitally in the video, to tease out the recorded image.
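The digital inversion step itself is trivial in software. Here is a minimal sketch using the Pillow library (the file names are placeholders, and this is our illustration rather than the video’s exact workflow):

```python
# Minimal digital inversion of a scanned negative with Pillow.
from PIL import Image, ImageOps

negative = Image.open("scanned_negative.png").convert("L")  # load as grayscale
positive = ImageOps.invert(negative)                        # 255 minus each pixel
positive.save("positive.png")
```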

The big disappointment for the potatoes came with the developer, the use of which turned out to be a soggy no-go. Ideal would be something like the emulsion used for direct positive paper, which suspends the silver solution in a gel but, unlike plain silver nitrate, produces a positive image. As for using the potato itself as the camera, this was also briefly attempted by putting a pinhole in a potato with a light-sensitive recording surface on the other side, but the result did indeed look like the photograph was taken with a potato.

Continue reading “Using A Potato As Photographic Recording Surface”

Voyager 1 Completes Tricky Thruster Reconfiguration

After 47 years, it’s little wonder that the hydrazine-powered thrusters of Voyager 1, used to orient the spacecraft so that its 3.7 meter (12 foot) diameter antenna always points back towards Earth, are getting somewhat clogged up. As a result, the team has now switched back to the thrusters it originally retired back in 2018. Each Voyager spacecraft has three sets (branches) of thrusters: two originally intended for attitude control, and one for trajectory correction maneuvers. Since leaving the Solar System many years ago, however, Voyager 1’s navigational needs have become more basic, allowing all three sets to be used effectively interchangeably.

The first set was used until 2002, when its fuel tubes were found to be clogging up with silicon dioxide from an aging rubber diaphragm in the fuel tank. The second set of attitude control thrusters was subsequently used until 2018, when clogging caused the team to switch to the third and final set. It is this last set that is now even more clogged than the second set, with the fuel tube opening reduced from about 0.25 mm to 0.035 mm. Unlike a few decades ago, the spacecraft now runs much colder due to energy-conserving measures, which complicates switching thruster sets: turning on a cold thruster set could damage it, so it had to be warmed up first with its thruster heaters.
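Treating the quoted openings as diameters (an assumption on our part), the flow cross-section scales with the square of the diameter, so a quick calculation shows just how little of the tube remains open:

```python
# Open flow area scales with the square of the opening diameter.
d_original, d_clogged = 0.25, 0.035              # mm, from the article
area_factor = (d_original / d_clogged) ** 2
print(f"open area reduced ~{area_factor:.0f}x")  # ~51x less flow area
```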

Warming them up posed a conundrum: where to temporarily borrow the power from, as turning off one of the science instruments carried the risk of it never coming back online. Ultimately, a main heater was turned off for an hour, which freed up enough power for the thruster swap and lets Voyager 1 breathe a bit more freely for now.

Compared to the recent scare involving Voyager 1, when we thought its computer systems might have died, this matter probably feels almost routine to the team in charge. But with the most distant man-made object in outer space, nothing is ever truly routine.


Cruise Ship-Lengthening Surgery: All The Cool Companies Are Doing It

Sliding in an extra slice of cruise ship to lengthen it. (Credit: Silversea cruises)

The number of people going on cruises keeps rising year over year, with the number of passengers carried increasing from just over 3.7 million in 1990 to well over 28 million in 2023. This has meant increasing demand for more, and also much larger, cruise ships, which has led to an interesting phenomenon: it has become economical to chop up an existing cruise ship and insert an extra slice, adding many meters to each deck. This makes intuitive sense, as the added segment is fairly ‘dumb’, with no engine room or control systems, but mostly more rooms and cabins.

The current top-of-the-line cruise ship experience is exemplified by the Icon class being constructed for the Royal Caribbean Group. The first in this line is the Icon of the Seas, the largest cruise ship in the world, with a length of 364.75 meters and a gross tonnage of 248,663. All of this cost €1.86 billion and over two years of construction time, compared to around $80 million and a few months in the drydock for a lengthening. When combined with a scheduled maintenance period in the drydock, this ‘jumboization’ process can be a great deal that gives existing cruise ships a new lease on life.

Extending a ship in this manner is fairly routine as well, with many ships beyond cruise ships seeing the torch. After the hull is split, the newly built segment is slid into place, the metal segments are welded together, and wires, tubing, and more are spliced together, before the inside and outside receive a new coat of paint that makes it seem like nothing ever happened to the ship.

Continue reading “Cruise Ship-Lengthening Surgery: All The Cool Companies Are Doing It”

Assessing The Energy Efficiency Of Programming Languages

Programming languages are generally defined as a more human-friendly way to program computers than raw machine code. Within the realm of these languages there is a wide range in how close the programmer is allowed to get to the bare metal, which can ultimately affect the performance and efficiency of an application. One metric that has become more important over the years is energy efficiency, as datacenters keep growing along with their power demands. If picking one programming language over another saves even 1% of a datacenter’s electricity consumption, this could prove to be highly beneficial, assuming it stacks up against all the other factors one has to consider.

There have been some attempts over the years to put a number on the energy efficiency of specific programming languages, including a paper by Rui Pereira et al. from 2021 (preprint PDF), published in Science of Computer Programming, which covers running a couple of small benchmarks, measuring system power consumption, and drawing conclusions from that. When Hackaday covered the 2017 version of the paper at the time, it came with the expected claim that C is the most efficient programming language, while scripting languages like JavaScript, Python, and Lua trailed far behind.
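For reference, this kind of measurement is typically done on Linux with Intel’s RAPL energy counters, which are exposed through the standard powercap sysfs interface. A minimal sketch (the benchmark command is a placeholder, and reading the counter usually requires root):

```python
import subprocess, time

# Package-0 energy counter in microjoules (standard Linux powercap path).
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

before, t0 = read_energy_uj(), time.time()
subprocess.run(["./benchmark"], check=True)  # placeholder for the workload under test
elapsed = time.time() - t0
joules = (read_energy_uj() - before) / 1e6   # note: the counter wraps on long runs

print(f"{joules:.1f} J over {elapsed:.1f} s -> {joules / elapsed:.1f} W average")
```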

C winning is probably no surprise, given that it is effectively high-level assembly code, but languages such as C++ and Ada should, by design, see no severe performance penalty relative to C, and this is where this particular study begins to fall apart. So what is the truth, and can we even capture ‘efficiency’ in a simple ranking?

Continue reading “Assessing The Energy Efficiency Of Programming Languages”