Hydrogen Generation With Seawater, Aluminum, And… Coffee?

A team at MIT led by [Professor Douglas Hart] has discovered a new, potentially game-changing method for generating hydrogen. Using seawater, pure aluminum, and a compound found in coffee grounds, the team was able to generate hydrogen at a respectable rate, recovering the vast majority of the theoretical yield from the seawater/aluminum mixture. Though the process uses indium and gallium, both rare and expensive materials, it has so far been able to recover 90% of the indium-gallium, which can then be recycled into the next batch. Aluminum holds twice as much energy as diesel, and 40 times that of lithium-ion batteries, so finding a way to harness that energy could have a huge impact on the amount of fossil fuel burned by humans!
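For a rough sense of scale, here’s a back-of-the-envelope comparison using approximate volumetric energy densities. These are ballpark literature values we’ve plugged in ourselves, not figures from the MIT work:

```python
# Approximate volumetric energy densities in MJ/L (ballpark literature
# values, not numbers from the MIT study).
ALUMINUM = 84.0   # energy released by fully oxidizing aluminum
DIESEL = 36.0
LI_ION = 2.0      # near the upper end for current lithium-ion cells

print(f"Aluminum vs diesel: {ALUMINUM / DIESEL:.1f}x")  # ~2.3x
print(f"Aluminum vs Li-ion: {ALUMINUM / LI_ION:.0f}x")  # ~40x
```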

Pure, unoxidized aluminum reacts directly with water to produce hydrogen, along with aluminum oxyhydroxide and aluminum hydroxide. However, any aluminum that has been in contact with atmospheric air immediately picks up a coating of hard, unreactive aluminum oxide, which does not react the same way. Another issue is that seawater significantly slows the reaction with pure aluminum. The researchers found that the indium-gallium mix not only allowed the reaction to proceed, by creating an interface where the water and pure aluminum can react, but also coated the aluminum pellets to prevent further oxidation. This worked well, but the resulting reaction was still very slow.
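The reactions themselves are textbook chemistry, and a quick stoichiometry check (our own back-of-the-envelope math, not the team’s figures) shows what the theoretical yield looks like:

```python
# Aluminum-water reactions; both give the same hydrogen yield per mole
# of aluminum:
#   2 Al + 4 H2O -> 2 AlO(OH) + 3 H2   (aluminum oxyhydroxide)
#   2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2   (aluminum hydroxide)
M_AL = 26.98       # g/mol, aluminum
M_H2 = 2.016       # g/mol, hydrogen gas
MOLAR_VOL = 24.45  # L/mol, ideal gas at 25 degC and 1 atm

mol_al = 1000.0 / M_AL   # moles of aluminum in 1 kg
mol_h2 = 1.5 * mol_al    # 3 mol H2 for every 2 mol Al

print(f"H2 per kg of Al: {mol_h2 * M_H2:.0f} g")            # ~112 g
print(f"Gas volume at 25 degC: {mol_h2 * MOLAR_VOL:.0f} L") # ~1360 L
```

So every kilogram of aluminum pellets is good for roughly 1.4 cubic meters of hydrogen at room conditions, which is why recovering most of that theoretical yield matters.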

Apparently “on a lark”, they added coffee grounds. Caffeine was already known to act as a chelating agent for both aluminum and gallium, and the addition of coffee grounds increased the reaction rate by a huge margin, to the point where it matched the rate of pure aluminum in deionized water. Even with wildly varying concentrations of caffeine the reaction rate stayed high, so the researchers set out to pin down which part of the caffeine molecule was responsible. It turned out to be imidazole, a readily available organic compound. The remaining challenge was balancing the amount of caffeine or imidazole added against the gallium-indium recovery rate: too much of either drastically reduces the amount of recoverable gallium-indium.

Continue reading “Hydrogen Generation With Seawater, Aluminum, And… Coffee?”

Peering Into The Black Box Of Large Language Models

Large Language Models (LLMs) can produce extremely human-like communication, but their inner workings are something of a mystery. Not a mystery in the sense that we don’t know how an LLM works, but in the sense that the exact process by which a particular input becomes a particular output is a black box.

This “black box” trait is common to neural networks in general, and LLMs are very deep neural networks. It simply isn’t possible to explain precisely why a specific input produces one particular output rather than another.

Why? Because neural networks are neither databases nor lookup tables. In a neural network, the activation of individual neurons cannot be meaningfully mapped to specific concepts or words. The connections are so complex, numerous, and multidimensional that trying to tease out their relationships in any straightforward way simply does not make sense.
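A toy example (our own, not from the linked article) makes the contrast concrete. In a lookup table you can point at the one entry holding a fact; in even a tiny network, every output is smeared across shared weights, and zeroing a single neuron nudges everything a little rather than deleting any one fact:

```python
import numpy as np

# A lookup table: each fact lives in exactly one place.
lookup = {"cat": "feline", "dog": "canine"}
del lookup["cat"]  # exactly one fact is gone, nothing else changes

# A tiny feed-forward network: "knowledge" lives in shared weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))  # hidden layer: 8 inputs -> 4 units
W2 = rng.normal(size=(3, 4))  # output layer: 4 units -> 3 outputs

x = rng.normal(size=8)
h = np.tanh(W1 @ x)           # hidden activations
print("full output:    ", W2 @ h)

h[0] = 0.0                    # "ablate" one hidden neuron
print("one unit zeroed:", W2 @ h)  # every output shifts; no single fact vanishes
```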

Continue reading “Peering Into The Black Box Of Large Language Models”

The ’80s Multi-Processor System That Never Was

Until the early 2000s, the computer processors available on the market were essentially all single-core chips. There were some niche systems that used multiple processors on the same board for improved parallel operation, but it wasn’t until the POWER4 processor from IBM in 2001, followed by the likes of the AMD Opteron and Intel Pentium D, that we got multi-core processors. If things had gone just slightly differently with this experimental platform, though, we might have had multi-processor systems available for general use as early as the ’80s instead of two decades later.

The team behind this chip was from the University of California, Berkeley, a place known for such other innovations as RAID, BSD, SPICE, and some of the first RISC processors. This processor architecture was based on RISC as well, and was known as Symbolic Processing Using RISC (SPUR). It was specially designed to integrate with the Lisp programming language, but its major feature was a set of parallel processors on a common bus, which allowed parallel operations to be computed at much greater speed than comparable systems of the time. The use of RISC also allowed a smaller group to develop something like this; although more instructions need to be executed, they can often complete faster than on other architectures.

The linked article from [Babbage] goes into much more detail about the architecture of the system as well as some of the things about UC Berkeley that made projects like this possible in the first place. It’s a fantastic deep-dive into a piece of somewhat obscure computing history that, had it been more commercially viable, could have changed the course of computing. Berkeley RISC did go on to have major impacts in other areas of computing and was a significant influence on the SPARC system as well.

Implantable Battery Charges Itself

Battery technology is the major limiting factor for large-scale adoption of electric vehicles and grid-level energy storage. Marginal improvements have been made to lithium cells in the past decade, but the technology has arguably been fairly stagnant, at least at massive industrial scales. At smaller scales there have been some more outside-the-box developments for things like embedded systems and, at least in the case of this battery that can recharge itself, implantable medical devices.

The tiny battery uses sodium and gold for the anode and cathode, and takes oxygen from the body to complete the chemical reaction. With a virtually unlimited supply of oxygen available to it, the battery essentially never needs to be replaced or recharged. In lab tests it took some time for the implant site to heal before there was a reliable oxygen supply, but once healing was complete the battery’s performance leveled off.

Currently the tiny batteries have only been tested in rats as a proof-of-concept to demonstrate the chemistry and electricity generation capabilities, but there didn’t appear to be any adverse consequences. Technology like this could be a big improvement for implanted devices like pacemakers if it can scale up, and could even help fight diseases and improve healing times. For some more background on implantable devices, [Dan Maloney] catches us up on the difficulties of building and powering replacement hearts for humans.

Experiencing Visual Deficits And Their Impact On Daily Life, With VR

Researchers presented an interesting project at the 2024 IEEE Conference on Virtual Reality and 3D User Interfaces: it uses VR and eye tracking to simulate visual deficits such as macular degeneration, diabetic retinopathy, and other visual diseases and impairments.

Typical labels and pill bottles can be shockingly inaccessible to a variety of common visual deficits.

VR offers a unique method of allowing people to experience the impact of living with such conditions, a point driven home particularly well by having the user see for themselves the effect on simple real-world tasks such as choosing a pill bottle, or picking up a mug. Conditions like macular degeneration (which causes loss of central vision) are more accurately simulated by using eye tracking, a technology much more mature nowadays than it was even just a few years ago.
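As a sketch of the idea (our own toy version, not the researchers’ code), gaze-contingent simulation boils down to masking the image around wherever the eye tracker says the user is looking, so the simulated blind spot follows the eye exactly:

```python
import numpy as np

def apply_central_scotoma(frame, gaze_xy, radius_px):
    """Black out a disc around the current gaze point to mimic the
    loss of central vision seen in macular degeneration."""
    height, width = frame.shape[:2]
    ys, xs = np.ogrid[:height, :width]
    gx, gy = gaze_xy
    mask = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius_px ** 2
    out = frame.copy()
    out[mask] = 0  # a blurred copy here would give a softer deficit
    return out

def get_gaze():
    """Stub: a real system would poll the headset's eye tracker."""
    return (320, 240)

frame = np.full((480, 640, 3), 200, dtype=np.uint8)  # placeholder image
masked = apply_central_scotoma(frame, get_gaze(), radius_px=80)
```

Run per rendered frame, the mask stays locked to the fovea no matter where the user looks, which is what makes the simulation convincing.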

The abstract for the presentation is available here, and if you have some time, be sure to check out the main index for all of the VR research demos, because there are some neat ones there, including a method of manipulating a user’s perception of the shape of the ground under their feet by electrically stimulating the tendons of the ankle.

Eye tracking is in a few consumer VR products nowadays, but it’s also perfectly feasible to roll your own in a surprisingly slick way. It’s even been used on jumping spiders to gain insights into the fascinating and surprisingly deep perceptual reality these creatures inhabit.

Reprogrammable Transistors

Not every computer can rely on a disk drive when it needs to store persistent data. Embedded systems especially have pushed the development of a series of erasable programmable read-only memories (EPROMs) because of their need for speed and reliability. But erasing memory and writing it over again, whether it’s an EPROM, an EEPROM, an FPGA, or some other type of configurable solid-state device, is just scratching the surface of what it might be possible to get integrated circuits and their transistors to do. Now a team has created a transistor that is itself reprogrammable.

Rather than doping the semiconductor material with impurities to create the electrical characteristics needed for the transistor, the team from TU Wien in Vienna has developed a way to “electrostatically dope” the semiconductor, using electric fields instead of physical impurities to achieve the performance needed in the material. A second gate, called the program gate, can be used to reconfigure the electric fields within the transistor, changing its properties on the fly. This still requires some electrical control, though, so the team doesn’t expect their new invention to outright replace all transistors in the future, and they also note that it’s unlikely that these could be made as small as existing transistors due to the extra complexity.
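A behavioral toy model (our illustration of the concept, not the team’s device physics) captures the headline feature: the program gate selects the carrier type, so the very same device switches like an n-FET in one mode and like a p-FET in the other:

```python
def rfet_conducts(control_gate_high: bool, program_gate_high: bool) -> bool:
    """Toy model of a reconfigurable FET: the program gate picks the
    carrier type, flipping which control-gate level opens the channel."""
    if program_gate_high:           # "electron mode": behaves like an n-FET
        return control_gate_high
    else:                           # "hole mode": behaves like a p-FET
        return not control_gate_high

for program in (True, False):
    for control in (False, True):
        print(f"program={program!s:<5} control={control!s:<5} "
              f"-> conducts={rfet_conducts(control, program)}")
```

Flip the program gate at runtime and a circuit’s pull-up and pull-down roles can trade places, which is where the promise of doing more with fewer transistors comes from.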

While the article from IEEE lists some potential applications for this technology in a broad sense, we’d like to see what these transistors are actually capable of on a more specific level. It seems like these kinds of circuits could improve efficiency, since fewer transistors might be needed for a wider variety of tasks, and there are certainly some enhanced security features they could provide as well. For a refresher on the operation of an everyday transistor, though, take a look at this guide to the field-effect transistor.

Arctic Adventures With A Data General Nova II — The Equipment

As I walked into the huge high bay that was to be my part-time office for the next couple of years, I was greeted by all manner of abandoned equipment haphazardly scattered around the room. As I later learned, this place was a graveyard for old research projects, cast aside to be gutted for parts or forgotten entirely. This was my first day on the job as a co-op student at the Georgia Tech Engineering Experiment Station (EES, since renamed to GTRI). The engineer who gave me the orientation tour that day, [Steve], pointed to a dusty electronics rack in one corner of the room and said my job would be to bring that old minicomputer back to life. Once it was running, I would operate it as directed by the radar researchers and scientists in our group. Thus began a journey that culminated in an Arctic adventure two years later.

The Equipment

The computer in question was a Data General (DG) minicomputer. DG was founded by former Digital Equipment Corporation (DEC) employees in the 1960s, and introduced the 16-bit Nova computer in 1969 to compete with DEC’s PDP-8. I was gawking at a fully equipped Nova 2 system, which had been introduced in 1975. This machine and its accessories occupied two full racks, with an adjacent printer and a table holding a terminal and pen plotter. There was little to no documentation. Just to turn it on, I had to pester engineers until I found one who could teach me the front-panel switch incantation needed to boot it up. Continue reading “Arctic Adventures With A Data General Nova II — The Equipment”