Peering Into The Black Box Of Large Language Models

Large Language Models (LLMs) can produce extremely human-like communication, but their inner workings are something of a mystery. Not a mystery in the sense that we don’t know how an LLM works, but a mystery in the sense that the exact process of turning a particular input into a particular output is something of a black box.

This “black box” trait is common to neural networks in general, and LLMs are very deep neural networks. It is not really possible to explain precisely why a specific input produces a particular output, and not something else.

Why? Because neural networks are neither databases nor lookup tables. In a neural network, the discrete activations of individual neurons cannot be meaningfully mapped to specific concepts or words. The connections are complex, numerous, and multidimensional to the point that trying to tease out their relationships in any straightforward way simply does not make sense.
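This is a big part of why interpretability research is so hard. As a toy illustration of the difference (our own sketch, nothing to do with any real LLM), compare a lookup table, where each fact lives in exactly one entry, with even a minimal two-layer network, where every output depends on every weight:

```python
# Toy illustration only: a dict stores each fact in one place, while even a
# tiny neural network spreads the computation across every weight it has.
import numpy as np

rng = np.random.default_rng(0)

# A lookup table: the association is stored in exactly one entry.
lookup = {"sky": "blue", "grass": "green"}
print(lookup["sky"])  # -> "blue", traceable to a single entry

# A tiny two-layer network: the output is a dense mix of all the weights.
x = rng.normal(size=8)           # some input representation
W1 = rng.normal(size=(16, 8))    # first-layer weights
W2 = rng.normal(size=(4, 16))    # second-layer weights

hidden = np.tanh(W1 @ x)         # every hidden unit depends on every input
output = W2 @ hidden             # every output depends on every hidden unit

# Zeroing one weight shifts the output a little everywhere rather than
# deleting one discrete "fact" -- the information is smeared across the matrix.
W1_damaged = W1.copy()
W1_damaged[3, 5] = 0.0
output_damaged = W2 @ np.tanh(W1_damaged @ x)
print(np.max(np.abs(output - output_damaged)))  # a small, diffuse change
```

Scale that smearing up to billions of weights across dozens of layers and it becomes clear why "which neuron knows this?" is usually the wrong question.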

Continue reading “Peering Into The Black Box Of Large Language Models”

The ’80s Multi-Processor System That Never Was

Until the early 2000s, the computer processors available on the market were essentially all single-core chips. There were some niche layouts that used multiple processors on the same board for improved parallel operation, and it wasn’t until the POWER4 processor from IBM in 2001 and later things like the AMD Opteron and Intel Pentium D that we got multi-core processors. If things had gone just slightly differently with this experimental platform, though, we might have had multi-processor systems available for general use as early as the 80s instead of two decades later.

The team behind this chip was from the University of California, Berkeley, a place known for such other innovations as RAID, BSD, SPICE, and some of the first RISC processors. This processor architecture would be based on RISC as well, and would be known as Symbolic Processing Using RISC. It was specially designed to integrate with the Lisp programming language, but its major feature was a set of parallel processors with a common bus that allowed parallel operations to be computed at a much greater speed than comparable systems at the time. The use of RISC also allowed a smaller group to develop something like this, and although more instructions need to be executed, they can often be executed faster than on other architectures.

The linked article from [Babbage] goes into much more detail about the architecture of the system as well as some of the things about UC Berkeley that made projects like this possible in the first place. It’s a fantastic deep-dive into a piece of somewhat obscure computing history that, had it been more commercially viable, could have changed the course of computing. Berkeley RISC did go on to have major impacts in other areas of computing and was a significant influence on the SPARC system as well.

Implantable Battery Charges Itself

Battery technology is the major limiting factor for the large-scale adoption of electric vehicles and grid-level energy storage. Marginal improvements have been made for lithium cells in the past decade, but the technology has arguably been fairly stagnant, at least on massive industrial scales. At smaller scales there have been some more outside-of-the-box developments for things like embedded systems and, at least in the case of this battery that can recharge itself, implantable batteries for medical devices.

The tiny battery uses sodium and gold for the anode and cathode, and takes oxygen from the body to complete the chemical reaction. With a virtually unlimited supply of oxygen available to it, the battery essentially never needs to be replaced or recharged. In lab tests it took some time for the implant site to heal before there was a reliable oxygen supply, but once healing was complete the battery’s performance leveled off.

Currently the tiny batteries have only been tested in rats as a proof-of-concept to demonstrate the chemistry and electricity generation capabilities, but there didn’t appear to be any adverse consequences. Technology like this could be a big improvement for implanted devices like pacemakers if it can scale up, and could even help fight diseases and improve healing times. For some more background on implantable devices, [Dan Maloney] catches us up on the difficulties of building and powering replacement hearts for humans.

Experiencing Visual Deficits And Their Impact On Daily Life, With VR

Researchers presented an interesting project at the 2024 IEEE Conference on Virtual Reality and 3D User Interfaces: it uses VR and eye tracking to simulate visual deficits such as macular degeneration, diabetic retinopathy, and other visual diseases and impairments.

Typical labels and pill bottles can be shockingly inaccessible to a variety of common visual deficits.

VR offers a unique method of allowing people to experience the impact of living with such conditions, a point driven home particularly well by having the user see for themselves the effect on simple real-world tasks such as choosing a pill bottle, or picking up a mug. Conditions like macular degeneration (which causes loss of central vision) are more accurately simulated by using eye tracking, a technology much more mature nowadays than it was even just a few years ago.
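The paper doesn’t include its rendering code, but the core idea is straightforward to sketch. Below is a rough, hypothetical Python/NumPy version of a gaze-contingent scotoma (function and parameter names are ours): darken a soft-edged region around whatever point the eye tracker reports, so the deficit follows the user’s gaze instead of sitting at a fixed spot on the screen.

```python
# Rough sketch, not the researchers' code: simulate central vision loss by
# attenuating the frame around the tracked gaze point.
import numpy as np

def apply_central_scotoma(frame: np.ndarray, gaze: tuple[int, int],
                          radius_px: float = 120.0) -> np.ndarray:
    """Darken the frame with a soft mask centered on the gaze point (x, y)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (xs - gaze[0]) ** 2 + (ys - gaze[1]) ** 2
    # Mask is ~0 at the gaze center and rises to ~1 well outside the scotoma.
    mask = 1.0 - np.exp(-dist2 / (2.0 * radius_px ** 2))
    return (frame * mask[..., None]).astype(frame.dtype)

# Example: a plain grey test frame with the scotoma parked over a fake gaze point.
frame = np.full((480, 640, 3), 200, dtype=np.uint8)
degraded = apply_central_scotoma(frame, gaze=(320, 240))
```

Because the mask is re-centered on every frame, the wearer can’t simply glance around the blind spot, which is exactly what makes the simulation feel like the real condition rather than a smudge on the display.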

The abstract for the presentation is available here, and if you have some time, be sure to check out the main index for all of the VR research demos because there are some neat ones there, including a method of manipulating a user’s perception of the shape of the ground under their feet by electrically stimulating the tendons of the ankle.

Eye tracking is in a few consumer VR products nowadays, but it’s also perfectly feasible to roll your own in a surprisingly slick way. It’s even been used on jumping spiders to gain insights into the fascinating and surprisingly deep perceptual reality these creatures inhabit.

Reprogrammable Transistors

Not every computer can make use of a disk drive when it needs to store persistent data. Embedded systems especially have pushed the development of a series of erasable programmable read-only memories (EPROMs) because of their need for speed and reliability. But erasing memory and writing it over again, whether it’s an EPROM, an EEPROM, an FPGA, or some other type of configurable solid-state memory, is just scratching the surface of what it might be possible to get integrated circuits and their transistors to do. This team has created a transistor that itself is programmable.

Rather than doping the semiconductor material with impurities to create the electrical characteristics needed for the transistor, the team from TU Wien in Vienna has developed a way to “electrostatically dope” the semiconductor, using electric fields instead of physical impurities to achieve the performance needed in the material. A second gate, called the program gate, can be used to reconfigure the electric fields within the transistor, changing its properties on the fly. This still requires some electrical control, though, so the team doesn’t expect their new invention to outright replace all transistors in the future, and they also note that it’s unlikely that these could be made as small as existing transistors due to the extra complexity.
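We couldn’t resist sketching what that flexibility might look like one level up, at the logic level. The following is purely a behavioral toy in Python, not the TU Wien device physics: a helper models whether an n-like or p-like device conducts, and a single two-input cell is “reprogrammed” between NAND and NOR by swapping which role each device plays.

```python
# Behavioral toy model only: shows how one reconfigurable cell could stand in
# for two different logic gates, depending on how its devices are programmed.

def conducts(polarity: str, gate_high: bool) -> bool:
    """An n-like device conducts on a high gate, a p-like device on a low gate."""
    return gate_high if polarity == "n" else not gate_high

def reconfigurable_gate(a: bool, b: bool, mode: str) -> bool:
    """Two-input cell acting as 'nand' or 'nor', selected by its program bits."""
    if mode == "nand":
        pull_up = conducts("p", a) or conducts("p", b)     # parallel pull-up
        pull_down = conducts("n", a) and conducts("n", b)  # series pull-down
    else:  # "nor"
        pull_up = conducts("p", a) and conducts("p", b)    # series pull-up
        pull_down = conducts("n", a) or conducts("n", b)   # parallel pull-down
    assert pull_up != pull_down, "complementary networks should never fight"
    return pull_up  # output is high whenever the pull-up network conducts

for a in (False, True):
    for b in (False, True):
        print(a, b, "NAND:", reconfigurable_gate(a, b, "nand"),
              "NOR:", reconfigurable_gate(a, b, "nor"))
```

In the real device the reprogramming happens electrostatically inside each transistor rather than by rewiring a netlist, but the payoff the researchers describe is the same: one physical block that can take the place of several fixed-function ones.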

While the article from IEEE lists some potential applications for this technology in the broad sense, we’d like to see what these transistors are actually capable of doing on a more specific level. It seems like these types of circuits could improve efficiency, since fewer transistors might be needed for a wider variety of tasks, and there are certainly some enhanced security features they could provide as well. For a refresher on the operation of an everyday transistor, though, take a look at this guide to the field-effect transistor.

Arctic Adventures With A Data General Nova II — The Equipment

As I walked into the huge high bay that was to be my part-time office for the next couple of years, I was greeted by all manner of abandoned equipment haphazardly scattered around the room. As I later learned, this place was a graveyard for old research projects, cast aside to be later gutted for parts or forgotten entirely. This was my first day on the job as a co-op student at the Georgia Tech Engineering Experiment Station (EES, since renamed to GTRI). Steve, the engineer who gave me the orientation tour that day, pointed to a dusty electronic rack in one corner of the room and said my job would be to bring that old minicomputer back to life. Once running, I would operate it as directed by the radar researchers and scientists in our group. Thus began a journey that resulted in an Arctic adventure two years later.

The Equipment

The computer in question was a Data General (DG) minicomputer. DG was founded by former Digital Equipment Corporation (DEC) employees in the 1960s. They introduced the 16-bit Nova computer in 1969 to compete with DEC’s PDP-8. I was gawking at a fully-equipped Nova 2 system, which had been introduced in 1975. This machine and its accessories occupied two full racks, with an adjacent printer and a table with a terminal and pen plotter. There was little to no documentation. Just to turn it on, I had to pester engineers until I found one who could teach me the necessary front-panel switch incantation to boot it up. Continue reading “Arctic Adventures With A Data General Nova II — The Equipment”

Could Moon Mining Spoil Its Untouched Grandeur And Science Value?

It’s 2024. NASA’s Artemis program is in full swing, and we’re hoping to get back to the surface of the Moon real soon. Astronauts haven’t walked on the beloved sky rock since 1972! A human landing was scheduled for 2025, which has now been pushed back to 2026, and we’re all getting a bit antsy about it. Last time we wanted to go, it only took 8 years!

Now, somehow, it’s harder, but NASA also has its sights set higher. It no longer wants to just toddle about the Moon for a bit to wave at the TV cameras. This time, there’s talk of establishing permanent bases on the Moon, and actually doing useful work, like mining. It’s a tantalizing thought, but what does this mean for the sanctity of one of the last pieces of real estate yet to be spoilt by humans? Researchers are already arguing that we need to move to protect this precious, unique environment.

Continue reading “Could Moon Mining Spoil Its Untouched Grandeur And Science Value?”