The World Wide Web And The Death Of Graceful Degradation

In the early days of the World Wide Web – when the Year 2000 and the threatened global collapse of society were still years away – crafting a website was both something special and increasingly common. Courtesy of free hosting services popping up left and right in a landscape still mercifully devoid of today’s ‘social media’, the WWW’s democratizing influence allowed anyone to try their hand at web design. With varying results, as those of us who ventured into the Geocities wilds can attest.

Back then we naturally had web standards, courtesy of the W3C, though Microsoft, Netscape, and others tried to upstage each other with differing levels of standards support (e.g. no iframes in Netscape 4.7) and a range of proprietary HTML and CSS tags. Most people were on dial-up or equivalently anemic internet connections, so designing a website could be a painful lesson in optimization and in targeting the lowest common denominator.

This was also the era of graceful degradation, when we web designers had it hammered into our skulls that a website should remain usable and navigable even in a text-only browser like Lynx or w3m, or in an antique browser like IE 3.x. Fast-forward a few decades and the situation has inverted: it is now your responsibility as a website visitor to have the latest browser and the fastest internet connection, or you may be denied access altogether.

What exactly happened to flip everything upside-down, and is this truly the WWW that we want?

Continue reading “The World Wide Web And The Death Of Graceful Degradation”

An Awful 1990s PDA Delivers AI Wisdom

There was a period in the 1990s when it seemed like the personal digital assistant (PDA) was going to be the device of the future. If you were lucky you could afford a Psion, a PalmPilot, or even the famous Apple Newton — but to trap the unwary there was a slew of far less capable machines competing for market share.

[Nick Bild] has one of these, branded Rolodex, and in a bid to make using a generative AI less alluring, he’s set it up as the interface to an LLM hosted on a Raspberry Pi 400. This hack is thus mostly a tale of reverse engineering the device’s serial protocol to free it from its Windows application.

Finding the baud rate was simple enough, but the encoding scheme turned out to be unexpectedly fiddly. Sadly the device doesn’t come with a terminal, since these machines were very much single-purpose, but it does have a memo app that allows the transfer of text files. This wildly inefficient medium is the channel through which communication with the LLM happens, and it neatly satisfies the requirement of making the process painful.
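
We don’t have [Nick Bild]’s code in front of us, but the general shape of such a bridge is easy to imagine. Below is a minimal, hypothetical Python sketch of the Pi-side glue: it waits for a memo to arrive over the serial link, hands the text to an LLM, and pushes the reply back as a new memo. The port name, baud rate, end-of-transfer marker and the query_llm() stub are all assumptions of ours, and the real transfer would have to speak the memo app’s reverse-engineered framing rather than plain ASCII.

```python
# Hypothetical Pi-side bridge between the PDA's memo transfer and a local LLM.
# Port, baud rate and the end-of-memo delimiter are placeholders; the real
# protocol uses whatever framing the memo application's transfer mode expects.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumption: USB-serial adapter wired to the PDA
BAUD = 9600             # assumption: the discovered baud rate
EOT = b"\x04"           # assumption: end-of-transfer marker

def query_llm(prompt: str) -> str:
    # Stub: in the real project this would call the LLM hosted on the Pi 400.
    return "LLM reply to: " + prompt

def read_memo(link: serial.Serial) -> str:
    """Accumulate bytes from the PDA until the end-of-transfer marker arrives."""
    buf = bytearray()
    while not buf.endswith(EOT):
        chunk = link.read(1)
        if chunk:
            buf.extend(chunk)
    return buf[:-1].decode("ascii", errors="replace")

def write_memo(link: serial.Serial, text: str) -> None:
    """Send a reply back so it shows up as a memo on the device."""
    link.write(text.encode("ascii", errors="replace") + EOT)

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        while True:
            prompt = read_memo(link)             # user "sends" a memo from the PDA
            write_memo(link, query_llm(prompt))  # reply arrives as a new memo
```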

We see this type of PDA quite regularly in second-hand shops; indeed, you’ll find nearly identical devices from multiple manufacturers, sporting software such as a dictionary or a thesaurus instead. Back in the day they always seemed to be advertised in Sunday newspapers and aimed at older people. We’ve never got to the bottom of which OEM manufactured them, or indeed cracked one apart to find the inevitable black epoxy blob of a processor. If we had to place a bet though, we’d guess there’s an 8051 core in there somewhere.

Continue reading “An Awful 1990s PDA Delivers AI Wisdom”

[Image: the five Picos on two breadboards, and the results of the image convolution.]

PentaPico: A Pi Pico Cluster For Image Convolution

Here’s something fun. Our hacker [Willow Cunningham] has sent us a copy of their homework. This is their final project for the “ECE 574: Cluster Computing” course at the University of Maine, Orono.

It was enjoyable having a good look at everything in this project. The project is a “cluster” of five Raspberry Pi Pico microcontrollers: one head node acting as the leader, and four compute nodes that work on tasks. The software for both types of node is written in C. The head node is connected to a workstation via USB 1.1, allowing the system to be controlled with a Python script.
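
The write-up is the place to go for the actual control script, but a workstation-side sketch along these lines is roughly what the description implies: the Pico head node appears as a USB serial device, the script streams an image down to it and waits for the convolved result to come back. Everything here (the port name, the little width/height header, the grayscale assumption) is invented for illustration rather than lifted from [Willow]’s code.

```python
# Hypothetical workstation-side control script: push a grayscale image to the
# Pico head node over its USB serial port and read the convolved image back.
import struct
import serial  # pyserial

PORT = "/dev/ttyACM0"   # assumption: the head node's USB CDC serial port

def run_job(pixels: bytes, width: int, height: int) -> bytes:
    with serial.Serial(PORT, 115200, timeout=10) as head:
        head.write(struct.pack("<HH", width, height))  # invented 4-byte size header
        head.write(pixels)                             # raw 8-bit pixels, row-major
        return head.read(width * height)               # convolved result, same size

if __name__ == "__main__":
    w, h = 64, 64
    image = bytes((x ^ y) & 0xFF for y in range(h) for x in range(w))  # test pattern
    result = run_job(image, w, h)
    print(f"received {len(result)} bytes back from the cluster")
```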

The cluster is configured to process an embarrassingly parallel image convolution. The input image is copied to the head node over USB, and the head node then divvies it up and distributes it to the n compute nodes via I2C, one node at a time. Results are given for n = 1, 2, and 4 compute nodes.
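
To give a feel for the “embarrassingly parallel” part, here is an illustrative Python version of the split-and-convolve step: the image is cut into row bands (with a one-row halo so the kernel has neighbours at the band edges), and each band can be convolved independently. The real node firmware is written in C, and the kernel, band layout and halo handling below are our own assumptions, not [Willow]’s code.

```python
# Illustrative split of an image into row bands for n workers, plus a 3x3
# convolution on each band. The real compute nodes do this in C on the Picos.

def convolve_band(band, kernel):
    """Apply a 3x3 kernel to a band of rows; edge pixels are left untouched,
    and kernel normalisation is omitted for brevity (results are clamped)."""
    h, w = len(band), len(band[0])
    out = [row[:] for row in band]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * band[y + ky - 1][x + kx - 1]
            out[y][x] = max(0, min(255, acc))  # clamp to 8-bit range
    return out

def split_rows(image, n):
    """Split the image rows into n contiguous bands, each with a one-row halo."""
    h = len(image)
    step = (h + n - 1) // n
    bands = []
    for i in range(0, h, step):
        lo, hi = max(0, i - 1), min(h, i + step + 1)
        bands.append(image[lo:hi])
    return bands

if __name__ == "__main__":
    box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # example un-normalised box kernel
    img = [[(x * y) % 256 for x in range(16)] for y in range(16)]
    results = [convolve_band(b, box) for b in split_rows(img, 4)]
    print(len(results), "bands processed")
```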

It turns out that the work of distributing the data dwarfs the compute by three orders of magnitude. The result is that the whole system gets slower the more nodes we add. But we’re not going to hold that against anyone; this was a fascinating investigation, and we were impressed by [Willow]’s technical chops. This was a complicated project with diverse hardware and software challenges, and they’ve done a great job of making it all work, and of reporting the results in the best scientific tradition.
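
A toy model makes that scaling behaviour easy to see: only the convolution shrinks as 1/n, while the data still has to be shipped out in full, plus some per-node overhead for the one-node-at-a-time I2C transfers. If communication costs sit a thousand times above the compute, extra nodes can only make the total worse. The numbers and the per-node overhead term below are purely illustrative, not measurements from the project.

```python
# Toy scaling model: data transfer dominates the compute, so adding nodes hurts.
# All numbers are illustrative; the project reports transfer costs roughly three
# orders of magnitude above the compute cost.

def total_time(n, t_compute=1.0, t_transfer=1000.0, t_per_node=5.0):
    """Total job time with n compute nodes and serial, one-node-at-a-time transfer."""
    transfer = t_transfer + n * t_per_node  # whole image still moves, plus per-node overhead
    compute = t_compute / n                 # only the convolution parallelises
    return transfer + compute

for n in (1, 2, 4):
    print(n, "nodes:", round(total_time(n), 2))
```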

It was fun reading their journal, in which they chronicled their progress and frustrations during the project. Their final report, in IEEE format, was created using LaTeX and Overleaf; at only six pages it is an easy and interesting read.

If you’re interested in cluster tech, be sure to check out the 256-core RISC-V megacluster and a very low-cost RISC-V supercluster.