
Keebin’ With Kristina: The One With The Holey And Wholly Expensive Keyboard

An Ultimate Hacking Keyboard (UHK) with DIY rainbow keycaps.
Image by [jwr] via Reddit
The Ultimate Hacking Keyboard (UHK) line is, as the name suggests, a great choice for a lot of people. They’re each a toe-dip into the ergonomic waters with their split-ability and those beginner thumb clusters.

However, [jwr] was not completely satisfied and decided to make a custom set of keycaps. The idea was to create ‘caps without the “annoyingly abrasive texture of PBT” that are larger than average, for larger-than-average fingers. Finally, [jwr] wanted the Function row to tower over the number row a little, so these have a taller profile.

So, what are they made of? They look kind of rubbery, don’t they? They are cast from pigmented polyurethane resin. First, [jwr] designed five molds in Fusion 360, one for each row. Then it was time to CNC-machine master molds in foam tooling board. These were filled with silicone along with 3D-printed inserts, which produced silicone molds for casting keycaps four at a time in resin.


Using Audio Hardware To Drive Neopixels Super Fast

Here’s the thing about running large strings of Neopixels—also known as WS2812 addressable LEDs. You need to truck out a ton of data, and fast. There are a dozen different libraries out there to drive them already, but [Zorxx] decided to strike out with a new technique—using I2S hardware to get the job done. 


Microcontrollers traditionally use I2S interfaces to output digital audio. However, I2S also just happens to be perfect for driving tons of addressable LEDs. At the lowest level, I2S hardware is really just flipping a serial data line really fast, with a clock line and a word select line for good measure. If, instead of sound, you pipe a data stream for addressable LEDs to the I2S hardware, it will clock that data out just the same!
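The heavy lifting is in the encoding. WS2812 ones and zeroes are told apart by pulse width, so each LED bit can be expanded into three I2S bits that reproduce the right waveform on the data line: 110 for a one, 100 for a zero. Here’s a minimal sketch of that expansion in C, assuming a bit clock in the 2.4 to 2.6 MHz range; the function names and buffer layout are our illustration, not code from [Zorxx]’s project:

```c
#include <stdint.h>

/* Sketch only: expand one WS2812 data byte into three bytes of I2S
 * bitstream. At a ~2.4-2.6 MHz bit clock, each LED bit becomes three
 * wire bits: a '1' is 110 (long high pulse), a '0' is 100 (short one). */
static void ws2812_byte_to_i2s(uint8_t in, uint8_t out[3])
{
    uint32_t bits = 0;
    int i;
    for (i = 7; i >= 0; i--) {                  /* WS2812 wants MSB first */
        bits <<= 3;
        bits |= (in & (1u << i)) ? 0x6 : 0x4;   /* 110 or 100 */
    }
    out[0] = (bits >> 16) & 0xFF;
    out[1] = (bits >> 8)  & 0xFF;
    out[2] = bits & 0xFF;
}

/* Three GRB bytes per LED become nine bytes on the wire; the filled
 * buffer would then be queued to the I2S peripheral's DMA (on ESP-IDF,
 * for instance, via i2s_write()), leaving the CPU free. */
static void fill_i2s_buffer(const uint8_t *grb, int n_leds, uint8_t *dma)
{
    int i;
    for (i = 0; i < n_leds * 3; i++)
        ws2812_byte_to_i2s(grb[i], &dma[i * 3]);
}
```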

[Zorxx] figured that with an ESP32 trucking out I2S data at a rate of 2.6 megabits per second, it would be possible to update a string of 256 pixels in just 7.3 milliseconds. In other words, you could have a 16 by 16 grid updating at over 130 frames per second. Step up to 512 LEDs, and you can still run at almost 70 fps.
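Those numbers survive a back-of-the-envelope check, assuming the three-for-one bit expansion above plus a roughly 280 microsecond reset latch between frames (our assumptions, not figures quoted from the project):

```c
#include <stdio.h>

/* Sanity-check the claimed frame rates: 24 bits of color per LED,
 * expanded 3x for the wire, clocked out at 2.6 Mbit/s. */
int main(void)
{
    const double bitrate  = 2.6e6;  /* I2S bit clock, bits per second */
    const double reset_ms = 0.28;   /* assumed latch time between frames */
    int leds;

    for (leds = 256; leds <= 512; leds *= 2) {
        double wire_bits = leds * 24.0 * 3.0;
        double frame_ms  = wire_bits / bitrate * 1e3 + reset_ms;
        printf("%d LEDs: %.2f ms/frame, about %.0f fps\n",
               leds, frame_ms, 1000.0 / frame_ms);
    }
    return 0;
}
/* Prints roughly 7.4 ms / 136 fps for 256 LEDs and
 * 14.5 ms / 69 fps for 512, matching the claims above. */
```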

There are some tricks to pulling this off, but it’s nothing you can’t figure out just by looking at the spec sheets for the WS2812B and the ESP32. Or, indeed, [Zorxx]’s helpful GitHub page. We’ve featured some other unorthodox methods of driving these LEDs before, too! Meanwhile, if you’ve got your own ideas on how to datablast at ever greater speeds, don’t hesitate to let us know!

Modern AI On Vintage Hardware: Llama 2 Runs On Windows 98

[EXO Labs] demonstrated something pretty striking: a modified version of Llama 2 (a large language model) that runs on Windows 98. Why? Because when it comes to personal computing, if something can run on Windows 98, it can run on anything. More to the point: if something can run on Windows 98, then no tech company can control how you use it, no matter how large or influential they may be. More on that in a minute.

Ever wanted to run a local LLM on 25-year-old hardware? No? Well now you can, and at a respectable speed, too!

What’s it like to run an LLM on Windows 98? Aside from the struggles of finding compatible peripherals (back to PS/2 hardware!), transferring the required files (FTP over Ethernet to the rescue), and compiling the thing in the first place (some porting required), it works perhaps better than one might expect.
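As for the porting, much of it is the unglamorous business of walking modern C back to what a late-’90s compiler will accept: no mid-block declarations, no declarations inside for(), no <stdint.h>. A hypothetical before-and-after in that spirit, not lifted from the project’s actual source:

```c
/* C99-style code, as you'd write it today:
 *   for (int i = 0; i < n; i++) sum += w[i] * x[i];
 * A C89-friendly rewrite for a late-'90s compiler: */
float dot(const float *w, const float *x, long n)
{
    long  i;               /* declarations hoisted to the top of the block */
    float sum = 0.0f;
    for (i = 0; i < n; i++) {
        sum += w[i] * x[i];
    }
    return sum;
}
```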

A Windows 98 machine with a Pentium II processor and 128 MB of RAM generates a speedy 39.31 tokens per second with a 260K-parameter Llama 2 model. A much larger 15M-parameter model generates 1.03 tokens per second. Slow, but it works. Going even larger will also work, just ever slower. There’s a video on X that shows it all in action.

It’s true that modern LLMs have billions of parameters, so these models are tiny in comparison. But that doesn’t mean they can’t be useful. Models can be shockingly small and still be perfectly coherent, delivering surprisingly strong performance if their training and “job” are narrow enough, and the tools to do that for oneself are all on GitHub.

This is a good time to mention that this particular project (and its ongoing efforts) is part of a set of twelve projects by EXO Labs focused on ensuring that things like AI models can be run anywhere, by anyone, independent of tech giants aiming to hold all the strings.

And hey, if local AI and the command line are up your alley, did you know LLMs already exist as single-file, multi-platform, command-line executables?