Bionode Is Hand Truck Transformed Into Mobile Computing Lab

[Steven K. Roberts] is the original digital nomad, having designed and built mobile computing rigs for his own use since the 1980s. His latest project is Bionode, a portable computing lab built into a hand truck that can accommodate a wide spectrum of needs for a person on the go.

Far more than just a portable computer with wheels and a handle, Bionode is an integrated collection of systems with power management, a sensor suite, multiple computers, NAS for storage, networking, video production tools, and even the ability to be solar charged. [Steven] also uses a laptop, and Bionode complements it by being everything else.

If one truly wishes to be mobile, modular, and effective, then size and weight begin to matter just as much as usability. Everything in Bionode has a purpose, and it currently contains a PC with a GPU for local AI and machine learning work, a NAS with 14 TB of storage, an Ubuntu machine, a Raspberry Pi 5 running Home Assistant, another Raspberry Pi 5 for development work, a Raspberry Pi 3 running his 3D printer, and a Raspberry Pi 4 for SDR (software-defined radio) work. A smart KVM means a single keyboard, mouse, and display can be shared among machines as needed, and additional hardware in a thoughtful layout makes audio and video projects workable. Everything is integrated with sensors and Home Assistant with local AI monitoring, which [Steven] likes to think of as the unit’s nervous system.

Bionode is therefore more than just a collection of computers crammed into a hand truck; it’s a carefully selected array of hardware that provides whatever [Steven] needs.

Give it a look if you want to see what such a system looks like when it’s been designed and assembled by someone who’s “been there, done that” when it comes to mobile computing. Bionode would complement something like a mobile workshop quite nicely, something [Steven] has also built before.


Thanks [Paul] for the tip!

A screenshot of the world's first 64kB boomer shooter

QUOD Is A Quake-Like In Only 64kB

The demoscene is still alive and well, and the proof is in this truly awe-inspiring game demo by [daivuk]: a Quake-like “boomer shooter” squeezed into a Windows executable of only 64kB that he calls “QUOD”. We’ve included the full explanation video below, but before you check out all the technical details, consider playing the game. It’ll make his explanations even more impressive.

OK, what’s so impressive? Well, aside from the fact that this is a playable 3D shooter in 64kB, with multiple enemies, multiple levels, oodles of textures, running, jumping, et cetera–it’s so Quake-like that he’s using TrenchBroom to make the levels. Of course, he’s reprocessing them into a more space-efficient, optimized format. Yeah, unlike the famous .kkrieger and a lot of other demos in the 64kB space, this isn’t all procedurally generated. [daivuk] did make his own image editing program for procedurally generated textures, though. Which makes sense: as a PNG, the QUOD logo alone is probably half the size of the (compressed) executable.

The low-poly models are created in Blender, and all of them are built symmetric–having the engine mirror the meshes saves 50% of the vertex data, since Blender only needs to export half of each low-poly mesh. Just as he wrote his own image editor, he has his own bespoke model tool. This allows tiling model elements, as well as handling bones and poses to keyframe the models’ animations.
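The mirroring trick is easy to picture: store only half the vertex data and reflect it across the symmetry plane at load time. Here’s a minimal sketch in Python (the function name and data layout are hypothetical, not from [daivuk]’s actual engine):

```python
def mirror_half_mesh(half_vertices):
    """Given (x, y, z) vertices for one half of a symmetric model,
    return the full mesh with mirrored copies appended.

    Reflecting across the X axis means only half the vertex data
    needs to be stored in the executable."""
    mirrored = [(-x, y, z) for (x, y, z) in half_vertices]
    return half_vertices + mirrored

# Three stored vertices become a six-vertex mesh after mirroring.
full = mirror_half_mesh([(1.0, 0.0, 0.0), (2.0, 1.0, 0.0), (1.5, 2.0, 1.0)])
```

A real engine would also mirror normals and flip triangle winding order, but the space saving comes from exactly this kind of reconstruction at load time.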

Audio is treated similarly to textures and meshes: built up at runtime from stored data and a layered series of effects. When you realize all the sounds were put together in his sound tool from square and sine waves, it’s all the more impressive. He’s also got an old-style tracker to create the music. All of these tools output byte arrays that get embedded directly in the game code.
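Layering basic waveforms like that is simple in principle; the following hedged sketch mixes a square wave with a sine in Python (sample rate, frequencies, and gains are arbitrary illustration, not [daivuk]’s synthesis code):

```python
import math

SAMPLE_RATE = 8000  # assumed sample rate for this illustration

def sine(freq, n_samples):
    """Generate n_samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n_samples)]

def square(freq, n_samples):
    """A square wave is just the sign of a sine wave."""
    return [1.0 if s >= 0 else -1.0 for s in sine(freq, n_samples)]

def mix(a, b, gain_a=0.5, gain_b=0.5):
    """Layer two equal-length sample buffers with per-layer gain."""
    return [gain_a * x + gain_b * y for x, y in zip(a, b)]

# A 440 Hz square layered over a 220 Hz sine, 0.1 seconds long.
effect = mix(square(440, 800), sine(220, 800))
```

Store only the parameters (waveform, frequency, gains, effect chain) instead of the samples, and a sound that would be kilobytes as a WAV costs a handful of bytes.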

The video also gets into some of his optimization techniques; we like his use of a map file and analyzing it with a Python tool to find the exact size of game elements and thereby verify his optimizations. One thing he notes is that his optimizations are all for space, not for speed. Except, perhaps, for one thing: [daivuk] created a new language and virtual machine for the game, which seems downright extravagant. It actually makes sense, though, as the virtual machine can be optimized for the limits of the game, as he explains starting at about 20 minutes into the video. Apparently it saved a whole 2kB, which seems like nothing these days but actually let [daivuk] fit an extra level into his 64kB limit. Sure, it’s still bigger than Quake13k–and how did we never cover that?–but you get a lot more game, too.
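The idea of mining a linker map file for size data can be sketched like so (the map format below is invented for illustration; real GCC/MSVC map files are messier, and this is not [daivuk]’s actual tool):

```python
from collections import defaultdict

# Toy linker map: section, address, size (hex), contributing object file.
MAP_TEXT = """\
.text 0x0000 0x120 textures.obj
.text 0x0120 0x340 meshes.obj
.data 0x0460 0x040 textures.obj
"""

def sizes_by_object(map_text):
    """Sum the bytes each object file contributes to the binary,
    so size regressions can be pinned to a specific subsystem."""
    totals = defaultdict(int)
    for line in map_text.splitlines():
        section, addr, size, obj = line.split()
        totals[obj] += int(size, 16)
    return dict(totals)

print(sizes_by_object(MAP_TEXT))
```

Run before and after a change, a report like this shows exactly how many bytes an optimization bought, which is the whole game when the budget is 64kB.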

So, to recap: [daivuk] didn’t just make a game with an impressively tiny size on disk, he made the entire toolchain, and a language for it to boot. If you think this is overoptimized, check out Wolfenstein in 600 lines of AWK. Of course, in spite of the 1980s file size, this needs modern hardware to run. You can get surprising graphics performance from a fraction of that, like this ATtiny sprite engine.

Thanks to [Keith Olson] for the tip, which probably took up more than 64kB on our tips line.


NextSilicon’s Maverick-2: The Future Of High-Performance Computing?

A few months back, Sandia National Laboratories announced they had acquired a new supercomputer. It wasn’t the biggest, but in their eyes it still offered something unique. This particular supercomputer contains NextSilicon’s much-hyped Maverick-2 ‘dataflow accelerator’ chips. Targeting the high-performance computing (HPC) market, these chips are claimed to hold a 10x advantage over the best GPU designs.

NextSilicon Maverick-2 OAM-2 module. (Credit: NextSilicon)

The strategy here appears to be somewhat of a mixture between VLIW, FPGAs and Sony’s Cell architecture, with a dedicated compiler that determines the best mapping of a particular calculation across the compute elements inside the chip. Naturally, the exact details about the internals are a closely held secret by NextSilicon and its partners (like Sandia), so we basically have only the public claims and PR material to go by.

Last year, The Register covered this architecture and took a more in-depth look. What we can surmise from this is that it should perform pretty well for just about any workload, except those bound by single-threaded performance. Of course, as a dedicated accelerator it cannot do CPU things, which is where NextSilicon’s less spectacular RISC-V-based CPU comes into the picture.

What’s apparent from glancing at the product renders on the NextSilicon site is that these Maverick-2 chips have absolutely massive dies, so they’re certainly not cheap to manufacture. Whether they’ll make more of a splash than Intel’s Itanium did, or outdo NVIDIA’s brute-force approach, remains to be seen.

Cheap Writing Deck Eschews Distractions

A modern computer can be a great productivity tool. It can also be a great source of distractions. To solve that issue, [Quackieduckie] built the e-typer—a device for writing without distraction.

[Quackieduckie] refers to the device as a “low-cost e-ink typewriter”, which lays out the basic mode of operation. It consists of a 4.2-inch e-ink screen, combined with an Orange Pi Zero 2W running the Armbian operating system. It’s set up to boot straight into a document editor, so there’s no messing around with other software that could get in the way of productivity. The components are all wrapped up in a tidy 3D printed housing, which includes a foldable stand so you can prop the screen up wherever you happen to be working. [Quackieduckie] built the device to work with any USB-C keyboard, probably figuring that those eager to maximize productivity will already have the typing device of their dreams on hand. Code for the project is available on GitHub for those eager to replicate the build.
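The project’s actual boot setup is in its GitHub repo; as a general illustration of the boot-straight-into-the-editor approach, a systemd unit along these lines (the unit and binary names here are hypothetical) is one common way to drop a Linux SBC onto the console editor at power-on:

```
# /etc/systemd/system/writing-deck.service (hypothetical unit)
[Unit]
Description=Launch the distraction-free editor on the console at boot

[Service]
ExecStart=/usr/local/bin/deck-editor
StandardInput=tty
StandardOutput=tty
TTYPath=/dev/tty1
Restart=always

[Install]
WantedBy=multi-user.target
```

Enabling a unit like this (and disabling the login getty on the same TTY) means the machine never shows anything but the editor.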

We’ve featured similar builds in the past, often referred to as “writing decks.” They’re becoming increasingly popular as people look for distraction-free, ad-free tech experiences. A great example is this clamshell design with an integrated keyboard. If you’re building your own productivity aids in your home lab, don’t hesitate to notify the tipsline!

Upgrading An Old Macbook With An Old Processor

The Core Duo processor from Intel may not have been the first multi-core processor available to consumers, but it was arguably the one that brought multi-core computing to the masses. Unfortunately, the first Core Duo chips were limited to 32-bit at a time when the industry was shifting toward 64-bit. The Core 2 Duo eventually filled this gap, and [dosdude1] recently completed an upgrade he had always wanted to do, replacing the original Core Duo in a MacBook Pro with a Core 2 Duo pulled from a dead motherboard.

The upgrade does require a bit more tooling than many of us may have access to, but the process isn’t completely out of reach, and centers around desoldering the donor processor and making sure each motherboard gets heated appropriately when removing the old chip and installing the new one. These motherboards are prone to moisture ingress, which adds a pre-heating step whose omission had been the cause of [dosdude1]’s failures in previous attempts. But with the new chip cleaned up, prepared with fresh solder balls, and placed on the recipient motherboard, it was ready to solder into its new home.

Upon booting the upgraded machine, the only hiccup seemed to be that the system wasn’t correctly identifying the clock speed. A firmware update solved this problem, though, and the machine is ready for use. For those wondering why one would do something like this given the obsolete hardware, we’d note that beyond the satisfaction of doing it for its own sake, these older MacBooks are among the few machines that can run free and open firmware, and that MacBooks a decade or more old can easily make excellent Linux machines even given their hardware limitations.


An Event Badge Re-Imagined As A Cyberdeck

We’re used to handheld Linux devices of varying usefulness appearing on a regular basis, but there’s something about the one in a video from [Rootkit Labs] which sets it apart from the herd. It’s a fork of a conference badge.

The WHY2025 badge had pretty capable hardware, with an ESP32-P4, a really nice screen, and the lovely SolderParty keyboard. Here it’s been forked to become a carrier board for their previous project, the Flipper Blackhat. This is a Linux add-on for the Flipper Zero, and it seems that plenty of people wanted it in a more useful context. The result is something that looks a lot like a WHY badge, but running Linux.

It’s a great shame when badges end up lying unused after the event, and ones like the WHY2025 badge are a serious effort to make something that endures. Here, the badge endures in spirit by being forked and re-engineered, and we like it a lot. The full video is below the break.


Swissbit 2GB PC2-5300U-555

Surviving The RAM Price Squeeze With Linux In-Kernel Memory Compression

You’ve probably heard — we’re currently experiencing very high RAM prices due mostly to increased demand from AI data centers.

RAM prices have gone up as much as fourfold

If you’ve been priced out of new RAM, you’ll want to get as much value as possible out of the RAM you already have, and that’s where today’s hack comes in. If you’re on a Debian system, read about ZRam to learn how to install and configure zram-tools, which enables and manages the Linux kernel facility for compressed RAM by integrating with the swap-enabled virtual memory system. We’ve seen it done with the Raspberry Pi, and the concept is the same.
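On Debian, zram-tools reads its settings from /etc/default/zramswap once the package is installed; a minimal setup might look like this (the values below are illustrative, not a recommendation for your workload):

```
# /etc/default/zramswap (illustrative values)
ALGO=zstd      # compression algorithm handed to the kernel
PERCENT=50     # size the zram device as 50% of physical RAM
PRIORITY=100   # swap priority; higher than disk-backed swap
```

After editing, restarting the zramswap service applies the settings, and the new device shows up alongside any existing swap in swapon output.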

Ubuntu users should check out systemd-zram-generator instead, and be aware that zram might already be installed and configured by default on your Ubuntu Desktop system.
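With systemd-zram-generator the equivalent configuration lives in /etc/systemd/zram-generator.conf; a sketch of a comparable setup (again, the values are only an example):

```
# /etc/systemd/zram-generator.conf (illustrative values)
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
```

The generator creates the device and its swap unit automatically at boot, so there’s no separate service to enable.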

If you’re interested in the history of in-kernel memory compression, LWN.net has an old article covering the technology as it was gestating back in 2013: In-kernel Memory Compression. For those trying to get a grip on what has happened with RAM prices recently, a good place to track them is memory.net; swing by and you can see that a lot of RAM has gone up as much as four times in price over the last three or four months.

If you have any tips or hacks for memory compression on other platforms we would love to hear from you in the comments section!