Classy Desk Simulates Beehive Activity

Beehives are impressive structures, an example of the epic building feats that are achievable by nature’s smaller creatures. [Full Stack Woodworking] was recently building a new work desk, and decided to make this piece of furniture a glowing tribute to the glorious engineering of the bee. (Video, embedded below.)

The piece is a conventional L-shaped desk, but with a honeycomb motif inlaid into the surface itself. [Full Stack Woodworking] started by iterating on various designs with stacked hexagons made out of laser-cut plywood and Perspex, filled with epoxy. Producing enough hexagons to populate the entire desk was no mean feat, requiring a great deal of cutting, staining, and gluing—and all this before the electronics even got involved! Naturally, each cell has a custom-built PCB covered in addressable LEDs, and they’re linked with smaller linear PCBs that create “paths” for bees to move between cells.

What’s cool about the display is that it’s not just running some random RGB animations. Instead, the desk has a Raspberry Pi 5 dedicated to running a beehive simulation, where algorithmic rules determine the status (and thus color) of each hexagonal cell based on the behavior of virtual bees loading the cells with honey. It creates an organic, changing display in a way that’s rather reminiscent of Conway’s Game of Life.
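The exact rules aren’t spelled out, but the general idea is easy to picture: a hex grid of cells, a handful of virtual bees wandering around it, and a honey level per cell that maps to an LED colour. A minimal sketch of that kind of simulation (all the parameters here are made up, not pulled from the build) might look like this:

```python
import random

# Axial hex-grid neighbours (pointy-top layout).
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

# Hypothetical parameters -- the real desk's rules aren't documented in this detail.
GRID_RADIUS = 5
NUM_BEES = 8
HONEY_PER_VISIT = 0.05

cells = {(q, r): 0.0                       # honey fill level, 0.0 .. 1.0
         for q in range(-GRID_RADIUS, GRID_RADIUS + 1)
         for r in range(-GRID_RADIUS, GRID_RADIUS + 1)
         if abs(q + r) <= GRID_RADIUS}
bees = [random.choice(list(cells)) for _ in range(NUM_BEES)]

def step():
    """Advance one tick: each bee wanders to a neighbouring cell and deposits honey."""
    for i, (q, r) in enumerate(bees):
        dq, dr = random.choice(NEIGHBOURS)
        if (q + dq, r + dr) in cells:
            q, r = q + dq, r + dr
            bees[i] = (q, r)
        cells[(q, r)] = min(1.0, cells[(q, r)] + HONEY_PER_VISIT)

def cell_colour(fill):
    """Map a cell's honey level to an amber RGB value for its LEDs."""
    return (int(40 + 215 * fill), int(20 + 140 * fill), 0)

for _ in range(100):
    step()
# In the real desk, these colours would be pushed out to the addressable LEDs.
print({pos: cell_colour(f) for pos, f in list(cells.items())[:3]})
```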

It was a huge build, but the final result is impressive. We’ve featured some other great custom desks over the years too. Video after the break.

Continue reading “Classy Desk Simulates Beehive Activity”

Ask Hackaday: When Good Lithium Batteries Go Bad

Friends, I’ve gotten myself into a pickle and I need some help.

A few years back, I decided to get into solar power by building a complete PV system inside a mobile trailer. The rationale for this doesn’t matter for the current discussion, but for the curious, I wrote an article outlining the whole design and build process. Briefly, though, the system has two adjustable PV arrays mounted on the roof and side of a small cargo trailer, with an integrated solar inverter-charger and a 10-kWh LiFePO4 battery bank on the inside, along with all the usual switching and circuit protection stuff.

It’s pretty cool, if I do say so myself, and literally every word I’ve written for Hackaday since sometime in 2023 has been on a computer powered by that trailer. I must have built it pretty well, because it’s been largely hands-off since then, requiring very little maintenance. And therein lies the root of my current conundrum.

Continue reading “Ask Hackaday: When Good Lithium Batteries Go Bad”

Nanochat Lets You Build Your Own Hackable LLM

Few people know LLMs (Large Language Models) as thoroughly as [Andrej Karpathy], and luckily for us all, he shares that knowledge through useful open-source projects. His latest is nanochat, which he bills as a way to create “the best ChatGPT $100 can buy”.

What is it, exactly? nanochat is a minimal and hackable software project — encapsulated in a single speedrun.sh script — for creating a simple ChatGPT clone from scratch, including a web interface. The codebase is about 8,000 lines of clean, readable code with minimal dependencies, making every single part of the process easy to tamper with.

An accessible, end-to-end codebase for creating a simple ChatGPT clone makes every part of the process hackable.

The $100 is the cost of doing the computational grunt work of creating the model, which takes about four hours on a single 8×H100 NVIDIA GPU node. The result is a 1.9-billion-parameter micro-model, trained on some 38 billion tokens from an open dataset. This model is, as [Andrej] describes in his announcement on X, a “little ChatGPT clone you can sort of talk to, and which can write stories/poems, answer simple questions.” A walk-through of what that whole process looks like makes it as easy as possible to get started.
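If you’re curious where that price tag comes from, renting the hardware is essentially the whole bill. Assuming roughly $24/hour for an 8×H100 node (our guess for illustration, not a figure from the project), the math works out:

```python
# Back-of-the-envelope check on the "$100" figure.
# The ~$24/hour rate for an 8xH100 node is an assumption, not from the article.
node_rate_per_hour = 24.0
training_hours = 4
print(f"~${node_rate_per_hour * training_hours:.0f} of rented GPU time")  # ~$96
```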

Unsurprisingly, a mere $100 doesn’t create a meaningful competitor to modern commercial offerings. However, significant improvements can be had by scaling up the process. A $1,000 version (detailed here) is far more coherent and capable, able to solve simple math or coding problems and take multiple-choice tests.

[Andrej Karpathy]’s work lends itself well to modification and experimentation, and we’re sure this tool will be no exception. His past work includes a method of training a GPT-2 LLM using only pure C code, and years ago we saw his work on a character-based Recurrent Neural Network (mis)used to generate baroque music by cleverly representing MIDI events as text.

Don’t Believe Planck’s Constant? Measure It Yourself

We aren’t sure if [Looking Glass Universe] didn’t trust the accepted number for Planck’s constant, or just wanted the experience of measuring it herself. Either way, she took some LEDs and worked out the correct figure. Apparently, it hasn’t changed since we first measured it in 1916. But it’s always good to check.

The constant, if you need a refresher, helps explain things like why the color of light changes how the photoelectric effect manifests, and is at the root of quantum physics. LEDs are perfect for this experiment because, of course, they come in different colors. You essentially use a pot to tune down the LED until it just reaches the point where it is dark. Presuming you know the wavelength of the LED, you can estimate Planck’s constant from that and the voltage across the virtually ready-to-light LED. We might have used the potentiometer in a voltage divider configuration, but it should work either way.
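The relation being exploited is that, right at the turn-on threshold, the energy an electron picks up crossing the junction is roughly the energy of one emitted photon: eV ≈ hc/λ, so h ≈ eVλ/c. Here’s a quick sketch of that calculation using made-up but plausible numbers rather than her actual readings:

```python
# Estimate Planck's constant from an LED's turn-on voltage and wavelength.
# At threshold, e*V ~ h*c/lambda, so h ~ e*V*lambda/c.
E_CHARGE = 1.602e-19   # electron charge, C
C_LIGHT = 2.998e8      # speed of light, m/s

# Illustrative readings (not the ones from the video): a red LED at ~625 nm
# that just starts to glow around 1.9 V.
wavelength = 625e-9    # m
threshold_v = 1.9      # V

h_est = E_CHARGE * threshold_v * wavelength / C_LIGHT
print(f"h ≈ {h_est:.2e} J·s (accepted value: 6.63e-34 J·s)")
```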

The experiment showed that even a disconnected LED emits a few stray photons. But it was still possible to interpret the results. The constant is very tiny, so you’ll want your scientific calculator to do the math or, as she did, use Wolfram Alpha.

The first result was off by the alarming amount of 1 × 10⁻⁴⁰. No, that’s not alarming at all. That number is amazingly small.

This is a fairly common home physics experiment. You can do it quick, like [Looking Glass] did, or you can build something elaborate.

Continue reading “Don’t Believe Planck’s Constant? Measure It Yourself”

CoreXY 3D Printer Has A Scissor-Lift Z-axis So It Folds Down!

We don’t know about you, but one of the biggest hassles of having a 3D printer at home or in the ‘shop is the space it takes up. Wouldn’t it be useful if you could fold it down? Well, you’re in luck because over on Hackaday.io, that’s precisely what [Malte Schrader] has achieved with their Portable CoreXY 3D printer.

The typical CoreXY design you find in the wild features a moving bed that starts at the top and moves downwards away from the XY gantry as the print progresses. The CoreXY kinematics take care of positioning the hotend in the XY plane with a pair of motors and some cunning pulley drives. Go check this out if you want to read more about that. Anyway, in this case, the bed is fixed to the base with a 3-point kinematic mount (so the bed can be trammed relative to the hotend) but is otherwise vertically immobile. That bed is AC-heated, allowing for a much smaller power supply to be fitted and reducing the annoying cooling fan noise that’s all too common with high-power bed heaters.
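For reference, the usual CoreXY mapping (sign conventions vary with how the belts are routed) boils down to a sum and a difference:

```python
# Standard CoreXY belt kinematics (sign conventions vary with belt routing).
# Each motor moves both belts; X and Y come from the sum and the difference.
def xy_to_motors(dx, dy):
    """Carriage move (dx, dy) -> belt/motor moves (da, db)."""
    return dx + dy, dx - dy

def motors_to_xy(da, db):
    """Belt/motor moves (da, db) -> carriage move (dx, dy)."""
    return (da + db) / 2, (da - db) / 2

# Turning only one motor moves the carriage diagonally; equal same-direction
# motor moves give pure X, and equal-and-opposite moves give pure Y.
print(xy_to_motors(10, 0))   # (10, 10)  -> both motors, same direction
print(xy_to_motors(0, 10))   # (10, -10) -> both motors, opposite directions
```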

Both ends of the cable bundle are pivoted so it can fold flat inside the frame!

The XY gantry is mounted at each end on a pair of scissor-lift mechanisms, which are belt-driven and geared together from a single stepper motor paired with a reduction gearbox. This should hopefully resolve the X-axis tilting issues that [Malte] reported with a previous version.

The coarse tramming is handled by the bed mounts, with a hotend-mounted BLTouch further dialling it in and compensating for any bed distortion measured immediately before printing. Simple and effective.

As will be clear from the video below, the folding for storage is a natural consequence of the Z-axis mechanism, which we reckon is pretty elegant and well executed—check out those custom CNC-machined aluminium parts! The hotend side of the Bowden tube feed is mounted on a pivot, so it folds down when the Z-axis is folded flat for storage. They even added a pivot to the other end of the cable bundle / Bowden feed, so the whole bundle folds down neatly inside the frame. Nice job!

If you want a little more detail about CoreXY kinematics, check out our handy guide. But what about the H-Bot, we hear you ask? Fear not, we’re on it.


Left: old and busted. Right: New hotness.

Game Of Theseus Gets Graphics Upgrade, Force Feedback 30 Years On

Indycar Racing 2 was a good game, back in 1995; in some ways, it was the Crysis of the Clinton years, in that most mortals could not run it to its full potential when it was new. Still, that potential was surely fairly limited, as we’re talking about a DOS game from 30 years ago. Sure, it was limited, but limits are meant to be broken, and games are made to be modded. [TedMeat] has made a video showing the updates. (Embedded below.)

It turns out there was a 3D-accelerated version sold with the short-lived Rendition graphics cards. That version is what let the community upscale everything to the absurd resolutions our modern monitors are capable of. Goodbye SVGA, hello HD. Specifically, [sharangad] has created a wrapper to translate the Rendition API to modern hardware. It doesn’t sound like higher-res textures have been modded in, which makes the result all the more spectacular for graphics designed in 1995. It’s not the latest Forza, but for what it is, it impresses.

The second hack [TedMeat] discusses is a mod by [GPLaps] that pulls physics values from game memory to throw to a modern force-feedback wheel, and it shows just how good the physics was in 1995. You really can feel what’s going on: stopping a skid before it starts, for example. That’s normal these days, but for the kids playing with a keyboard in 1995, it would have been totally mind-blowing.
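We don’t know exactly how [GPLaps] hooks into the game, but the general “peek at process memory, scale it, feed it to the wheel” technique looks something like this Windows-only sketch; the PID, memory address, and slip-value interpretation are entirely hypothetical stand-ins:

```python
# Minimal Windows-only sketch of the general technique: read a float out of a
# running process's memory. The PID, address, and "tyre slip" interpretation
# are hypothetical stand-ins, not details of [GPLaps]'s actual mod.
import ctypes
import struct
from ctypes import wintypes

PROCESS_VM_READ = 0x0010
PROCESS_QUERY_INFORMATION = 0x0400

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.ReadProcessMemory.argtypes = [
    wintypes.HANDLE, wintypes.LPCVOID, wintypes.LPVOID,
    ctypes.c_size_t, ctypes.POINTER(ctypes.c_size_t),
]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

def read_float(pid: int, address: int) -> float:
    """Read a 4-byte little-endian float from another process's memory."""
    handle = kernel32.OpenProcess(
        PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        buf = ctypes.create_string_buffer(4)
        n_read = ctypes.c_size_t(0)
        if not kernel32.ReadProcessMemory(handle, address, buf, 4,
                                          ctypes.byref(n_read)):
            raise ctypes.WinError(ctypes.get_last_error())
        return struct.unpack("<f", buf.raw)[0]
    finally:
        kernel32.CloseHandle(handle)

# Hypothetical use: scale a slip value into a torque command for the wheel
# (the actual force-feedback output side is left out here).
# slip = read_float(game_pid, SLIP_ADDRESS)
# torque = max(-1.0, min(1.0, slip * GAIN))
```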

As tipster [Keith Olson] put it: “What can I say? Fans gonna fan!” — and we’re just as grateful for that fact as we are for the tipoff. If you’re in a fandom that’s hacked its way to keep old favourites alive, we’d love to hear about it: submit a tip.

Continue reading “Game Of Theseus Gets Graphics Upgrade, Force Feedback 30 Years On”

Hackaday Links: October 19, 2025

After a quiet week in the news cycle, surveillance concern Flock jumped right back in with both feet, announcing a strategic partnership with Amazon’s Ring to integrate that company’s network of doorbell cameras into one all-seeing digital panopticon. Previously, we’d covered both Flock’s “UAVs as a service” model for combating retail theft from above, as well as the somewhat grassroots effort to fight back at the company’s wide-ranging network of license plate reader cameras. The Ring deal is not quite as “in your face” as drones chasing shoplifters, but it’s perhaps a bit more alarming, as it gives U.S. law enforcement agencies easy access to the Ring Community Request program directly through the Flock software that they (probably) already use.

Continue reading “Hackaday Links: October 19, 2025”