Twelve Different Algorithms Sort Christmas Lights

Sorting algorithms are a common exercise for new programmers, and for good reason: they introduce many programming fundamentals at once, including loops and conditionals, arrays and lists, comparisons, algorithmic complexity, and the tradeoff between correctness and performance. As a fun Christmas project, [Scripsi] set out to implement twelve different sorting algorithms over twelve days, using Christmas lights as the sorting medium.

The lights in use here are strings of WS2812 addressable LEDs, with the program set up to assign a random hue value to each light in the string. From there, an RP2040-based platform steps through the array of lights and runs the day’s sorting algorithm of choice. When the algorithm is operating on an element in the array, that light’s saturation is turned all the way up, helping to show exactly what it’s doing at any given moment. When the sorting algorithm has finished, the microcontroller randomizes the lights and starts the process all over again.
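The basic mechanism is simple enough to sketch. What follows is a minimal, MicroPython-flavoured illustration of the idea rather than [Scripsi]’s actual code: it assumes the stock neopixel module on an RP2040, a 50-pixel WS2812 string on GPIO 0, and uses bubble sort as a stand-in algorithm, cranking the saturation of whichever element is currently being touched.

```python
# Illustrative sketch only -- assumes MicroPython's built-in neopixel module on an RP2040
# and a 50-pixel WS2812 string wired to GPIO 0. Not [Scripsi]'s actual implementation.
import random
from machine import Pin
import neopixel

NUM_LEDS = 50
np = neopixel.NeoPixel(Pin(0), NUM_LEDS)
hues = [random.random() for _ in range(NUM_LEDS)]  # one random hue per light

def hsv_to_rgb(h, s, v):
    # Compact HSV -> 8-bit RGB conversion (MicroPython ships without colorsys)
    i = int(h * 6) % 6
    f = h * 6 - int(h * 6)
    p, q, t = v * (1 - s), v * (1 - f * s), v * (1 - (1 - f) * s)
    r, g, b = [(v, t, p), (q, v, p), (p, v, t), (p, q, t), (t, p, v), (v, p, q)][i]
    return int(r * 255), int(g * 255), int(b * 255)

def show(active=None):
    # The element currently being operated on gets full saturation, the rest stay pastel
    for i, h in enumerate(hues):
        np[i] = hsv_to_rgb(h, 1.0 if i == active else 0.5, 0.3)
    np.write()

def bubble_sort():
    for end in range(NUM_LEDS - 1, 0, -1):
        for i in range(end):
            show(active=i)
            if hues[i] > hues[i + 1]:
                hues[i], hues[i + 1] = hues[i + 1], hues[i]

while True:
    for i in range(NUM_LEDS):
        hues[i] = random.random()  # re-randomize, then sort again
    bubble_sort()
    show()
```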

For each of the twelve days of Christmas, [Scripsi] has chosen one of their twelve favorite sorting algorithms. There are a few oddballs, like Bogosort, a guess-and-check algorithm that might never sort the lights correctly before next Christmas (although if you want to try to speed things up, you can always try an FPGA), but there are also a few old favorites and some more esoteric ones besides. It’s a great way to visualize how sorting algorithms work, learn a bit about programming fundamentals, and get into the holiday spirit as well.
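For the curious, bogosort really is as simple, and as hopeless, as it sounds: shuffle the whole array and check whether it happens to come out sorted, forever if need be. A tiny Python sketch of the idea:

```python
import random

def bogosort(values):
    # Keep shuffling until the list happens to come out sorted -- expected runtime
    # grows factorially, so a 50-light string will likely outlast Christmas.
    while any(a > b for a, b in zip(values, values[1:])):
        random.shuffle(values)
    return values

print(bogosort([3, 1, 2]))  # fine for three elements, hopeless for a whole light string
```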

How Wind Nearly Took Down Boulder NTP

NTP is one of the most interesting and important, but all too often forgotten, protocols that make the internet tick. Accurate clock synchronization is required for everything from cryptography to business and science. NTP is ultimately tied to a handful of atomic clocks, some in orbit on GPS satellites and some in laboratories, so the near-failure of one such atomic clock sparked a rather large, and nerdy, internet debate.
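For a sense of how simple the wire format is, here is a bare-bones SNTP query in Python; the server name is just an example, and unlike a real NTP client it reads only the transmit timestamp and skips the round-trip and offset corrections.

```python
# Minimal SNTP client for illustration: sends a mode-3 request and reads the server's
# transmit timestamp. The hostname below is just an example; any public NTP server works.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def sntp_time(server="pool.ntp.org", port=123, timeout=2.0):
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        reply, _ = sock.recvfrom(48)
    # Transmit timestamp seconds live in bytes 40-43 of the 48-byte reply
    ntp_seconds = struct.unpack("!I", reply[40:44])[0]
    return ntp_seconds - NTP_EPOCH_OFFSET

print(time.ctime(sntp_time()))
```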

On December 17, 2025, the Colorado Front Range experienced a massive wind storm. The National Center for Atmospheric Research in Boulder recorded gusts in excess of 100 mph (about 87 knots or 160 km/h). This storm was a real doozy, but gusts this strong are not unheard of in Boulder either; that is no small part of the reason the National Renewable Energy Laboratory (now the National Laboratory of the Rockies) has a wind turbine testing facility in the neighborhood.

Continue reading “How Wind Nearly Took Down Boulder NTP”

Only Known Copy Of UNIX V4 Recovered From Tape

UNIX version 4 is quite special on account of being the first UNIX to be written in C instead of PDP-11 assembly, but it was also considered to have been lost to the ravages of time. Joyfully, we can report that the more than fifty-year-old magnetic tape recently discovered in a University of Utah storeroom did in fact contain the UNIX v4 source code. As reported by Tom’s Hardware, [Al Kossow] of Bitsavers did the recovery by passing the raw flux data from the tape read head through the ReadTape program to reconstruct the stored data.

Since the tape was so old there was no telling how much of the data would still be intact, but fortunately it turned out that while the tape was largely empty, the data that was on it was in good nick. You can find the recovered files here, along with a README, while Archive.org hosts the multi-GB raw tape data. The recovered data includes the tape file in SimH format as well as the filesystem.

Suffice it to say that you will not run UNIX v4 on anything other than a PDP-11 system or emulated equivalent, but if you want to run its modern successors in the form of BSD Unix, you can always give FreeBSD a shot.

Cloudflare’s Outages And Why Cool Kids Test On Prod

Every system administrator worth their salt knows that the right way to coax changes to network infrastructure onto a production network is to first validate them on a Staging network: a replica of the Production (Prod) network. Meanwhile, all the developers working on upcoming changes are safely kept in their own padded safety rooms in the form of Test, Dev, and similar environments, where Test tends to be the pre-staging phase and Dev is for new-and-breaking changes. This is what anyone should use, and yet, judging by their latest outage, Cloudflare apparently deems itself too cool for such a rational, time-tested approach.

In their post-mortem on the December 5th outage, they describe how they started rolling out a change to React Server Components (RSC) to allow for a 1 MB buffer, as part of addressing the critical CVE-2025-55182 in RSC. During this roll-out on Prod, it was discovered that a testing tool didn’t support the increased buffer size, and it was decided to disable that tool globally, bypassing the gradual roll-out mechanism.

This follows on from the recent implosion at Cloudflare when their brand-new, Rust-based FL2 proxy keeled over after encountering a corrupted input file. This time, disabling the testing tool created a condition in the original Lua-based FL1 proxy where a nil value was encountered, after which requests through this proxy began to fail with HTTP 500 errors. The one saving grace here is that the issue was detected and corrected fairly quickly, unlike when the FL2 proxy fell over due to another issue elsewhere in the network and it took much longer to diagnose and fix.
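The FL1 code in question is Lua, but the failure mode translates to just about any language: a request path that assumes a configuration entry always exists will blow up the moment a global kill switch removes it. Here is a contrived Python sketch of that pattern; none of the names below are Cloudflare’s actual code or configuration.

```python
# Contrived illustration of the failure pattern described in the post-mortem,
# not Cloudflare's actual code (which is Lua). All names here are made up.
FEATURE_CONFIG = {
    "buffer_test_tool": {"enabled": True, "buffer_bytes": 1_000_000},
}

def handle_request(path: str) -> int:
    feature = FEATURE_CONFIG.get("buffer_test_tool")  # becomes None once globally disabled
    try:
        # Happy path dereferences the entry without a None check, mirroring the
        # nil-value condition that turned requests into HTTP 500 errors.
        buffer_bytes = feature["buffer_bytes"]
        return 200 if len(path) <= buffer_bytes else 413
    except TypeError:  # 'NoneType' object is not subscriptable
        return 500

# The "quick fix": globally remove the tool, bypassing the gradual roll-out.
FEATURE_CONFIG.pop("buffer_test_tool")
print(handle_request("/"))  # -> 500
```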

Aside from Cloudflare clearly having systemic issues with actually testing code and validating configurations prior to ‘testing’ on Prod, this ought to serve as a major warning to anyone else who feels that a ‘quick deployment on Prod’ isn’t such a big deal. Many of us have dealt with companies where testing and development happened on Staging, and the real staging on Prod. Even if it’s management-enforced, that doesn’t help much once stuff catches on fire and angry customers start lighting up the phone queue.

Libxml2 Narrowly Avoids Becoming Unmaintained

In an excellent example of one of the most overused XKCD images, the libxml2 library has for a little while been without a maintainer, with [Nick Wellnhofer] making good on his plan to step down by the end of the year.

Modern-day infrastructure, as visualized by XKCD. (Credit: Randall Munroe)

While this might not sound like a big deal, the real scope of the problem is rather profound. Not only is libxml2 part of GNOME, it’s also used as a dependency by a huge number of projects, including web browsers and just about anything that processes XML or XSLT. Not having a maintainer around in the event that a fresh, high-risk CVE pops up would obviously be less than desirable.

As for why [Nick] stepped down, it’s a long story. It starts in the early 2000s, when the original author [Daniel Veillard] decided he no longer had time for the project and left [Nick] in charge. It should be said that both of them worked on the project as volunteers, for no financial compensation. This was also when large companies began to use projects like libxml2 in their software, and were happy to send in bug reports. Beyond a single donation from Google, it was effectively unpaid work that required a lot of time spent researching and processing the potential security flaws that were sent in.

Of note is that when such a security report comes in, the expectation is that you, as a volunteer software developer, drop everything you’re working on and figure out the cause, the fix, and a patched-by date, alongside filing a CVE. This rather than being sent a merge request or similar with an accompanying test case. These kinds of cases seem to have played a major role in [Nick] burning out on maintaining both libxml2 and libxslt.

Fortunately for the project, two new developers have stepped up to take over as maintainers, but it should be obvious that such churn is not a good sign. It also highlights the central problem with the conflicting expectations placed on open source software: that it be both completely free in a monetary sense and unburdened by critical bugs. This is unfortunately an issue that doesn’t seem to have an easy solution, with approaches like software bounties mostly resulting in headaches.

A Heavily Modified Rivian Attempts The Cannonball Run

There are few things more American than driving a car really fast in a straight line. Occasionally, the cars will make a few left turns, but otherwise, this is the pinnacle of American motorsport. And there’s no longer, straighter line than that from New York to Los Angeles, a time trial of sorts called the Cannonball Run, where drivers compete (in an extra-legal fashion) to see who can drive the fastest between these two cities. Generally, the cars are heavily modified with huge fuel tanks and a large amount of electronics to alert the drivers to the presence of law enforcement, but until now, no one has tried this race with an EV specifically modified for this task.

The vehicle used for this trial was a Rivian electric truck, chosen for a number of reasons. Primarily, [Ryan], the project’s mastermind, needed something that could hold a significant amount of extra batteries. The truck also runs software that makes it much more accepting of and capable of using an extra battery pack than other models. The extra batteries are also from Rivians that were scrapped after crash tests. The team disassembled two of these packs to cobble together a custom pack that fits in the bed of the truck (with the tonneau closed), which more than doubles the energy-carrying capacity of the truck.

Of course, for a time trial like this, an EV’s main weakness is charging time. [Ryan] and his team figured out a way to charge the truck’s main battery at one charging stall while charging the battery in the bed at a second stall, which adds up to about half a megawatt of power draw when it’s all working properly, minimizing charging time while maximizing energy intake. The other major factor in fast-charging the bed battery was cooling. Rather than try to tie this system in with the truck’s, the team realized that an ice water bath during the charge cycle would work well enough, as long as a lead support vehicle was ready at each charging stop with bags of ice on hand.
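As a rough sanity check on the numbers (the pack capacities below are assumptions for illustration, not figures from [Ryan]’s build):

```python
# Back-of-envelope charging estimate. Pack sizes are assumed values for illustration;
# real charging also tapers well below peak power as the cells fill up.
stock_pack_kwh = 135       # assumed capacity of the stock Rivian pack
aux_pack_kwh = 135         # assumed auxiliary bed pack, roughly doubling total capacity
combined_power_kw = 500    # ~0.5 MW across the two charging stalls, per the write-up

total_kwh = stock_pack_kwh + aux_pack_kwh
minutes = total_kwh / combined_power_kw * 60
print(f"{total_kwh} kWh at {combined_power_kw} kW ~= {minutes:.0f} minutes per full stop")
# -> 270 kWh at 500 kW ~= 32 minutes per stop, in the ideal no-taper case
```

Even in that best case, it’s clear why minimizing the number and length of charging stops dominates the strategy for an EV Cannonball attempt.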

Although the weather and a few issues with the double-charging system stopped the team from completing this run, they hope to make a second attempt and finish it very soon. They should be able to smash the EV record, currently held by an unmodified Porsche, thanks to these modifications. In the meantime, though, there are plenty of other uses for EV batteries from wrecked vehicles that go beyond simple transportation.

Continue reading “A Heavily Modified Rivian Attempts The Cannonball Run”

Molecular beam epitaxy system Veeco Gen II at the FZU – Institute of Physics of the Czech Academy of Sciences. The system is designed for growth of monocrystalline semiconductors, semiconducting heterostructures, materials for spintronics and other compound material systems containing Al, Ga, As, P, Mn, Cu, Si and C.

Germanium Semiconductor Made Superconductor By Gallium Doping

Over on ScienceDaily we learn that an international team of scientists has turned the common semiconductor germanium into a superconductor.

Researchers have been able to make the semiconductor germanium superconductive for the first time by incorporating gallium into its crystal lattice through the process of molecular-beam epitaxy (MBE). MBE is the same process used in the manufacture of semiconductor devices such as diodes and MOSFETs, and it involves carefully growing a crystal lattice in layers atop a substrate.

When the germanium is doped with gallium, the crystalline structure, though weakened, is preserved. This allows the structure to become superconducting when its temperature is reduced to 3.5 kelvin. Read all about it in the team’s paper here (PDF).

It is of course wonderful that our materials science capabilities continue to advance, but the breakthrough we’re really looking forward to is a room-temperature superconductor, and we’re not there yet. If you’re interested in progress in superconductors, you might like to read about Floquet Majorana fermions, which we covered earlier this year.