UNIX version 4 is quite special on account of being the first UNIX to be written in C instead of PDP-11 assembly, but it was also considered to have been lost to the ravages of time. Joyfully, we can report that the more than fifty-year-old magnetic tape recently discovered in a University of Utah storeroom did in fact contain the UNIX v4 source code. As reported by Tom’s Hardware, [Al Kossow] of Bitsavers did the recovery by passing the raw flux data from the tape read head through the ReadTape program to reconstruct the stored data.
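If you’re curious what ‘passing raw flux data through a program’ actually entails, flux-level recovery boils down to measuring the time between magnetic transitions and mapping those intervals back onto the original encoding. Here is a minimal Python sketch of the idea; the bit-cell period, tolerance, and FM-style scheme are illustrative assumptions, not the actual logic inside ReadTape.

```python
# Minimal sketch of flux-level tape decoding. Assumptions for
# illustration only: a fixed bit-cell period and a simple FM-style
# encoding where every cell starts with a clock transition and a '1'
# adds an extra transition mid-cell. ReadTape's real logic differs.

CELL = 1.0         # nominal bit-cell duration (arbitrary time units)
TOLERANCE = 0.2    # how far off an interval may be and still classify

def decode_fm(timestamps):
    """Turn a list of flux-transition timestamps into bits."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bits = []
    i = 0
    while i < len(gaps):
        if abs(gaps[i] - CELL) <= TOLERANCE:
            bits.append(0)       # clock-to-clock gap: empty cell = 0
            i += 1
        elif abs(gaps[i] - CELL / 2) <= TOLERANCE and i + 1 < len(gaps):
            bits.append(1)       # two half-cell gaps: mid-cell flip = 1
            i += 2
        else:
            i += 1               # dropout or noise: skip and resync
    return bits

# One empty cell (a full-period gap), then one cell with a mid-cell flip:
print(decode_fm([0.0, 1.0, 1.5, 2.0]))   # -> [0, 1]
```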
Since the tape was so old there was no telling how much of the data would still be intact, but fortunately it turned out that not only was the tape largely empty, but the data that was on it was in good nick. You can find the recovered files here, along with a README, with Archive.org hosting the multi-GB raw tape data. The recovered data includes the tape file in SimH format as well as the extracted filesystem contents.
Suffice it to say that you will not run UNIX v4 on anything other than a PDP-11 system or emulated equivalent, but if you want to run its modern successors in the form of BSD Unix, you can always give FreeBSD a shot.
Every system administrator worth their salt knows that the right way to coax changes to network infrastructure onto a production network is to first validate them on a Staging network: a replica of the Production (Prod) network. Meanwhile all the developers working on upcoming changes are safely kept in padded safety rooms of their own in the form of Test, Dev and similar, where Test tends to be the pre-staging phase and Dev is for new-and-breaking changes. This is the approach anyone should use, and yet, judging by their latest outage, Cloudflare apparently deems itself too cool for such a rational, time-tested workflow.
In their post-mortem on the December 5th outage, they describe how they began rolling out a change to React Server Components (RSC) to allow a 1 MB buffer to be used as part of addressing the critical CVE-2025-55182 in RSC. During this roll-out on Prod it was discovered that a testing tool didn’t support the increased buffer size, and it was decided to disable the tool globally, bypassing the gradual roll-out mechanism.
This follows on the heels of the recent implosion at Cloudflare, when their brand-new, Rust-based FL2 proxy keeled over after encountering a corrupted input file. This time, disabling the testing tool created a condition in the original Lua-based FL1 proxy where a NIL value was encountered, after which requests through this proxy began to fail with HTTP 500 errors. The one saving grace is that the issue was detected and corrected fairly quickly, unlike when the FL2 proxy fell over due to another issue elsewhere in the network and diagnosing and fixing it took much longer.
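As a quick illustration of the failure mode, here is a Python analog: Lua’s NIL behaves much like Python’s None, and code that assumes a feature’s configuration is always present blows up the moment a kill switch blanks it out. All names below are hypothetical, not Cloudflare’s actual proxy code.

```python
# Python analog of the NIL-value failure described above. All names
# are hypothetical; this is not Cloudflare's actual proxy code.

features = {"bot_scoring": {"buffer_size": 1_048_576}}  # the 1 MB buffer

def disable_feature(name):
    features[name] = None       # global kill switch, skipping gradual roll-out

def handle_request_naive(request):
    cfg = features["bot_scoring"]
    return f"{request}: limit {cfg['buffer_size']}"   # TypeError if cfg is
                                                      # None -> an HTTP 500

def handle_request_defensive(request):
    cfg = features.get("bot_scoring")
    if cfg is None:             # treat "disabled" as a first-class state
        return f"{request}: bot scoring disabled"
    return f"{request}: limit {cfg['buffer_size']}"

disable_feature("bot_scoring")
print(handle_request_defensive("GET /"))   # survives the kill switch
# handle_request_naive("GET /") would raise, i.e. a 500 for every request
```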
Aside from Cloudflare clearly having systemic issues with actually testing code and validating configurations prior to ‘testing’ on Prod, this ought to serve as a major warning to anyone else who feels that a ‘quick deployment on Prod’ isn’t such a big deal. Many of us have dealt with companies where testing and development happened on Staging, and the real staging on Prod. Even if it’s management-enforced, that doesn’t help much once stuff catches on fire and angry customers start lighting up the phone queue.
In an excellent example of one of the most overused XKCD images, the libxml2 library has been without a maintainer for a little while now, with [Nick Wellnhofer] making good on his plan to step down by the end of the year.
Modern-day infrastructure, as visualized by XKCD. (Credit: Randall Munroe)
While this might not sound like a big deal, the real scope of the problem is rather profound. Not only is libxml2 part of GNOME, it’s also used as a dependency by a huge number of projects, including web browsers and just about anything that processes XML or XSLT. Not having a maintainer around in the event that a fresh, high-risk CVE pops up would obviously be less than desirable.
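To get a sense of how close to the surface these libraries sit, consider that Python’s lxml package is a binding over libxml2 and libxslt, so even a few lines of everyday XML handling end up executing the C code in question. A quick sketch, assuming lxml is installed:

```python
# Everyday XML handling that bottoms out in the libraries in question:
# Python's lxml (pip install lxml) binds libxml2 for parsing and
# libxslt for transforms, so both calls below run their C code.
from lxml import etree

doc = etree.fromstring(b"<log><entry level='warn'>disk full</entry></log>")
print(doc.findtext("entry"))    # -> disk full   (parsed by libxml2)

transform = etree.XSLT(etree.fromstring(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html><body><xsl:value-of select="/log/entry"/></body></html>
  </xsl:template>
</xsl:stylesheet>"""))
print(str(transform(doc)))      # HTML output, produced by libxslt
```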
As for why [Nick] stepped down, it’s a long story. It starts in the early 2000s, when the original author [Daniel Veillard] decided he no longer had time for the project and left [Nick] in charge. It should be said here that both of them worked on the project as volunteers, for no financial compensation. This was around the time that large companies began to use projects like libxml2 in their software, and were happy to send bug reports. Beyond a single Google donation, it was effectively unpaid work that required a lot of time spent researching and processing potential security flaws that were sent in.
Of note is that when such a security report comes in, the expectation is that you, as a volunteer software developer, drop everything you’re working on and figure out the cause, the fix, and a patched-by date, alongside filing a CVE. This rather than being sent a merge request or similar with an accompanying test case. Obviously, this kind of case seems to have played a major role in [Nick] burning out on maintaining both libxml2 and libxslt.
Fortunately for the project, two new developers have stepped up to take over as maintainers, but it should be obvious that such churn is not a good sign. It also highlights the central problem of the conflicting expectations placed on open source software: that it be both totally free in a monetary sense and unburdened by critical bugs. This is unfortunately an issue that doesn’t seem to have an easy solution, with e.g. software bounties resulting mostly in headaches.
There are few things more American than driving a car really fast in a straight line. Occasionally, the cars will make a few left turns, but otherwise, this is the pinnacle of American motorsport. And there’s no longer, straighter line than that from New York to Los Angeles, a time trial of sorts called the Cannonball Run, where drivers compete (in an extra-legal fashion) to see who can drive the fastest between these two cities. Generally, the cars are heavily modified with huge fuel tanks and a large amount of electronics to alert the drivers to the presence of law enforcement, but until now, no one has tried this race with an EV specifically modified for this task.
The vehicle used for this trial was a Rivian electric truck, chosen for a number of reasons. Primarily, [Ryan], the project’s mastermind, needed something that could hold a significant amount of extra batteries. The truck also runs software that makes it much more accepting of and capable of using an extra battery pack than other models. The extra batteries are also from Rivians that were scrapped after crash tests. The team disassembled two of these packs to cobble together a custom pack that fits in the bed of the truck (with the tonneau closed), which more than doubles the energy-carrying capacity of the truck.
Of course, for a time trial like this, an EV’s main weakness is going to be charging times. [Ryan] and his team figured out a way to charge the truck’s main battery at one charging stall while charging the battery in the bed at a second stall, which combines for about half a megawatt of power consumption when it’s all working properly, minimizing charging time while maximizing energy intake. The other major factor in fast-charging the battery in the bed was cooling, and rather than try to tie this system in with the truck’s, the team realized that an ice-water bath during the charge cycle would work well enough, as long as a lead support vehicle was ready to go at each charging stop with bags of ice on hand.
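Some back-of-the-envelope numbers show why the dual-stall trick matters. The pack capacities and per-stall rating below are assumptions for illustration, not figures from [Ryan]’s build:

```python
# Back-of-the-envelope charging math. Pack sizes and the per-stall
# rating are assumptions for illustration, not figures from the build.

main_pack_kwh = 135      # assumed stock Rivian pack capacity
bed_pack_kwh = 150       # assumed: "more than doubles" total capacity
stall_kw = 250           # typical DC fast-charge stall rating

total_kwh = main_pack_kwh + bed_pack_kwh
combined_kw = 2 * stall_kw           # two stalls at once: ~0.5 MW

print(f"one stall:  {total_kwh / stall_kw:.2f} h per full charge")
print(f"two stalls: {total_kwh / combined_kw:.2f} h per full charge")
# Roughly halving the time at every stop is what keeps a record pace alive.
```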
Researchers have been able to make the semiconductor germanium superconductive for the first time by incorporating gallium into its crystal lattice through the process of molecular-beam epitaxy (MBE). MBE is the same process that is used in the manufacture of semiconductor devices such as diodes and MOSFETs, and it involves carefully growing a crystal lattice layer by layer atop a substrate.
When the germanium is doped with gallium, the crystalline structure, though weakened, is preserved. This allows the structure to become superconducting when its temperature is reduced to 3.5 K. Read all about it in the team’s paper here (PDF).
It is of course wonderful that our material science capabilities continue to advance, but the breakthrough we’re really looking forward to is room-temperature superconductors, and we’re not there yet. If you’re interested in progress in superconductors you might like to read about Floquet Majorana Fermions which we covered earlier this year.
Admit it or not, you probably have a teddy bear somewhere in your past that you were — or maybe are — fond of. Not to disparage your bear, but we think Bradfield might have had a bigger adventure than yours has. Bradfield was launched in November on a high-altitude balloon by Year 7 and 8 students at Walhampton School in the UK in connection with Southampton University. Dressed in a school uniform, he was supposed to ride to near space, but ran into some turbulence. The BBC reported that poor Bradfield couldn’t hold on any longer and fell from around 17 miles up. The poor bear looked fairly calm for being so high up.
A camera recorded the unfortunate stuffed animal’s plight. Apparently, a companion plushie, Bill the Badger (the Badger being the Southampton mascot), successfully completed the journey, returning to Earth with a parachute.
Back in March, a small aircraft in the UK lost engine power while coming in for a landing and crashed. The aircraft was a total loss, but thankfully, the pilot suffered only minor injuries. According to the recently released report by the Air Accidents Investigation Branch, we now know a failed 3D printed part is to blame.
The part in question is a plastic air induction elbow — a curved duct that forms part of the engine’s air intake system. The collapsed part you see in the image above had an air filter attached to its front (towards the left in the image), which had detached and fallen off. Heat from the engine caused the part to soften and collapse, which in turn greatly reduced intake airflow, and therefore available power.
Serious injury was avoided, but the aircraft was destroyed.
While the cause of the incident is evident enough, there are still some unknowns regarding the part itself. The fact that it was 3D printed isn’t an issue. Additive manufacturing is used effectively in the aviation industry all the time, and it seems the owner of the aircraft purchased the part at an airshow in the USA with no reason to believe anything was awry. So what happened?
The part in question is normally made from laminated fiberglass and epoxy, with a glass transition temperature of 84 °C. The glass transition is the temperature at which a material begins to soften, and it is usually far below the material’s actual melting point.
When a part is heated to or beyond its glass transition, it doesn’t melt, but it is no longer “solid” in the normal sense, and may not even be able to support its own weight. It’s the reason some folks pack parts in powdered salt to support them before annealing.
The printed part the owner purchased and installed was understood to be made from CF-ABS, or carbon-fiber-filled ABS. ABS has a glass transition temperature of around 100 °C, which should have been plenty for this application. However, the investigation tested two samples taken from the failed part and measured glass transition temperatures of 52.8 °C and 54.0 °C, respectively. That’s a far cry from what was expected, and it led to the part failing from the heat of the engine.
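To put those numbers side by side, here is a quick sketch comparing each glass transition temperature against a plausible sustained temperature near the intake elbow; the 70 °C operating figure is an assumption for illustration, as the report only gives the Tg measurements:

```python
# Comparing each glass transition temperature (Tg) against an assumed
# sustained temperature near the intake elbow. The 70 °C operating
# figure is illustrative; the report only provides the Tg measurements.

OPERATING_C = 70.0   # assumed in-service temperature near the engine

materials = {
    "fiberglass/epoxy (spec)":  84.0,
    "ABS (as advertised)":     100.0,
    "failed part, sample 1":    52.8,
    "failed part, sample 2":    54.0,
}

for name, tg in materials.items():
    margin = tg - OPERATING_C
    verdict = "OK" if margin > 0 else "softens in service"
    print(f"{name:25s} Tg {tg:5.1f} °C  margin {margin:+6.1f} °C  {verdict}")
```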
The actual composition of the part in question has not been confirmed, but it sure seems likely that whatever it was made from, it wasn’t ABS. The Light Aircraft Association (LAA) plans to circulate an alert to inspectors regarding 3D printed parts and the possibility that they aren’t made from what they claim to be.