KiDoom Brings Classic Shooter To KiCad

As the saying goes: if it has a processor and a display, it can run DOOM. The corollary here is that if some software displays things, someone will figure out a way to make it render the iconic shooter. Case in point: KiDoom by [Mike Ayles], which happily renders DOOM in KiCad at a sedate 10 to 25 frames per second as you blast away at your PCB routing demons.

Obviously, the game isn’t running directly in KiCad, but it does use the doomgeneric DOOM engine in a separate process, with KiCad’s PCB editor handling the rendering. As noted by [Mike], he could have used a Python version of DOOM to target KiCad’s Python API, but that’s left as an exercise for the reader.

Rather than having the engine render directly to a display, [Mike] wrote code to extract the positions of sprites and wall segments, which are then sent to KiCad via its Python interface, updating the view and refreshing the ‘PCB’. Controls are as usual, though you’ll be looking at QFP-64 package footprints for enemies, SOIC-8 for decorations and SOT-23-3 packages for health, ammo and keys.
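The mapping itself is easy to picture. Here is a minimal sketch; none of these names come from [Mike]'s actual code, which drives KiCad's pcbnew Python API. This just models the footprint mapping and coordinate scaling described above:

```python
# Hypothetical sketch of the bridge's mapping step: map DOOM entity
# classes to the footprints named in the article and scale game map
# coordinates down to board millimetres.

# Footprint choices as described: QFP-64 for enemies, SOIC-8 for
# decorations, SOT-23-3 for health/ammo/key pickups.
FOOTPRINT_FOR = {
    "enemy": "QFP-64",
    "decoration": "SOIC-8",
    "health": "SOT-23-3",
    "ammo": "SOT-23-3",
    "key": "SOT-23-3",
}

MAP_UNITS_PER_MM = 64  # assumed scale factor from DOOM map units to mm


def to_board_mm(x: int, y: int) -> tuple[float, float]:
    """Scale DOOM map coordinates down to PCB millimetres."""
    return x / MAP_UNITS_PER_MM, y / MAP_UNITS_PER_MM


def frame_to_placements(entities):
    """Turn one frame's entity list of (kind, x, y) tuples into
    (footprint, x_mm, y_mm) placements; unmapped kinds are skipped."""
    return [
        (FOOTPRINT_FOR[kind], *to_board_mm(x, y))
        for kind, x, y in entities
        if kind in FOOTPRINT_FOR
    ]
```

On the KiCad side, a companion script would then place footprints at those coordinates and trigger a view refresh once per frame.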

If you’re itching to give it a try, the GitHub project can be found right here. Maybe it’ll bring some relief after a particularly frustrating PCB routing session.

Boosting Antihydrogen Production Using Beryllium Ions

Antihydrogen makes an ideal study subject for deciphering the secrets of fundamental physics, as it is the simplest antimatter atom. However, keeping it from casually annihilating itself along with some matter hasn’t gotten much easier since it was first produced in 1995. Recently, ALPHA researchers at CERN’s Antimatter Factory announced that they managed to produce and trap no fewer than 15,000 antihydrogen atoms in less than seven hours using a new beryllium-enhanced trap, an eight-fold increase compared to previous methods.
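A quick back-of-the-envelope calculation puts those numbers in perspective (assuming the full seven hours):

```python
# Headline numbers: 15,000 trapped antihydrogen atoms in (just under)
# seven hours, an eight-fold improvement over previous methods.
atoms = 15_000
hours = 7

rate_per_minute = atoms / (hours * 60)  # roughly 36 atoms per minute
previous_yield = atoms / 8              # the same run would previously
                                        # have trapped ~1,875 atoms
```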

To produce an antihydrogen atom from a positron and an antiproton, it is not enough to simply trap the components and the resulting atoms in an electromagnetic field; they must also be cooled to the point where they’re effectively stationary. This is what has made adding more than one such atom to a trap a tedious process ever since the first successful capture in 2017.

In the open access paper in Nature Communications by [R. Akbari] et al., the process is described, starting with the merging of antiprotons from the CERN Antiproton Decelerator with positrons sourced from the radioactive decay of sodium-22 (β+ decay). The typical Penning-Malmberg trap is used, but laser-cooled beryllium ions (Be+) are added to provide sympathetic cooling during the synthesis step.

Together with an increased availability of positrons, this is what achieved the eight-fold increase in antihydrogen production. The researchers speculate that sympathetic cooling is more efficient at maintaining a constant temperature than alternative cooling methods, which allows for the increased rate of production.

Unusual Circuits In The Intel 386’s Standard Cell Logic

Intel’s 386 CPU is notable for being the company’s first x86 CPU to use so-called standard cell logic, which replaced the laying out of individual transistors with the wiring up of standardized functional blocks. This way you only have to define specific gate types, latches and so on, after which a description of these blocks can be parsed and assembled by a computer into the elements of a functioning application-specific integrated circuit (ASIC). This is standard procedure today, with register-transfer level (RTL) descriptions being placed and routed for either an FPGA or ASIC target.

That said, [Ken Shirriff] found a few surprises in the 386’s die, some of which threw him for a loop. An intrinsic part of standard cells is that they’re arranged in rows and columns, with data channels between them where signal paths can be routed. The surprise here was finding a stray PMOS transistor right in the midst of one such data channel, which [Ken] speculates is a bug fix for one of the multiplexers. Back then regenerating the layout would have been rather expensive, so a manual fix like this would have made perfect sense. Consider it a bodge wire for ASICs.

Another oddity was an inverter that wasn’t an inverter: it turned out to be two separate NMOS and PMOS transistors that looked to be wired up as an inverter, but seemed to actually be there as part of a multiplexer. As it turns out, in these die teardowns it’s sometimes hard to determine whether transistors are connected, or whether an apparent gap between them is real or just an artifact of the lighting or the etching process.

Testing The Survivability Of Moss In Space

The cool part about science is that you can ask questions like what happens if you stick some moss spores on the outside of the International Space Station, and then get funding for answering said question. This was roughly the scope of the experiment that [Chang-hyun Maeng] and colleagues ran back in 2022, with their findings reported in iScience.

The moss specimen used was Physcomitrium patens, a very common model organism. After Earth-based experiments had found that its spores are the most resilient stage, these were transported to the ISS, where they were placed in the exposure unit of the Kibo module. Three different exposure scenarios were attempted for the spores, with all exposed to space, but one set kept in the dark, another protected from UV, and a third set exposed to the healthy goodness of the all-natural UV that space in LEO has to offer.

After the nine-month exposure period, the spores were transported back to Earth, where they were allowed to develop into mature P. patens moss. Here it was found that only the spores which had been exposed to significant UV radiation – including UV-C unfiltered by the Earth’s atmosphere – saw a significant reduction in viability. Yet even after nine months of basking in UV-C, these still had a germination rate of 86%, which raises fascinating follow-up questions regarding their survival mechanisms when exposed to UV-C, as well as to a deep vacuum, freezing temperatures and so on.

Deep Fission Wants To Put Nuclear Reactors Deep Underground

Today’s pressurized water reactors (PWRs) are marvels of nuclear fission technology that enable gigawatt-scale power stations in a very compact space. Though they are extremely safe, with only the TMI-2 accident releasing a negligible amount of radioactive isotopes into the environment per the NRC, the company Deep Fission reckons that they can make PWRs even safer by stuffing them into a 1 mile (1.6 km) deep borehole.

Their proposed DB-PWR design is currently in pre-application review at the NRC where their whitepaper and 2025-era regulatory engagement plan can be found as well. It appears that this year they renamed the reactor to Deep Fission Borehole Reactor 1 (DFBR-1). In each 30″ (76.2 cm) borehole a single 45 MWt DFBR-1 microreactor will be installed, with most of the primary loop contained within the reactor module.

As for the rationale for all of this: at the suggested depth, the hydrostatic pressure is equivalent to that inside the PWR, with a column of water between it and the surface that is claimed to provide a great deal of safety, negating the need for a concrete containment structure and similar PWR safety features. Of course, with the steam generator located at the bottom of the borehole, the steam has to be brought all the way up to the surface to generate a projected 15 MWe via the steam turbine. There are also sampling tubes travelling all the way down to the primary loop, in addition to ropes to haul the whole assembly back up for replacing the standard LEU PWR fuel rods.
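That pressure claim is easy to sanity-check with the hydrostatic formula P = ρgh; a rough estimate, ignoring the temperature and density profile of the water column:

```python
# Hydrostatic pressure at the bottom of a 1 mile (~1609 m) water column,
# compared against a typical PWR primary loop pressure of ~15.5 MPa.
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2
DEPTH = 1609.0      # m

pressure_mpa = RHO_WATER * G * DEPTH / 1e6  # ~15.8 MPa: PWR territory

# The projected 15 MWe from a 45 MWt core also implies a thermal
# efficiency of about one third, typical for a steam cycle.
efficiency = 15 / 45
```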

Whether this level of outside-the-box thinking is genius or absolutely daft remains to be seen, with it so far making inroads in the DoE’s advanced reactor program. The company targets having its first reactor online by 2026. Among its competition are projects like TerraPower’s Natrium, which is already under construction, offers much more power per reactor, and also provides built-in grid-level storage.

One thing is definitely for certain, and that is that the commercial power sector in the US has stopped being mind-numbingly boring.

RavynOS: Open Source MacOS With Same BSD Pedigree

That macOS (formerly OS X) has BSD roots is a well-known fact, with its predecessor NeXTSTEP and its XNU kernel derived from 4.3BSD. Subsequent releases of OS X/macOS then proceeded to happily copy more bits from 4.4BSD, FreeBSD and other BSDs.

In that respect the thing that makes macOS unique compared to other BSDs is its user interface, which is what the open source ravynOS seeks to address. By taking FreeBSD as its core, and crafting a macOS-like UI on top, it intends to provide the Mac UI experience without locking the user into the Apple ecosystem.

Although FreeBSD already has the ability to use the same desktop environments as Linux, there are quite a few people who prefer the Apple UX. As noted in the project FAQ, one of the goals is also to become compatible with macOS applications, while retaining support for FreeBSD applications, and for Linux applications via the FreeBSD binary compatibility layer.
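For reference, the Linux binary compatibility layer mentioned here is stock FreeBSD functionality. On a plain FreeBSD 14.x system it is enabled roughly like this (the userland package name varies by release, and ravynOS may well preconfigure this differently):

```shell
# Enable the Linux binary compatibility layer ("linuxulator") at boot
sysrc linux_enable="YES"

# Load the kernel modules now, without rebooting
service linux start

# Install a Linux userland base (CentOS 7-based here; newer
# Rocky Linux-based packages also exist in the ports tree)
pkg install linux_base-c7
```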

If this sounds good to you, then it should be noted that ravynOS is still in pre-release, with the recently released “Hyperpop Hyena” 0.6.1 available for download and your perusal. System requirements include UEFI boot, 4+ GB of RAM, an x86_64 CPU and either Intel or AMD graphics. Hardware driver support is for the most part that of current FreeBSD 14.x, which is generally pretty decent on x86 platforms, but your mileage may vary. For testing on physical systems and VMs, have a look at the supported device list, and developers are welcome to check out the GitHub page for the source.

Considering our own recent coverage of using FreeBSD as a desktop system, ravynOS provides an interesting counterpoint to simply copying over the desktop experience of Linux, and instead cozying up to its cousin macOS. If this also means being able to run all macOS games and applications, it could really propel FreeBSD into the desktop space from an unexpected corner.

Microsoft Open Sources Zork I, II And III

The history of the game Zork is a long and winding one, starting with MUDs and kin on university mainframes – where students entertained themselves in between their studies – and ending with the game being ported to home computers. These being pathetically undersized compared to even a PDP-10 meant that Zork had to be split up, producing Zork I through III. Originally distributed by Infocom, the games eventually came into Microsoft’s hands through its habit of gobbling up game distributors and studios alike, and they are now open source, as explained on the Microsoft Open Source blog.

Although the source had found its way onto the Internet previously, it’s now officially distributed under the MIT license, along with accompanying developer documentation. The source code for the three games can be found on GitHub, in separate repositories for Zork I, Zork II and Zork III.

We previously covered Zork’s journey from large systems to home computers, which was helped immensely by the Z-machine platform that the game’s code was ported to. Sadly, the original game’s MDL code was a bit much for 8-bit home computers. Regardless of whether you prefer the original PDP-10 version or the Z-machine version on a home computer system, both versions are now open sourced, which is a marvelous thing indeed.