Wine Is For Windows And Darling Is For MacOS

Wine has become a highly optimized and useful piece of software for those who live in Linux but occasionally need to walk on the Windows side. In case you’d wondered, there’s a similar tool for when you need to run a MacOS program in your Linux environment. Enter Darling, the translation layer you’ve needed all along.

Just as Wine is not an emulator, neither is Darling. As a translation layer, it duplicates the functions of the MacOS operating system that programs need in order to run, but within Linux. It’s fast, because it’s effectively running the MacOS software directly. Initially, Darling was only really capable of running MacOS apps at the console level, but there is now rudimentary support for running graphical applications built on the Cocoa framework.

Hilariously, if you’re into weird recursive situations, you can go deeper and run Darling within Windows Subsystem for Linux, itself running within Windows. Why? Well, you’re probably bored, or just doing it for the sake of it. Regardless, we don’t judge. If you’ve got your own nifty translation or virtual machine hacks in the works, don’t hesitate to let us know!

Zork Zcode Interpreters Appear Out Of Nowhere

Some of our readers may know about Zork (and Zork I, II, and III), the 1977 text adventure originally written for the PDP-10. The game has been public domain for a while now, but recently, the interpreters for several classic 1980s machines have also appeared on the internet.

What’s the difference? Zork is not a PDP-10 executable; it’s actually a virtual machine executable, which is in turn run by an interpreter written for the PDP-10. It’s the same idea as Java: Java source compiles to Java bytecode, which runs on the Java virtual machine rather than directly on any CPU. In the same way, Zork was compiled to “Z-machine” program files, called ZIP (a name which was, of course, later used in 1990 by the much better known PKZIP). To date, the compiler, “Zilch”, has not been released, but the language specification and the ZIP specification have, which has led some people to write custom ZIP compilers, albeit with a different input language.
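If the idea of a program file that runs on an interpreter rather than directly on hardware seems abstract, here’s a minimal sketch of the concept in Python. The opcodes and the little program are invented purely for illustration and have nothing to do with the real Z-machine instruction set, which is considerably richer.

```python
# Minimal sketch of a bytecode virtual machine, in the spirit of the
# Z-machine or the JVM. The opcodes here are invented for illustration;
# the real Z-machine handles objects, text encoding, saved games, and more.

PUSH, ADD, PRINT, HALT = 0x01, 0x02, 0x03, 0xFF

def run(bytecode):
    """Interpret a simple stack-machine program."""
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        pc += 1
        if op == PUSH:            # push the next byte as a literal
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:           # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == PRINT:         # pop and print the top of the stack
            print(stack.pop())
        elif op == HALT:
            break
        else:
            raise ValueError(f"unknown opcode {op:#x}")

# The same "program file" runs on any machine that has an interpreter,
# which is exactly why Zork could be ported so widely.
run(bytes([PUSH, 2, PUSH, 40, ADD, PRINT, HALT]))   # prints 42
```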

For more on the VM, check out Maya’s Zork retrospective. (And dig the featured art. Subtle!)

Of course, that’s not the only type of interpreter. Some programming languages are interpreted directly from source, like this BASIC hidden in the ESP32’s ROM.

Bringing Back The CRT TV Experience In Software

Cathode-Retro is a collection of shaders and sample C++ code for reliving the glorious days when graphics were composite video signals displayed on a CRT screen. How? By faking it in software and providing more configuration options than any authentic setup ever had.

Love it or don’t, there’s nothing quite like it.

Not satisfied with creating CRT-style color images with optional scanlines and TV picture controls like tint and saturation, Cathode-Retro can emulate more nuanced elements as well.

The tool can imitate things like the slight distortion of a period-correct curved screen, the subtle differences between the ways CRTs actually produced an image (shadow mask versus aperture grille, for example), and even the way light refracts imperfectly through the glass face of the tube. There are even options for adding noise and ghosting, which may spark some artistic ideas.
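As a back-of-the-envelope illustration of the kind of image processing involved (and nothing like Cathode-Retro’s actual GPU shaders), here’s a rough NumPy sketch of two of the simplest effects: darkened scanlines, and a vignette standing in for the light falloff of a curved screen.

```python
import numpy as np

def crt_ish(frame, scanline_strength=0.35, vignette_strength=0.4):
    """Very rough approximation of two CRT artifacts on an RGB frame
    (H x W x 3 float array in 0..1): darkened scanlines and a simple
    radial vignette standing in for a curved screen's light falloff."""
    h, w, _ = frame.shape
    out = frame.copy()

    # Darken every other row to fake scanlines.
    out[1::2, :, :] *= (1.0 - scanline_strength)

    # Radial falloff toward the corners as a stand-in for curvature.
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    radius = np.sqrt(xs**2 + ys**2) / np.sqrt(2.0)
    out *= (1.0 - vignette_strength * radius**2)[:, :, None]

    return np.clip(out, 0.0, 1.0)

# Apply to a synthetic test frame (a horizontal gray gradient).
w, h = 320, 240
gradient = np.tile(np.linspace(0.0, 1.0, w), (h, 1))   # H x W
test = np.stack([gradient] * 3, axis=-1)                # H x W x 3
filtered = crt_ish(test)
print(filtered.shape, filtered.min(), filtered.max())
```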

If all you need is software to recreate an old-school CRT terminal, we have you covered. But if your needs are a bit more low-level, Cathode-Retro might be what you’re missing.

There’s No AI In A Markov Chain, But They’re Fun To Play With

Amid all the hype about AI, it sometimes seems as though the world has lost sight of the fact that software such as ChatGPT contains no intelligence. Instead, it’s an extremely sophisticated system for extracting plausible machine-generated content from the corpus on which it was trained. There’s a long history behind machine-generated text, and perhaps the simplest example comes in the form of a Markov chain. [Ben Hoyt] takes us through how these work, and provides some Python code so that you can roll your own.

If you’re uncertain what a Markov chain is, consider the predictive text on your phone. It works by offering the statistically most likely next word in your sentence, and should you accept all of its choices, it will deliver sentences which are superficially readable but otherwise complete nonsense. [Ben Hoyt] demonstrates, using very simple, short source texts, how a probability map of collocations is generated for two-word phrases, and how a likely next word can be extracted from it. It’s not AI, but it can be a lot of fun to play with, and it opens the door to the entire field of computational linguistics. We haven’t set one loose on Hackaday’s archive yet, but we suspect it would talk a lot about the Arduino.
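To give a sense of just how little code a basic version takes, here’s a minimal sketch. It’s not [Ben Hoyt]’s implementation (which is well worth reading in full), but it follows the same idea: map two-word prefixes to the words observed to follow them, then walk the map at random.

```python
import random
from collections import defaultdict

def build_chain(text, n=2):
    """Map each n-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - n):
        prefix = tuple(words[i:i + n])
        chain[prefix].append(words[i + n])
    return chain

def generate(chain, length=30):
    """Random-walk the chain, starting from a random prefix.
    Repeated entries in each list weight the choice by observed frequency."""
    prefix = random.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        candidates = chain.get(tuple(out[-len(prefix):]))
        if not candidates:          # dead end: no observed continuation
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the hat on the mat"
print(generate(build_chain(corpus)))
```

Feed it a bigger corpus and longer prefixes, and the output starts to look eerily like its source while still meaning nothing at all.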

We’re talking about Markov chains here with respect to language, but it’s also worth remembering that they work for music too.

Header: Bad AI image with Dall-E prompt, “Ten thousand monkeys with typewriters”.

Obsolete E-Reader Gets New Life

For those who read often, e-readers are great niche devices: their e-ink displays help prevent the eye fatigue that comes with a backlit display like a tablet or smartphone, all while taking up minimal space compared to a stack of real books. But for all their perks, there are still plenty of reasons to maintain a library of bound paper volumes. For those who have turned back to books, or whose e-readers simply aren’t getting the attention they once did, there are plenty of things to do with the hardware, like this e-book picture frame.

The device started life as a PocketBook Basic Touch, or PocketBook 624, a fairly basic e-reader from 2014, but at its core is a decent ARM chip that can do many more things than display text. It also shipped running a version of Linux, which made it fairly easy to get a shell and start probing around. Unlike modern smartphones, this e-reader is fairly open and able to run some custom software, and as a result there are already some C++ programs available for these devices. Armed with some example programs, [Peter] was able to write a piece of custom software that displays images from an on-board directory, and mounted the new picture display using an old book.

There were a number of options [Peter] explored for this specific device that didn’t pan out, like downloading images from the internet instead of displaying images stored on the device, but in the end he went with a simpler setup to avoid feature creep and get his project up and running for “#inktober”, a fediverse-oriented drawing challenge that happened last month. While not strictly a daily piece of hand-drawn artwork, the project still follows the spirit of the event. And for those with more locked-down e-readers, there’s some hope of unlocking the full functionality of older models with this FOSS operating system.

NVIDIA Trains Custom AI To Assist Chip Designers

AI is big news lately, but as with any new technology, it’s important to pierce through the hype. Recent news about NVIDIA creating a custom large language model (LLM) called ChipNeMo to assist in chip design is tailor-made for breathless hyperbole, so it’s refreshing to read exactly how such a thing is genuinely useful.

ChipNeMo is trained on the highly specific domain of semiconductor design, using internal code repositories, documentation, and more. The result is a vast 43-billion-parameter LLM running on a single A100 GPU that plays no direct role in actually designing chips, but focuses instead on making designers’ jobs easier.

For example, it turns out that senior designers spend a lot of time answering questions from junior designers. If a junior designer can ask ChipNeMo something like “what does signal x from memory unit y do?” and save a senior designer’s time in the process, then NVIDIA considers the tool already worth it. Another big time sink for designers is dealing with bugs: bugs are extensively documented in a variety of ways, and designers spend a lot of time reading that documentation just to grasp the basics of a particular bug. Acting as a smart interface to such narrowly-focused repositories is something a tool like ChipNeMo excels at, because it can provide not just summaries but also concrete references and sources. Saving developer time in this way is a clear and easy win.
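NVIDIA hasn’t published the tool itself, so purely as a hypothetical illustration of the retrieval-plus-prompting pattern described above, here’s a toy sketch in Python: rank a few invented “internal documents” against a question by simple word overlap, then stuff the best matches into a prompt. The documents, the question, and the final model call are all stand-ins of our own making.

```python
# Toy illustration of retrieval-augmented prompting. Everything here is
# invented for illustration; NVIDIA's real pipeline and retrieval model
# are far more sophisticated than word-overlap scoring.

def score(question, doc):
    """Crude relevance score: count shared words."""
    q, d = set(question.lower().split()), set(doc.lower().split())
    return len(q & d)

def build_prompt(question, docs, top_k=2):
    """Put the top-scoring documents in front of the question."""
    ranked = sorted(docs, key=lambda d: score(question, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "signal mem_rd_valid indicates the memory unit has data ready",
    "bug 1234: arbiter deadlocks when two masters request the same bank",
    "the clock divider defaults to a 4:1 ratio after reset",
]
print(build_prompt("what does mem_rd_valid from the memory unit do?", docs))
# In a real deployment the assembled prompt would go to the in-house model
# rather than being printed.
```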

It’s an internal tool and partly a research project, but it’s easy to see the benefits ChipNeMo can bring. Using LLMs trained on internal information for internal use is something organizations have experimented with (Mozilla, for example, did so while explaining how to do it yourself), but it’s interesting to see a clear roadmap to assisting developers in concrete ways.

Synthesizing 360-degree Views From Single Source Images

ZeroNVS is one of those research projects that is rather more impressive than it may look at first glance. On one hand, the 3D reconstructions — we urge you to click that first link to see them — look a bit grainy and imperfect. But on the other hand, it was reconstructed using a single still image as an input.

Most results look great, but some — like this bike visible through a park bench — come out a bit strange. A valiant effort for a single-image input, all things considered.

How is this done? With NeRFs (neural radiance fields), which leverage machine learning, but with yet another new twist. Existing methods mainly focus on single objects against masked backgrounds, while this new approach works on a variety of complex, in-the-wild images without the need to train new models.
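At the heart of any NeRF-style method is compositing color and density samples along each camera ray into a single pixel. Here’s a minimal NumPy sketch of that standard volume-rendering step, with random values standing in for what a trained radiance field would actually predict; it’s only meant to show the math, not ZeroNVS itself.

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Standard NeRF-style volume rendering along one ray.
    colors    -- (N, 3) RGB predicted at each sample point
    densities -- (N,)   volume density at each sample point
    deltas    -- (N,)   distance between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)                        # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))    # transmittance so far
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                    # final pixel color

# Random stand-ins for what a trained radiance field would return.
rng = np.random.default_rng(0)
n = 64
pixel = composite_ray(rng.random((n, 3)), rng.random(n) * 5.0, np.full(n, 0.02))
print(pixel)   # an RGB value in roughly the 0..1 range
```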

There are a ton of sample outputs on the project summary page that are worth a browse if you find this sort of thing at all interesting. Some of the 360-degree reconstructions look rough, some are impressive, and some are a bit amusing. For example, indoor shots tend to reconstruct rooms that look good, but lack doorways.

There is a research paper for those seeking additional details and a GitHub repository for the code, but the implementation requires some significant hardware.