Forsp: A Forth & Lisp Hybrid Lambda Calculus Language

In the world of lambda calculus programming languages there are many ways to express terms, which is why we ended up with such an amazing range of programming languages, even if most trace their roots back to ALGOL. Of the more unique (and practical) languages, Lisp and Forth probably rank near the top, but what if you were to smush the two together? That’s what [xorvoid] did, and the result is the gracefully titled Forsp programming language. Unsurprisingly, it got a very warm and enthusiastic reception over at Hacker News.

While it keeps many Lisp-isms, the Forth side shows primarily in how small and easy to implement the language is, as demonstrated by the C-based reference implementation, along with its Forth-like value/operand stack and approach to function application. Also interesting is that Forsp uses call-by-push-value (CBPV), which is quite different from call-by-value (CBV) and call-by-name (CBN) and may offer some advantages if you can wrap your mind around the concept.
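To get a feel for what call-by-push-value changes, here is a toy Python sketch of the idea (this is not Forsp code, and the helper names are made up): in CBPV the caller pushes a suspended computation (a thunk), and the callee decides if and when to force it, whereas plain call-by-value evaluates the argument up front.

```python
# Toy illustration of call-by-push-value (CBPV), not Forsp syntax.
# A computation is suspended as a thunk and only runs when forced.

def thunk(f):
    """Suspend a computation as a value (CBPV 'thunk')."""
    return lambda: f()

def force(t):
    """Run a suspended computation (CBPV 'force')."""
    return t()

def expensive():
    print("computing...")
    return 42

# Call-by-value: the argument is evaluated before the call, even if unused.
def cbv_ignore(x):
    return 0

cbv_ignore(expensive())            # prints "computing..." anyway

# CBPV-style: the caller pushes a thunk; the callee forces it only if needed.
def cbpv_ignore(t):
    return 0                       # never forced, never computed

def cbpv_use(t):
    return force(t) + 1            # computed exactly when needed

cbpv_ignore(thunk(expensive))      # prints nothing
print(cbpv_use(thunk(expensive)))  # prints "computing..." then 43
```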

Even if practicality is debatable, Forsp is another delightful addition to the list of interesting lambda calculus demonstrations which show that the field is anything but static or boring.

Programming Ada: Records And Containers For Organized Code

Writing code without having some way to easily organize sets of variables or data would be a real bother. Even if, in the end, you could do all of the shuffling of bits and allocating of memory yourself, it’s much easier when the programming language abstracts all of that housekeeping away. In Ada you generally use a few standard types, ranging from records (the equivalent of structs in C) to a series of containers like vectors and maps. As with any language, there are some subtle details about how all of these work, which is where the usage of these types in the Sarge project serves as an illustrative example.

In this project’s Ada code, a record is used for information about command line arguments (flag names, values, etc.) with these argument records stored in a vector. In addition, a map is created that links the names of these arguments, using a string as the key, to the index of the corresponding record in the vector. Finally, a second vector is used to store any text fragments that follow the list of arguments provided on the command line. This then provides a number of ways to access the record information, either sequentially in the arguments vector, or by argument (flag) name via the map.
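For illustration, here is a rough Python analog of that arrangement (the real Sarge code is Ada, and all names below are invented): a list of argument records stands in for the vector, and a dictionary maps flag names to indices into it.

```python
# Rough Python analog of the structure described above; the actual project
# uses Ada records, vectors and maps, and these names are made up.
from dataclasses import dataclass

@dataclass
class Argument:
    flag_name: str
    value: str = ""

arguments: list[Argument] = []       # stands in for the vector of records
names_to_index: dict[str, int] = {}  # flag name -> index into 'arguments'
text_arguments: list[str] = []       # trailing text after the flags

def add_argument(flag: str, value: str = "") -> None:
    """Append a record and remember where it lives."""
    arguments.append(Argument(flag, value))
    names_to_index[flag] = len(arguments) - 1

def get_argument(flag: str):
    """Access a record either sequentially or, as here, via the name map."""
    index = names_to_index.get(flag)
    return arguments[index] if index is not None else None

add_argument("verbose", "true")
print(get_argument("verbose"))       # Argument(flag_name='verbose', value='true')
```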

Continue reading “Programming Ada: Records And Containers For Organized Code”

Amber Compiles To Bash

It certainly isn’t a new idea to compile a language into an intermediate language. The original C++ compiler output C code, for example, and enhanced versions of Fortran were often just conversions of new syntax to old syntax. Of course, it makes sense to output to some language that can run on lots of different platforms. So, using that logic, Amber makes perfect sense. It targets — no kidding — bash. You write with nice modern syntax and compile-time checks, and the output is a bash script. Admittedly, sometimes a hard-to-read bash script, but still.

If you want to see the source code, it is available on GitHub. Since Windows doesn’t really support bash — if you don’t count things like Cygwin and WSL — Amber only officially supports Linux and macOS. In addition to compiling files, Amber can also execute scripts directly, which can be useful for a quick one-liner. If you use Visual Studio Code, you can find a syntax highlighter extension for Amber.

Continue reading “Amber Compiles To Bash”

Static Recompilation Brings New Life To N64 Games

Over the past few years a number of teams have been putting a lot of effort into taking beloved Nintendo 64 games, decompiling them, and lovingly crafting them into highly portable C code. This allows for these games to not only run natively on PCs, but also for improvements to be made to the rendering engine and other components.

Yet this artisan approach to porting these games means a massive time investment, something which static binary translation (static recompilation) may conceivably speed up. Enter the N64: Recompiled project, which provides a binary translation tool to ease the translation of the N64’s binaries into C code.

This is effectively quite similar to what an emulator does in real-time, just with the goal of creating a permanent copy of the translated instructions. After this static binary translation, the C code can be compiled again, but as noted by the project’s documentation, a suitable runtime is needed to get a functional game. An example of this is the Zelda 64: Recompiled project, which uses the N64: Recompiled project at its core, while providing the necessary scaffolding and wrappers to create a working copy of The Legend of Zelda: Majora’s Mask as output.
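For a very rough picture of what static translation means, here is a toy Python sketch for a made-up three-instruction machine (nothing like MIPS or the project’s actual tooling): the program is walked once ahead of time and equivalent C is emitted, instead of decoding each instruction over and over at runtime.

```python
# Toy static translation: turn a (made-up) instruction list into C source
# once, ahead of time, rather than interpreting it at runtime.
program = [
    ("li", "a", 5),            # load immediate
    ("li", "b", 7),
    ("add", "c", "a", "b"),    # c = a + b
    ("print", "c"),
]

def translate_to_c(prog):
    lines = ["#include <stdio.h>",
             "int main(void) {",
             "    int a = 0, b = 0, c = 0;"]
    for ins in prog:
        if ins[0] == "li":
            lines.append(f"    {ins[1]} = {ins[2]};")
        elif ins[0] == "add":
            lines.append(f"    {ins[1]} = {ins[2]} + {ins[3]};")
        elif ins[0] == "print":
            lines.append(f'    printf("%d\\n", {ins[1]});')
    lines += ["    return 0;", "}"]
    return "\n".join(lines)

print(translate_to_c(program))     # feed the result to any C compiler
```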

In the video below, [Modern Vintage Gamer] takes the software for a test drive and comes away very excited about the potential it has to completely change the state of N64 emulation. To be clear, this isn’t a one-button-press solution — it still requires capable developers to roll up their sleeves and get the plumbing in. It’s going to take some time before your favorite game is supported, but the idea of breathing new life into some of the best games from the 1990s and early 2000s certainly has us eager to see where this technology goes.

Continue reading “Static Recompilation Brings New Life To N64 Games”

Try Image Classification Running In Your Browser, Thanks To WebGPU

When something does zero-shot image classification, that means it’s able to make judgments about the contents of an image without the user needing to train the system beforehand on what to look for. Watch it in action with this online demo, which uses WebGPU to implement CLIP (Contrastive Language–Image Pre-training) running in one’s browser, using the input from an attached camera.

By giving the program some natural language visual concept labels (such as ‘person’ or ‘cat’) that fit a hypothetical template for the image content, the system will output — in real time — its judgment of how well each label fits what the camera sees. Again, all of this runs locally.

It’s maybe a little bit unintuitive, but what’s happening in the demo is that the system is deciding which of the user-provided labels (“a photo of a cat” vs “a photo of a bald man”, for example) is most appropriate to what the camera sees. The more a particular label is judged a good fit for the image, the higher the number beside it.
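If you’d rather poke at the same zero-shot trick outside the browser, a minimal Python sketch using the Hugging Face transformers CLIP model looks roughly like this (the demo itself runs CLIP via WebGPU in JavaScript; ‘frame.jpg’ below is just a stand-in for a captured camera frame):

```python
# Zero-shot image classification with CLIP: score user-provided text labels
# against an image, with no task-specific training required.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a bald man"]
image = Image.open("frame.jpg")    # stand-in for a camera frame

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # image-text similarity scores
probs = logits.softmax(dim=1)[0]

for label, p in zip(labels, probs):
    print(f"{label}: {p.item():.3f}")  # higher means a better fit, as in the demo
```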

This kind of process benefits greatly from shoveling the hard parts of the computation onto compatible graphics cards, which is exactly what WebGPU provides by allowing the browser access to a local GPU. WebGPU is relatively recent, but we’ve already seen it used to run LLMs (Large Language Models) directly in the browser.

Wondering what makes GPUs so very useful for AI-type applications? It’s all about their ability to work with enormous amounts of data very quickly.

NetBSD Bans AI-Generated Code From Commits

A recent change to the NetBSD commit guidelines states that code generated by Large Language Models (LLMs) or similar technologies, such as ChatGPT, Microsoft’s Copilot, or Meta’s Code Llama, is presumed to be tainted code. This amends the existing section about tainted code, which originally referred to any code not written directly by the person committing it, a rule that exists for licensing reasons: without it, code licensed under an incompatible (or proprietary) license might be copied into the NetBSD codebase.

In the case of LLM-based code generators like those mentioned above, the problem stems from the fact that they are trained on millions of lines of code from all over the internet, which are naturally released under a wide variety of licenses. Invariably, some of that code will be covered by a license that’s not acceptable for the NetBSD codebase. Although the guideline mentions that such auto-generated code may still be admissible, this requires written permission from core developers, and presumably an in-depth audit of the code’s heritage. This should leave non-trivial commits churned out by ChatGPT and kin out in the cold.

The debate about the validity of works produced by current-gen “artificial intelligence” software is only just beginning, but there’s little question that NetBSD has made the right call here. From a legal and software engineering perspective this policy makes perfect sense, as LLM-generated code simply doesn’t meet the project’s standards. That said, code produced by humans brings with it a whole different set of potential problems.

Winamp Source Code Will Be Opened Up, Company Says

Recently the company currently in charge of the Winamp media player – formerly Radionomy, now Llama Group – announced that it will be making the source code of the player ‘available to developers’. Although the peanut gallery immediately seemed to jump to the conclusion that the source would be made available to all on the announced 24 September 2024 date, reading between the lines of the press release gives a different impression.

First there is the sign-up form for ‘FreeLlama’, where interested developers can register, with a strong suggestion that only vetted developers will be able to look at the code, possibly under non-disclosure agreements. Some skepticism seems appropriate considering Winamp’s rocky history since AOL divested itself of the player in 2013 with version 5.666, with new owner Radionomy doing little development on the software beyond adding NFT and crypto/blockchain features in 2022. The subsequent Winamp online service doubled down on this.

Naturally it would be great to see Winamp become a flourishing OSS project for the two dozen of us who still use Winamp on a daily basis, but the proof will be in the non-NFT pudding, as the saying goes.