How Resident Evil 2 For The N64 Kept Its FMV Cutscenes

Originally released for the Sony PlayStation in 1998, Resident Evil 2 came on two CDs and used 1.2 GB in total. Of this, full-motion video (FMV) cutscenes took up most of the space, as was rather common for PlayStation games. This posed a bit of a challenge when the game was ported to the Nintendo 64 with its paltry 64 MB of cartridge-based storage. Somehow the developers managed to do the impossible and retain the FMVs, as detailed in a recent video by [LorD of Nerds]. Toggle the English subtitles if German isn’t among your installed natural language parsers.

Instead of dropping the FMVs and replacing them with static screens, the developers opted for a technological leap. The N64’s rather beefy hardware made it possible to apply video compression that massively reduced the storage requirements, but this required repurposing the hardware for tasks it was never designed for.

The people behind this feat were developers at Angel Studios, who had 12 months to make it work. Ultimately they achieved a compression ratio of 165:1, with decompression handled in software and the Reality Signal Processor (RSP), normally part of the graphics pipeline, pressed into service for audio tasks and things like upscaling.
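To get a feel for what a 165:1 ratio buys you, here’s a quick back-of-envelope sketch using the figures from the article (1.2 GB across two CDs, a 64 MB cartridge); the 100 MB FMV payload is a hypothetical round number, not a figure from the video.

```python
# Back-of-envelope check of the compression budget.
ps1_total_mb = 1.2 * 1024       # ~1229 MB across two PlayStation CDs
cartridge_mb = 64               # N64 cartridge capacity
ratio = 165                     # reported FMV compression ratio

# A hypothetical 100 MB of raw FMV footage shrinks to well under a megabyte:
fmv_raw_mb = 100
fmv_compressed_mb = fmv_raw_mb / ratio
print(f"{fmv_raw_mb} MB of video -> {fmv_compressed_mb:.2f} MB on cartridge")
```

At that rate, even hundreds of megabytes of video fit comfortably alongside the rest of the game on the 64 MB cartridge.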

Continue reading “How Resident Evil 2 For The N64 Kept Its FMV Cutscenes”


[Yang-Hui He] Presents To The Royal Institution About AI And Mathematics

Over on YouTube you can see [Yang-Hui He] present to The Royal Institution about Mathematics: The rise of the machines.

In this one-hour presentation [Yang-Hui He] explains how AI is driving progress in pure mathematics. He says that right now AI is poised to change the very nature of how mathematics is done. He is part of a community of hundreds of mathematicians pursuing the use of AI for research purposes.

[Yang-Hui He] traces the genesis of the term “artificial intelligence” to a research proposal from J. McCarthy, M.L. Minsky, N. Rochester, and C.E. Shannon dated August 31, 1955. He says that his mantra has become: connectionism leads to emergence, and goes on to explain what he means by that, then follows with universal approximation theorems.
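The universal approximation idea can be demonstrated constructively, not just stated. The following toy (our own illustration, not anything from the talk) builds a one-hidden-layer ReLU network whose weights are chosen so that the network is exactly the piecewise-linear interpolant of a target function, here `sin` on [0, π]:

```python
import math

def relu(x):
    return max(0.0, x)

def fit_relu_net(f, a, b, n):
    """One-hidden-layer ReLU net interpolating f at n+1 grid points.

    Returns (bias, [(knot, weight), ...]) representing
    g(x) = bias + sum(w * relu(x - knot)).
    """
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    slopes = [(f(xs[i + 1]) - f(xs[i])) / h for i in range(n)]
    # Each unit's weight is the change in slope at its knot (telescoping sum).
    weights = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n)]
    return f(a), list(zip(xs[:-1], weights))

def evaluate(net, x):
    bias, units = net
    return bias + sum(w * relu(x - k) for k, w in units)

net = fit_relu_net(math.sin, 0.0, math.pi, 32)
# Worst-case error over a fine sweep of [0, pi]:
err = max(abs(evaluate(net, x / 100) - math.sin(x / 100)) for x in range(315))
```

With 32 hidden units the worst-case error is already around 10⁻³, and it shrinks quadratically as units are added, which is the approximation-theorem story in miniature.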

He goes on to enumerate some of the key moments in AI: Descartes’s bête-machine, 1637; Lovelace’s speculation, 1843; the Turing test, 1950; the Dartmouth conference, 1956; Rosenblatt’s Perceptron, 1957; Hopfield’s network, 1982; Hinton’s Boltzmann machine, 1985; IBM’s Deep Blue, 1997; and DeepMind’s AlphaGo, 2016.

He continues with some navel-gazing about what mathematics is, and what artificial intelligence is. He considers how we do mathematics as bottom-up, top-down, or meta-mathematics. He mentions one of his earliest papers on the subject, Machine-learning the string landscape (PDF), and his books The Calabi–Yau Landscape: From Geometry, to Physics, to Machine Learning and Machine Learning in Pure Mathematics and Theoretical Physics.

He goes on to explain about Mathlib and the Xena Project. He discusses Machine-Assisted Proof by Terence Tao (PDF) and goes on to talk more about the history of mathematics and particularly experimental mathematics. All in all a very interesting talk, if you can find a spare hour!

In conclusion: Has AI solved any major open conjecture? No. Is AI beginning to help to advance mathematical discovery? Yes. Has AI changed the speaker’s day-to-day research routine? Yes and no.

If you’re interested in more fun math articles be sure to check out Digital Paint Mixing Has Been Greatly Improved With 1930s Math and Painted Over But Not Forgotten: Restoring Lost Paintings With Radiation And Mathematics.

Continue reading “[Yang-Hui He] Presents To The Royal Institution About AI And Mathematics”

How Vibe Coding Is Killing Open Source

Does vibe coding risk destroying the Open Source ecosystem? According to a pre-print paper by a number of high-profile researchers, this might indeed be the case, based on observed patterns and some modelling. Their warnings mostly center on the way that user interaction is pulled away from OSS projects, while starting a new OSS project also becomes significantly harder.

“Vibe coding” here is defined as software development that is assisted by an LLM-backed chatbot, where the developer asks the chatbot to effectively write the code for them. Arguably this turns the developer into more of a customer/client of the chatbot, with no requirement for the former to understand what the latter’s code does, only that the generated code does what the chatbot was asked to create.

This also removes the typically more organic selection process for libraries and tooling, replacing it with whatever was most prevalent in the LLM’s training data. Even for popular projects, website visits decrease as downloads and documentation lookups are replaced by LLM chatbot interactions, reducing opportunities to promote commercial plans, sponsorships, and community forums. Much of this is also reflected in the plummeting usage of community forums like Stack Overflow.

Continue reading “How Vibe Coding Is Killing Open Source”

Writing An Optimizing Tensor Compiler From Scratch

Not everyone will write their own optimizing compiler from scratch, but those who do sometimes roll into it during the course of ever-growing project scope creep. People like [Michael Moroz], who wrote up a long and detailed article on the why and how. Specifically, a ‘small library’ involving a few matrix operations for a Unity-based project turned into a static optimizing tensor compiler, called TensorFrost, with a Python front-end and a shader-like syntax, all of which is available on GitHub.

The Python-based front-end implements low-level NumPy-like operations, with development still ongoing. As for why Yet Another Tensor Library had to be developed, the reasons were that most existing libraries are heavily focused on machine learning tasks and scale poorly otherwise, that dynamic control flow is hard to implement, and that custom kernels otherwise have to be written in e.g. CUDA.

Above all [Michael] wanted to use a high-level language instead of pure shader code, and have something that can output graphical data in real-time. Having taken the gamble, and leaning on LLVM for some parts, he now has a functional implementation, albeit with a lot of work still ahead.
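The core trick of a tracing tensor compiler is easy to sketch. The toy below (entirely our own illustration, using none of TensorFrost’s real names or API) records elementwise operations into an expression graph, then “compiles” the whole graph into a single fused loop instead of materializing a temporary array per operation:

```python
# Toy tracing "compiler": record elementwise ops, emit one fused loop.
# Illustrates the kernel-fusion idea only; this is not TensorFrost's API.

class Expr:
    def __init__(self, op, args):
        self.op, self.args = op, args
    def __add__(self, other): return Expr("add", [self, other])
    def __mul__(self, other): return Expr("mul", [self, other])

class Input(Expr):
    def __init__(self, name):
        super().__init__("input", [])
        self.name = name

def compile_fused(expr):
    """Return a function that evaluates the whole graph per element."""
    def eval_at(node, env):
        if node.op == "input":
            return env[node.name]
        lhs = eval_at(node.args[0], env)
        rhs = eval_at(node.args[1], env)
        return lhs + rhs if node.op == "add" else lhs * rhs
    def kernel(**arrays):
        n = len(next(iter(arrays.values())))
        # One fused pass instead of one temporary array per operation:
        return [eval_at(expr, {k: v[i] for k, v in arrays.items()})
                for i in range(n)]
    return kernel

a, b = Input("a"), Input("b")
square_plus = compile_fused(a * a + b)     # builds the graph; no work yet
result = square_plus(a=[1.0, 2.0, 3.0], b=[10.0, 10.0, 10.0])
```

A real compiler like TensorFrost goes much further, emitting optimized shader or LLVM code from the traced graph, but the separation of graph building from execution is the same.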

Block Devices In User Space

Your new project really could use a block device for Linux. File systems are easy to do with FUSE, but that’s sometimes too high-level. But a block driver can be tough to write and debug, especially since bugs in kernel space can be catastrophic. [Jiri Pospisil] suggests ublk, a framework for writing block devices in user space. This works using the io_uring facility in recent kernels.

This opens the block device field up. You can use any language you want (we’ve seen FUSE used with some very strange languages). You can use libraries that would not work in the kernel. Debugging is simple, and crashing is a minor inconvenience.

Another advantage? Your driver won’t depend on the kernel code. There is a kernel driver, of course, named ublk_drv, but that’s not your code. That’s what your code talks to.
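The serving model itself is simple: your user-space daemon receives read/write commands and answers them from whatever backing store you like. The sketch below shows that model with a RAM-backed store; the real ublk path delivers commands through ublk_drv and io_uring, for which a plain function call stands in here, so treat this as a conceptual illustration rather than the actual ublk API.

```python
# Conceptual sketch of a user-space block server. In real ublk, "handle"
# would be invoked for commands arriving via ublk_drv over io_uring.

SECTOR = 512

class RamBlockDevice:
    """RAM-backed block device serviced entirely in user space."""
    def __init__(self, sectors):
        self.data = bytearray(sectors * SECTOR)

    def handle(self, op, lba, payload=None, sectors=1):
        off = lba * SECTOR
        if op == "read":
            return bytes(self.data[off:off + sectors * SECTOR])
        if op == "write":
            self.data[off:off + len(payload)] = payload
            return len(payload)
        raise ValueError(f"unsupported op: {op}")

dev = RamBlockDevice(sectors=2048)          # a 1 MiB device
dev.handle("write", lba=4, payload=b"hello".ljust(SECTOR, b"\x00"))
block = dev.handle("read", lba=4)
```

Swap the `bytearray` for a file, a network target, or a compressed store and the kernel side never knows the difference, which is exactly the appeal.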

Continue reading “Block Devices In User Space”

BASIC On A Calculator Again

We are always amused that we can run emulations or virtual copies of yesterday’s computers on our modern computers. In fact, there is so much power at your command now that you can run, say, a DOS emulator on a Windows virtual machine under Linux, even though the resulting DOS prompt would probably still perform better than an old 4.77 MHz PC. Remember when you could get calculators that ran BASIC? Well, [Calculator Clique] shows off BASIC running on a decidedly modern HP Prime calculator. The trick? It’s running under Python. Check it out in the video below.

Think about it. The HP Prime has an ARM processor inside. In addition to its normal programming system, it has MicroPython as an option. So that’s one interpreter. Then there’s PyBasic, a nice classic BASIC interpreter that runs on Python. We’ve even ported it to one or two of the Hackaday Superconference badges.
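If you’ve never seen how little it takes to get a BASIC running on top of Python, here’s a toy of our own making (much smaller than PyBasic, and not its code): line-numbered statements, a variable store, and GOTO-driven control flow.

```python
# A toy line-numbered BASIC on Python -- not PyBasic, just the idea.
# Supports LET, PRINT, IF ... THEN <line>, GOTO, and END.

def run_basic(program):
    lines = dict(program)                  # line number -> statement
    order = sorted(lines)
    variables, output, pc = {}, [], 0
    while pc < len(order):
        stmt = lines[order[pc]].split(maxsplit=1)
        key, rest = stmt[0], stmt[1] if len(stmt) > 1 else ""
        if key == "LET":                   # LET X = <expression>
            name, expr = rest.split("=", 1)
            variables[name.strip()] = eval(expr, {}, variables)
        elif key == "PRINT":
            output.append(str(eval(rest, {}, variables)))
        elif key == "IF":                  # IF <cond> THEN <line>
            cond, target = rest.split("THEN")
            if eval(cond, {}, variables):
                pc = order.index(int(target))
                continue
        elif key == "GOTO":
            pc = order.index(int(rest))
            continue
        elif key == "END":
            break
        pc += 1
    return output

out = run_basic([
    (10, "LET I = 3"),
    (20, "PRINT I"),
    (30, "LET I = I - 1"),
    (40, "IF I > 0 THEN 20"),
    (50, "END"),
])
```

Run it and `out` holds the countdown `["3", "2", "1"]`. Stack two interpreters like this on a calculator’s ARM chip and you’ve recreated the spirit of those old BASIC pocket machines.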

Continue reading “BASIC On A Calculator Again”

Optimizing Software With Zero-Copy And Other Techniques

An important aspect in software engineering is the ability to distinguish between premature, unnecessary, and necessary optimizations. A strong case can be made that the initial design benefits massively from optimizations that prevent well-known issues later on, while unnecessary optimizations are those that simply do not make any significant difference either way. Meanwhile ‘premature’ optimizations are harder to define, with Knuth’s often quoted-out-of-context statement about these being ‘the root of all evil’ causing significant confusion.

We can find Donald Knuth’s full quote deep in the 1974 article Structured Programming with go to Statements, which at the time was a contentious optimization topic. On page 268, along with the cited quote, we see that it’s a reference to making presumed optimizations without understanding their effect, and without a clear picture of which parts of the program really take up most processing time. Definitely sound advice.

And unlike back in the 1970s, today we have many easy ways to analyze application performance and to quantify bottlenecks. This makes it rather inexcusable to spend more time vilifying the goto statement than optimizing one’s code with simple techniques like zero-copy and binary message formats.
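Both techniques fit in a few lines of Python. This sketch (our own minimal example, with a made-up two-field header) packs a fixed binary header with `struct` and then parses it back without copying the payload, using `memoryview`:

```python
import struct

# A made-up wire format: message id (u32) + payload length (u16),
# little-endian, followed by the payload bytes.
HEADER = struct.Struct("<IH")

def encode(msg_id, payload):
    return HEADER.pack(msg_id, len(payload)) + payload

def decode(buffer):
    """Parse without copying: the returned payload is a view into buffer."""
    view = memoryview(buffer)
    msg_id, length = HEADER.unpack_from(view)
    payload = view[HEADER.size:HEADER.size + length]   # no bytes copied
    return msg_id, payload

packet = encode(42, b"sensor data")
msg_id, payload = decode(packet)
```

Compared to a text format plus a parse-and-copy step, the header costs six fixed bytes to decode and the payload is handed over as a zero-copy slice, which is exactly where the easy wins live.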

Continue reading “Optimizing Software With Zero-Copy And Other Techniques”