Procedurally Generated Retrocomputer Emulators

[Marquis de Geek] has a profound love of old systems. Tired of writing a new emulator from scratch for each project, he built EMF, which generates the emulator for him. An XML document describes the machine's memory layout, CPU, and screen handler. The output is currently a single-page JavaScript emulator application, complete with an assembler and a disassembler, but that backend can easily be swapped out for another language such as Rust or C++.

Since EMF is a framework that provides a common way to describe the emulated machine, you get a common emulator user interface for free. There’s a lot of flexibility offered here as well. Opcodes can be implemented as a large switch statement or individual functions, depending on the target language’s performance. Self-modifying code can be detected and handled separately. Custom features or hardware can be injected easily by writing a module in the target language.
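To get a feel for what "opcodes as individual functions" means in practice, here's a minimal sketch of that dispatch style in Python. It's purely illustrative: EMF's generated code is JavaScript and its actual structure hasn't been published, so every name below is made up.

```python
# Hypothetical sketch (not EMF's actual output): opcodes implemented as
# individual handler functions, looked up in a table instead of a giant switch.

class CPU:
    def __init__(self, memory):
        self.memory = memory          # flat byte array described by the XML
        self.a = 0                    # accumulator
        self.pc = 0                   # program counter

def op_nop(cpu):                      # 0x00: do nothing
    pass

def op_lda_imm(cpu):                  # 0x3E: load immediate byte into A
    cpu.a = cpu.memory[cpu.pc]
    cpu.pc += 1

OPCODES = {0x00: op_nop, 0x3E: op_lda_imm}

def step(cpu):
    opcode = cpu.memory[cpu.pc]
    cpu.pc += 1
    OPCODES[opcode](cpu)              # table lookup, one function per opcode
```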

While the source code for the EMF hasn’t been released yet, several of the machines that [Marquis de Geek] has built with EMF are open-source on GitHub. So far the list includes Dragon32, Sinclair ZX80, Sinclair ZX81, Sinclair ZX Spectrum, Elliott 903, Chip8, Cosmac VIP, and the MegaProcessor. Each has a live emulator that runs in your browser.

While [Marquis de Geek] hopes to release a binary version of the EMF soon, we’re very much looking forward to the EMF source coming out once the code has been cleaned up. We love the trend towards creating easier and more accessible emulators, such as this Twitter bot that runs Atari programs.

Continue reading “Procedurally Generated Retrocomputer Emulators”

Local And Remote Debugging With GDB

As a debugger, GDB is a veritable Swiss Army knife. And just like exploring all of the non-obvious uses of one of those knives, your initial response to the scope of GDB’s feature set is likely to be one of bewilderment, subsequent confusion, and occasional laughter. That’s an understandable reaction to the Swiss Army knife, since one is rarely in the midst of an army campaign or trapped in the wilderness. Similarly, it takes a tricky debugging session to really learn to appreciate GDB’s feature set.

If you have already used GDB to debug some code, it was likely wrapped in the comfort blanket of an IDE. This is of course one way to use GDB, but limits the available features to what the IDE exposes. Fortunately, the command line interface (CLI) of GDB has no such limitations. Learning the CLI GDB commands also has the advantage that one can perform that critical remote debug session even in the field via an SSH session over the 9600 baud satellite modem inside your Swiss Army knife, Cyber Edition.

Have I carried this analogy too far? Probably. But learning the full potential of GDB is well worth your time, so today let’s dive in and sharpen our digital toolsets.
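If the remote-debugging scenario sounds abstract, the usual workflow is to run gdbserver on the target (over SSH or a serial link) and attach to it from your development machine. A rough sketch, where the binary name, host, and port are placeholders:

```
# On the target, assuming gdbserver is installed:
gdbserver :2345 ./my_app

# On the development machine, with the matching unstripped binary:
gdb ./my_app
(gdb) target remote target-host:2345
(gdb) break main
(gdb) continue
(gdb) backtrace
```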

Continue reading “Local And Remote Debugging With GDB”

DIY Regular Expressions

In the Star Wars universe, not everyone wields a lightsaber, and those who do had to build their own. There’s something to be said for that strategy. Building a car or a radio is a great way to learn how those things work, and that’s what [Low Level JavaScript] points out about regular expressions. Sure, a lot of people think they are scary. So why not write your own regular expression parser and engine? Get that under your belt and you’ll probably never fear another regular expression.

Of course, most of us probably won’t do it ourselves, but you can still watch the process in the video below. The code is surprisingly short, but don’t expect all the bells and whistles you might find in Python or even Perl.
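To give a flavor of how small a bare-bones engine can be, here is a toy recursive matcher in Python that handles only literal characters, '.', and a trailing '*'. This is our own sketch of the classic Kernighan/Pike matcher, not the engine built in the video (which is in JavaScript and considerably richer).

```python
# Toy regex engine: literals, '.' (any character), and '*' (zero or more of
# the preceding element). Matching is anchored at the start of the text.

def match_here(pattern, text):
    if not pattern:
        return True
    if len(pattern) >= 2 and pattern[1] == "*":
        return match_star(pattern[0], pattern[2:], text)
    if text and pattern[0] in (".", text[0]):
        return match_here(pattern[1:], text[1:])
    return False

def match_star(char, pattern, text):
    while True:
        if match_here(pattern, text):
            return True
        if not text or (char != "." and text[0] != char):
            return False
        text = text[1:]

print(match_here("ab*c", "abbbc"))   # True
print(match_here("a.c", "axd"))      # False
```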

Continue reading “DIY Regular Expressions”

PyGame Celebrates 20 Years By Releasing PyGame 2.0

Python is an absolutely fantastic language for tossing bits of data around and gluing different software components together. But eventually you may find yourself looking to make a program with an output a bit more advanced than the print() statement. Once you’ve crossed into the land of graphical Python programming, you’ll quickly find that the PyGame library is often recommended as a great way to start pushing pixels even if you’re not strictly making a game.

Today, the project is celebrating an incredible milestone: 20 years of helping Python developers turn their ideas into reality. Started by [Pete Shinners] in 2000 as a way to interface with Simple DirectMedia Layer (SDL), the project was quickly picked up by the community and morphed into a portable 2D/3D graphics library that lets developers deploy their code on everything from Android phones to desktop computers.

Things haven’t always gone smoothly for the open source library, and for a while development had stalled out. But the current team has been making great progress, and decided today’s anniversary was the perfect time to officially roll out PyGame 2.0. With more than 3,300 changes committed since the team started working on their 2.0 branch in July of 2018, it’s a bit tough to summarize what’s new. Suffice to say, the library is more capable than ever and is ready to tackle everything from simple 2D art up to 4K GPU-accelerated applications.

Rip and tear in PyGame 2.0

If you haven’t given PyGame a try in a while, don’t worry. The team has put special effort into making the library as backwards compatible as possible, so if you’ve got an old project kicking around that you haven’t touched in a decade, it should still run against the latest and greatest version. If you’ve never used it before, the team says they’ll soon be releasing new tutorials that show you how to get the most out of this new release.
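In the meantime, it takes surprisingly little code to get pixels on screen. Here’s a minimal PyGame program that should run unchanged on both the 1.x and 2.0 series:

```python
import pygame

# Minimal PyGame program: open a window, draw a circle each frame, and exit
# cleanly when the window is closed.
pygame.init()
screen = pygame.display.set_mode((640, 480))
pygame.display.set_caption("Hello, PyGame 2.0")
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((30, 30, 30))                        # dark grey background
    pygame.draw.circle(screen, (200, 50, 50), (320, 240), 60)
    pygame.display.flip()                            # present the frame
    clock.tick(60)                                   # cap at 60 FPS

pygame.quit()
```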

Whether you’re putting together your own implementation of Conway’s “Game of Life” or creating the graphical front-end for your own Linux distribution, PyGame is a powerful tool to have in your collection. Our sincere congratulations to all PyGame developers, past and present, for making it to this auspicious occasion. We can’t wait to see what the next decade will bring.

[Thanks to deshipu for the tip.]

Quantum Inspired Algorithm Going Back To The Source

Recently, [Jabrils] set out to accomplish a difficult task: porting a quantum-inspired algorithm to run on a (simulated) quantum computer. Algorithms are often inspired by all sorts of natural phenomena. For example, one approach to the traveling salesman problem models ants and their pheromone trails. Another famous example is neural nets, which are inspired by the neurons in your brain. However, attempting to run a machine learning algorithm on your neurons, even with the assistance of pen and paper, would be a nearly impossible exercise.

The quantum-inspired algorithm in question is known as wave function collapse. In a nutshell, you have a cube of voxels, a graph of nodes, or simply a grid of tiles, along with a list of detailed rules that determine the allowed states of each node or tile. At the start of the algorithm, every node or tile is in a state of superposition, meaning it could still take on any possible state. Looking at the list of rules, the algorithm then begins to collapse those states. Unlike on a quantum computer, superposition isn’t an intrinsic feature of a classical machine, so the collapse has to be simulated iteratively. To reduce the chance of conflicts and contradictions later down the line, the nodes with the least entropy (the smallest number of remaining possible states) are solved first. A random valid state is assigned, and the consequences propagate through the rest of the system. This process continues until the whole wave function has collapsed to a stable state or a contradiction is reached.
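As a concrete illustration, here is a heavily simplified Python sketch of that observe-and-propagate loop over a grid of tiles. It glosses over plenty of details ([Jabrils]’s version and the original WFC implementation are far more involved), and all of the tile names and rules below are ours.

```python
import random

# Heavily simplified wave function collapse over a 2D grid of tiles.
# possibilities[cell] is the set of tile types the cell could still become;
# RULES[tile] is the set of tiles allowed to sit next to that tile.
TILES = {"land", "coast", "sea"}
RULES = {"land": {"land", "coast"},
         "coast": {"land", "coast", "sea"},
         "sea": {"coast", "sea"}}

W, H = 8, 8
possibilities = {(x, y): set(TILES) for x in range(W) for y in range(H)}

def neighbors(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < W and 0 <= y + dy < H:
            yield (x + dx, y + dy)

def propagate(start):
    # Push the consequences of a change outward until nothing else shrinks.
    stack = [start]
    while stack:
        cell = stack.pop()
        allowed = set().union(*(RULES[t] for t in possibilities[cell]))
        for n in neighbors(*cell):
            reduced = possibilities[n] & allowed
            if not reduced:
                raise RuntimeError("contradiction reached")
            if reduced != possibilities[n]:
                possibilities[n] = reduced
                stack.append(n)

# Repeatedly "observe" the lowest-entropy cell (fewest remaining options),
# fix it to one randomly chosen state, and propagate the consequences.
while any(len(p) > 1 for p in possibilities.values()):
    cell = min((c for c, p in possibilities.items() if len(p) > 1),
               key=lambda c: len(possibilities[c]))
    possibilities[cell] = {random.choice(sorted(possibilities[cell]))}
    propagate(cell)

print({c: next(iter(p)) for c, p in possibilities.items()})
```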

What’s interesting is that the ruleset doesn’t need to be hand-coded; it can be inferred from an example. A classic use case for this algorithm is 2D pixel-art level design. Given a small sample level, the algorithm churns away and produces similar but wholly unique output, making it easy to generate thousands of unique and beautiful levels from a single source image. However, it comes at a price: even a small level can take hours to fully collapse. In theory, a quantum computer should be able to do this much faster, since, after all, it was the inspiration for the algorithm in the first place.
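Inferring the ruleset can be as simple as recording which tiles ever appear next to each other in the sample. A tiny sketch of that idea (again ours, not [Jabrils]’s code):

```python
from collections import defaultdict

# Infer adjacency rules from a small example level: any pair of tiles seen
# side by side in the sample is allowed to be adjacent in the output.
sample = ["SSCLL",
          "SCCLL",
          "SSCCL"]

rules = defaultdict(set)
for y, row in enumerate(sample):
    for x, tile in enumerate(row):
        if x + 1 < len(row):                 # horizontal neighbor
            rules[tile].add(row[x + 1])
            rules[row[x + 1]].add(tile)
        if y + 1 < len(sample):              # vertical neighbor
            rules[tile].add(sample[y + 1][x])
            rules[sample[y + 1][x]].add(tile)

print(dict(rules))   # {'S': {'S', 'C'}, 'C': {'S', 'C', 'L'}, 'L': {'C', 'L'}}
```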

[Jabrils] spent weeks trying to get things running but ultimately didn’t succeed. However, his efforts give us a peek into the world of quantum computing and this amazing algorithm. We look forward to hearing more about the project from [Jabrils], who is continuing to work on it in his spare time. Maybe give it a shot yourself by learning the basics of quantum computing.

Continue reading “Quantum Inspired Algorithm Going Back To The Source”

Jetson Emulator Gives Students A Free AI Lesson

With the Jetson Nano, NVIDIA has done a fantastic job of bringing GPU-accelerated machine learning to the masses. For less than the cost of a used graphics card, you get a turn-key Linux computer that’s ready and able to handle whatever AI code you throw at it. But if you’re trying to set up a lab for 30 students, the cost of even relatively affordable development boards can really add up.

Spoiler: These things don’t exist.

Which is why [Tea Vui Huang] has developed jetson-emulator. This Python library provides a work-alike environment to NVIDIA’s own “Hello AI World” tutorials designed for the Jetson family of devices, with one big difference: you don’t need the actual hardware. In fact, it doesn’t matter what kind of computer you’ve got; with this library, anything that can run Python 3.7.9 or better can take you through NVIDIA’s getting started tutorial.

So what’s the trick? Well, if you haven’t guessed already, it’s all fake. Obviously it can’t actually run GPU-accelerated code without a GPU, so the library [Tea] has developed simply pretends. It provides virtual images and even “live” camera feeds to which randomly generated objects have been assigned.

The original NVIDIA functions have been rewritten to work with these feeds, so when you call something like net.Classify(img) against one of them you’ll get a report of what faux objects were detected. The output will look just like it would if you were running on a real Jetson, down to providing fictitious dimensions and positions for the bounding boxes.
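For reference, the classification flow in NVIDIA’s Hello AI World tutorial looks roughly like the snippet below, and jetson-emulator aims to let this style of code run without the hardware. Treat it as a sketch rather than the emulator’s documented API: the exact module names and image sources the emulator expects may differ, so check its README.

```python
# Rough sketch of the Hello AI World-style classification flow that
# jetson-emulator mimics. The image filename is illustrative; on the
# emulator, images and camera feeds are virtual.
import jetson.inference
import jetson.utils

net = jetson.inference.imageNet("googlenet")      # load a pretrained classifier
img = jetson.utils.loadImage("example.jpg")       # or a "live" virtual camera frame

class_idx, confidence = net.Classify(img)         # the call mentioned above
print("recognized as '{:s}' ({:.1f}%)".format(
    net.GetClassDesc(class_idx), confidence * 100))
```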

If you’re a hacker looking to dive into machine learning and computer vision, you’d be better off getting a $59 Jetson Nano and a webcam. But if you’re putting together a workshop that shows a dozen people the basics of NVIDIA’s AI workflow, jetson-emulator will allow everyone in attendance to run code and get results back regardless of what they’ve got under the hood.

Should You Build For Windows, Mac, IOS, Android, Or Linux? Yes!

The holy grail of computer languages is to write code once and have it deploy effortlessly everywhere. Java likes to take credit for the idea, but UCSD p-code came well before that, and you could argue that mainframes had I/O abstraction, like Fortran unit numbers, even earlier. More modern efforts include Qt, GTK, and other things. Naturally, all of these fall short in some way. Now Google enters the fray with Flutter.

Flutter isn’t new, but in the past, it only handled Android and iOS. Now it can target desktop platforms and can even produce JavaScript. We haven’t played with the system enough to say how successful it is, but you can try it in your browser if you want some first-hand experience.

Continue reading “Should You Build For Windows, Mac, IOS, Android, Or Linux? Yes!”