Ysgrifennu Côd Yn Gymraeg (Writing Code In Welsh)

Part of traveling the world as an Anglophone involves the uncomfortable realization that everyone else is better at learning your language than people like you are at learning theirs. It's particularly obvious in the world of programming languages, where English-derived keywords and syntax rule the roost.

It's always IF foo THEN bar, and never SI foo ALORS bar. It is now possible to do something akin to OS foo YNA bar though, because [Richard Hainsworth] has created y Ddraig (the Dragon), a programming language that uses Welsh as its syntax. (The Welsh double D, "Dd", is pronounced something like the English soft "th" in "their".)

Under the hood it's not an entirely new language; instead, it's a Welsh localisation of the Raku language. A localisation file is created that, as we understand it, can handle bidirectional transcription between the two languages. The write-up goes into detail about the process.
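To give a feel for what such a localisation table conceptually does, here's a rough Python sketch of a bidirectional keyword mapping. This is purely our illustration: the Welsh-to-English pairs below are hypothetical and not taken from y Ddraig, and the real mechanism lives in Raku, not Python.

```python
# Hypothetical bidirectional keyword table, for illustration only.
# y Ddraig's actual localisation file and keyword choices may differ.
WELSH_TO_ENGLISH = {
    "os": "if",            # Welsh "os" ~ English "if"
    "arall": "else",
    "tra": "while",
    "dychwelyd": "return",
}
# Invert the table so source can be transcribed in either direction.
ENGLISH_TO_WELSH = {en: cy for cy, en in WELSH_TO_ENGLISH.items()}

def transcribe(tokens, table):
    """Swap any keyword found in the table, leaving other tokens alone."""
    return [table.get(tok, tok) for tok in tokens]

print(transcribe(["os", "x", ">", "0", "dychwelyd", "x"], WELSH_TO_ENGLISH))
```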

There will inevitably be people asking what the point is of a programming language for a spoken language with under a million native speakers, so it's worth taking that question head-on. It's important for Welsh education and the Welsh tech sector, because a geeky kid in a Welsh-medium school in Pwllheli deserves to code just as much as an English kid in a school near Oxford, but it goes far beyond Welsh alone. There are many languages and cultures across the world where English is not widely spoken, and every single one of them has those kids like us who pick up a computer and run with it. The more of them who can learn to code and thrive without the extra burden of first learning English, the better. Perhaps in a couple of decades we'll be using code from people who learned this way, without ever knowing it.

As your scribe, I feel obliged to add: Mae'n ddrwg gyda fi ffrendiau Cymraeg, mae Cymraeg i yn wael iawn. Dwi'n dôd o'r Rhydychen, ni Pwllheli. (Sorry, Welsh friends, my Welsh is very bad. I come from Oxford, not Pwllheli.)


Header image: Jeff Buck, CC BY-SA 2.0.

The Many-Sprites Interpretation Of Amiga Mechanics

The invention of sprites triggered a major shift in video game design, enabling games with independent moving objects and richer graphics despite the limitations of early video gaming hardware. As a result, hardware was designed specifically to manipulate sprites, and with each new generation the number of sprites a system could display generally went up. But [Coding Secrets], who published games for the Commodore Amiga, used an interesting method to get the system to show far more sprites at once than the hardware claimed to support.

This hack is demonstrated with [Coding Secrets]'s first published game on the Amiga, Leander. Normally the Amiga can only display up to eight sprites at once, but the computer has a coprocessor (the Copper) that allows sprites to be re-drawn in different areas of the screen. It can wait for a given vertical and horizontal beam position and then execute a few instructions of its own. This doesn't allow unlimited sprites to be displayed, but as long as no more than eight appear on any given line the effect is much the same. [Coding Secrets] used this trick to display the information bar with sprites, as well as many background elements, all simultaneously with the characters and enemies we'd normally recognize as sprites.
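As a very loose sketch of the idea, written in plain Python rather than real Copper instructions (and not Leander's actual code), the coprocessor's job amounts to waiting for a scanline and then rewriting the eight hardware sprite slots before the beam reaches the next batch:

```python
# Conceptual sketch of sprite multiplexing. The real Amiga Copper executes
# WAIT/MOVE instructions against chipset registers; the list format and
# "instruction" names here are invented for illustration.
HARDWARE_SLOTS = 8

def build_copper_list(sprites):
    """Group sprites by the scanline where they start, then emit a
    'wait for line, rewrite slots' entry for each group (max 8 per line)."""
    by_line = {}
    for spr in sprites:
        by_line.setdefault(spr["y"], []).append(spr)

    copper_list = []
    for line in sorted(by_line):
        batch = by_line[line][:HARDWARE_SLOTS]   # only 8 slots exist per line
        copper_list.append(("WAIT_LINE", line))
        for slot, spr in enumerate(batch):
            copper_list.append(("SET_SLOT", slot, spr["image"], spr["x"], line))
    return copper_list
```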

Of course, using built-in hardware to do something the computer was designed to do isn't necessarily a hack, but it does demonstrate how intimate knowledge of the system could result in a much more in-depth and immersive experience even on otherwise limited hardware. Using the coprocessor wasn't free, either; it stole processing time away from other tasks the game needed to perform, so it took some finesse as well. We've seen similar programming feats in other gaming projects, like this one which gets Tetris running in only 1000 lines of code.


Different Algorithms Sort Christmas Lights

Sorting algorithms are a common exercise for new programmers, and for good reason: they introduce many programming fundamentals at once, including loops and conditionals, arrays and lists, comparisons, algorithmic complexity, and the tradeoff between correctness and performance. As a fun Christmas project, [Scripsi] set out to implement twelve different sorting algorithms over twelve days, using Christmas lights as the sorting medium.

The lights in use here are strings of WS2812 addressable LEDs, with the program assigning a random hue to each light in the string. From there, an RP2040-based board steps through the array of lights and runs the day's sorting algorithm of choice. Whichever element the algorithm is currently operating on has its saturation turned all the way up, showing exactly what it's doing at any given moment. When the sorting algorithm finishes, the microcontroller randomizes the lights and starts the process all over again.
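A minimal, hardware-free sketch of the approach might look like the following plain Python (our illustration, not [Scripsi]'s actual firmware): each light holds a random hue, and whichever element the sort is currently touching gets full saturation.

```python
import colorsys
import random

NUM_LIGHTS = 50

def insertion_sort_steps(hues):
    """Insertion sort that yields (array, active_index) after each step,
    so a display loop can highlight the element currently in play."""
    for i in range(1, len(hues)):
        j = i
        while j > 0 and hues[j - 1] > hues[j]:
            hues[j - 1], hues[j] = hues[j], hues[j - 1]
            yield hues, j
            j -= 1
        yield hues, j

def render(hues, active):
    """Map hues to RGB; the active element gets full saturation."""
    return [
        colorsys.hsv_to_rgb(h, 1.0 if i == active else 0.6, 1.0)
        for i, h in enumerate(hues)
    ]

lights = [random.random() for _ in range(NUM_LIGHTS)]
for state, active in insertion_sort_steps(lights):
    frame = render(state, active)   # on real hardware this frame would go to the WS2812s
```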

For each of the twelve days of Christmas, [Scripsi] has chosen one of their favorite sorting algorithms. There are a few oddballs like Bogosort, a guess-and-check algorithm that might never sort the lights correctly before next Christmas (although if you want to try to speed this up you can always try an FPGA), alongside a few old favorites and some more esoteric choices. It's a great way to visualize how sorting algorithms work, learn a bit about programming fundamentals, and get in the holiday spirit as well.
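For the curious, Bogosort really is as hopeless as it sounds; a quick sketch (ours, not [Scripsi]'s) makes the guess-and-check nature obvious, with an expected runtime on the order of n! shuffles:

```python
import random

def bogosort(values):
    """Shuffle until the list happens to be sorted. The expected number of
    shuffles grows factorially, so a full string of lights would likely
    still be blinking away next Christmas."""
    attempts = 0
    while any(a > b for a, b in zip(values, values[1:])):
        random.shuffle(values)
        attempts += 1
    return values, attempts

# Eight elements already takes tens of thousands of shuffles on average.
print(bogosort([random.random() for _ in range(8)])[1])
```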

EmuDevz Is Literally A Software Game

The idea of gamifying all the things might have died down now that the current hype is shoving AI into all the things, but you've probably never seen it quite like EmuDevz, a game by [Rodrigo Alfonso] in which you develop an 8-bit emulator.

There's a lot of learning you'll have to do along the way about programming and how retro systems work, including diving into 6502 assembly code. Why 6502? Well, the emulator you're working on (it's partially written at the start of the game; you need only debug and finish the job) is for a fantasy system called the NEEES, "an antique game console released in 1983". It's the NEEES and not the NES for two reasons. One, Nintendo has lawyers and they really, really know how to use them. Two, by creating a fantasy console that is not-quite-a-Famicom, the goalposts for EmuDevz can be moved a bit closer in.
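To give a flavour of what finishing such an emulator involves, here's a deliberately tiny 6502-flavoured fetch/decode loop in Python. It's a hand-rolled sketch handling only a handful of real opcodes, not EmuDevz's code and nowhere near a full CPU core:

```python
# Minimal 6502-style fetch/decode/execute sketch, for illustration only.
class CPU:
    def __init__(self, program):
        self.mem = bytearray(0x10000)
        self.mem[0x8000:0x8000 + len(program)] = program
        self.pc = 0x8000
        self.a = self.x = 0

    def step(self):
        op = self.mem[self.pc]
        self.pc += 1
        if op == 0xA9:              # LDA #imm: load accumulator with immediate
            self.a = self.mem[self.pc]
            self.pc += 1
        elif op == 0xAA:            # TAX: transfer accumulator to X
            self.x = self.a
        elif op == 0xE8:            # INX: increment X, wrapping at 8 bits
            self.x = (self.x + 1) & 0xFF
        elif op == 0x00:            # BRK: treat as "halt" in this sketch
            return False
        else:
            raise NotImplementedError(f"opcode ${op:02X}")
        return True

cpu = CPU(bytes([0xA9, 0x41, 0xAA, 0xE8, 0x00]))  # LDA #$41; TAX; INX; BRK
while cpu.step():
    pass
print(hex(cpu.x))   # prints 0x42
```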


A New Cartridge For An Old Computer

Although largely recognizable to anyone who had a video game console in the 80s or 90s, cartridges have long since disappeared from the computing world. These squares of plastic with a few ROM chips inside were a major route for getting software for a time, not only for consoles but for PCs as well. Perhaps most famously, the Commodore VIC-20 and Commodore 64 had cartridge slots for both games and other software packages. As part of the Chip Hall of Fame created by IEEE Spectrum, [James] found himself building a Commodore cartridge more than three decades after last working in front of one of these computers.

[James] points out that even by the standards of the early 80s the Commodore cartridges were pretty low on specs. They're limited to 16 kB, which means programming in assembly and doing things like interacting with the video hardware directly. Luckily there's a treasure trove of documentation about the C64 nowadays, as well as a number of modern programming tools for it, in contrast to the 80s when tools and documentation were scarce or nonexistent. Hardware these days is cheap as well; the cartridge PCB and other hardware cost only a few dollars, and the case for it can easily be 3D printed.
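For a sense of what a cartridge image has to contain, the sketch below builds one according to the commonly documented C64 autostart layout: a cold-start vector, a warm-start vector, then the "CBM80" signature the machine checks at $8004 on power-up. This is a hedged Python illustration, not [James]'s actual tooling; the payload and entry address are placeholders.

```python
# Sketch of assembling a C64 cartridge ROM image (illustrative only).
ROM_SIZE = 16 * 1024            # the 16 kB ceiling mentioned above
ENTRY = 0x8009                  # hypothetical: code starts right after the header

header = bytes([
    ENTRY & 0xFF, ENTRY >> 8,       # cold-start vector (little-endian)
    ENTRY & 0xFF, ENTRY >> 8,       # warm-start vector
    0xC3, 0xC2, 0xCD, 0x38, 0x30,   # "CBM80" signature in PETSCII
])

code = bytes([0xEA] * 64)       # placeholder payload (NOPs); real demo code goes here

image = header + code
assert len(image) <= ROM_SIZE, "cartridge contents must fit in 16 kB"
image += bytes([0xFF] * (ROM_SIZE - len(image)))   # pad to full ROM size for the programmer

with open("cartridge.bin", "wb") as f:
    f.write(image)
```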

Burning the software to the $3 ROM chip was straightforward as well with a TL866 programmer, although [James] left a piece of memory management code in the first pass which caused the C64 to lock up. Removing this code and flashing the chip again got the demo up and running though, and it’ll be on display at their travelling “Chips that Changed the World” exhibit. If you find yourself in the opposite situation, though, we’ve also seen projects that cleverly pull the data off of ancient C64 ROM chips for preservation.

Behind The Bally Home Computer System

Although we might all fundamentally recognize that gaming consoles are just specialized computers, we generally treat them, culturally and physically, differently than we do desktops or laptops. But there was a time in the not-too-distant past when the line between home computer and video game console was a lot more blurred than it is today. Even before Microsoft entered the scene, companies like Atari and Commodore were building both types of computer, often with overlapping hardware and capabilities. But they weren't the only games in town. This video takes a look at the Bally Home Computer System, which was a predecessor of many of the more recognized computers and gaming systems of the 80s.

At the time, Bally was much more widely known in the pinball industry, but the company seemed to foresee that the computers used in arcades would eventually make their way into the home in some form. The premise of this console was to start out as a video game system that could expand into a much more full-featured computer with add-ons. In addition to game cartridges, it came with a BASIC interpreter cartridge which could be used for programming. It was also based on the Z80 microprocessor used in other popular PCs of the time, so in theory it could have been a commercial success, but it never found itself at the top of the PC pack.

Although it maintains a bit of a cult following, it's a limited system even by the standards of the day, as the video's creator [Vintage Geek] demonstrates. The controllers are fairly cumbersome, and programming in BASIC is extremely tedious without a full keyboard. But it did make clever use of the technology of the time, even if it was never a commercial success. Its graphics capabilities were ahead of competing systems and would inspire later designs. It's also not the last time a commercially unsuccessful video game system would develop a following lasting far longer than anyone would have predicted.


Hardware Built For Executing Python (Not Pythons)

Lots of microcontrollers can run Python these days, with CircuitPython and MicroPython becoming ever more popular in recent years. However, there's now a new player in town. Enter PyXL, a project to run Python directly in hardware for maximum speed.

What's the deal with PyXL? "It's actual Python executed in silicon," notes the project site. "A custom toolchain compiles a .py file into CPython ByteCode, translates it to a custom assembly, and produces a binary that runs on a pipelined processor built from scratch." Currently there isn't a hard silicon version of PyXL, which is no surprise given what it costs to make a chip from scratch. For now, it exists as logic running on a Zynq-7000 FPGA on an Arty-Z7-20 devboard. There's an ARM CPU helping out with setup and memory tasks, but the Python code itself is executed entirely in dedicated hardware.
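The first stage of that toolchain, compiling Python source to CPython bytecode, is something you can poke at on any desktop with the standard library's dis module. This only shows the kind of input a PyXL-style pipeline starts from; the downstream custom assembly is PyXL's own and isn't reproduced here.

```python
import dis

def blink(pin, state):
    # Toy function: the sort of small, GPIO-ish code PyXL is aimed at.
    if state:
        pin |= 1
    else:
        pin &= ~1
    return pin

# Print the CPython bytecode that a PyXL-style toolchain would translate further.
dis.dis(blink)
```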

The headline feature of PyXL is speed. A comparison video demonstrates this with a measurement of GPIO latency. In this test, PyXL runs at 100 MHz and achieves a round-trip latency of 480 nanoseconds. MicroPython running on a PyBoard at 168 MHz, by comparison, takes a much slower 15,000 nanoseconds. Based on this result, the project site claims PyXL can be 30x faster than MicroPython, or 50x faster once the clock speed difference is taken into account.
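For reference, the sort of round-trip measurement being compared looks something like this MicroPython-flavoured sketch on the PyBoard side. It's our illustration of the test style rather than the project's actual benchmark, the pin names are placeholders, and it assumes the output is physically looped back to the input.

```python
# MicroPython-style sketch: toggle an output, wait for it to appear on a
# looped-back input, and time the round trip. Pin names are placeholders.
import time
from machine import Pin

out_pin = Pin("X1", Pin.OUT)
in_pin = Pin("X2", Pin.IN)

def round_trip_us():
    out_pin.value(0)
    while in_pin.value():          # wait for the line to settle low
        pass
    start = time.ticks_us()
    out_pin.value(1)
    while in_pin.value() == 0:     # spin until the edge shows up on the input
        pass
    return time.ticks_diff(time.ticks_us(), start)

print(min(round_trip_us() for _ in range(100)), "us best case")
```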

Python has never been the most real-time of languages, but efforts like this push it in that direction. The hope is that it may finally be possible to write performance-critical code in Python from the outset. We've taken a look at Python in the embedded world before, too, albeit in very different contexts.
