Putting The C In C64

Older CPUs and some fairly modern microcontrollers are not made to readily support C compilers. Among those are the 1802, some 8-bit PICs, and the 6502 at the heart of the Commodore 64. That’s not to say you can’t make a C compiler for any of them, but the tricks required to handle odd word sizes, awkward stack manipulation, or whatever else makes C a poor fit tend to produce compiled code that is bloated and possibly slower. [Dr. Mortal Wombat] took a different approach. The oscar64 compiler takes C source code and compiles it either to code for a small virtual machine or to native machine code for cases where performance might be important.

Turns out, the penalty for using native code isn’t as large as predicted, at least in some cases. The performance penalty for using the interpreter, however, can be significant in many common cases. The 6502 has a small stack that is hard to address, and indexing into a user-maintained stack is slow. The word size is a problem, too: 16-bit operations have to be broken into multiple 8-bit ones, which produces a lot of code. The compiler aims to be C99-compliant, including floating point, recursion, multidimensional arrays, and pointers to structures.
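To get a feel for why the word size hurts, the sketch below shows in plain C roughly what a compiler has to do for one 16-bit addition on an 8-bit machine: split it into two 8-bit adds with an explicit carry, which is more or less what the generated 6502 code does with its ADC instruction. This is purely an illustration, not oscar64 output.

#include <stdint.h>
#include <stdio.h>

/* Illustration only: a 16-bit add performed one byte at a time,
   the way an 8-bit CPU such as the 6502 has to do it. */
static uint16_t add16(uint16_t a, uint16_t b)
{
   uint8_t lo = (uint8_t)(a & 0xFF) + (uint8_t)(b & 0xFF);     /* low byte */
   uint8_t carry = lo < (uint8_t)(a & 0xFF);                   /* did the low byte wrap? */
   uint8_t hi = (uint8_t)(a >> 8) + (uint8_t)(b >> 8) + carry; /* high byte plus carry */
   return (uint16_t)((hi << 8) | lo);
}

int main(void)
{
   printf("%u\n", (unsigned)add16(0x12FF, 0x0001)); /* prints 4864, i.e. 0x1300 */
   return 0;
}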

There are a few things left to hammer out. The linker doesn’t support external libraries, and the floating point code doesn’t understand NaN. On the other hand, many C++ features are available, like namespaces, reference types, templates, and more. The compiler can target several Commodore machines from the C128 to the PET. It also works with some Nintendo and Atari systems and can create various cartridge formats.

If you are writing code for any kind of 6502, it is probably worth checking out. Compiling C for the 6502 is no small feat, but then, so is targeting PowerPoint. Don’t have a C64? Build one.

Image: [MOS6502], CC-BY-SA 3.0

A Journey Through Font Rendering

In the wide world of programming, there are a few dark corners that many prefer to avoid and instead leverage the well-vetted libraries that are already there. [Phillip Tennen] is not one of those people, and when the urge came to improve font rendering for his hobby OS, axle, he got to work writing a TrueType font renderer.

For almost a decade, the OS used a map table encoding all characters as 8×8 bitmaps. Scaling those bitmaps works after a fashion, but integer scales are still blocky and hard to read, and fractional scales come out jagged. TrueType and font rendering, in general, are often considered dark magic. Font files (.ttf) are structured similarly to Mach-O (the binary format for macOS), with sections containing tagged tables. The font has the concept of glyphs and characters: glyphs are the shapes that show up on the screen, and characters are the UTF/Unicode values that the font translates into glyphs. Three critical tables are glyf (the set of points forming the shape for each glyph), hmtx (how to space the characters), and cmap (how to turn Unicode code points into glyphs).
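To make the table structure a little more concrete, here is a hedged C sketch of the directory that sits at the front of a .ttf file and points at those tagged tables. The field names follow the TrueType spec, but the structs are only illustrative; everything in the file is stored big-endian, so a real parser reads the fields byte by byte rather than overlaying structs.

#include <stdint.h>

/* Sketch of the TrueType/OpenType table directory.
   All values are big-endian in the file. */
struct OffsetTable {
   uint32_t sfnt_version;   /* 0x00010000 for TrueType outlines */
   uint16_t num_tables;     /* number of table records that follow */
   uint16_t search_range;
   uint16_t entry_selector;
   uint16_t range_shift;
};

struct TableRecord {
   char     tag[4];         /* "glyf", "hmtx", "cmap", ... */
   uint32_t checksum;
   uint32_t offset;         /* byte offset of the table from the start of the file */
   uint32_t length;         /* table length in bytes */
};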

Seeing the curtain pulled back from the format itself makes it seem easy. In reality, there are all sorts of gotchas along the way. There are multiple types of glyphs, such as polygons, blanks, or compound glyphs. Sometimes, control points in the glyphs need to be inferred. Curves need to be interpolated. Enclosed parts of the polygon need to be filled in. And this doesn’t even get to the hinting system.
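Before getting to hinting, the curve handling itself is at least well defined: TrueType outlines use quadratic Béziers, where each segment runs between two on-curve points with one off-curve control point, and when two off-curve points appear back to back, the on-curve point between them is implied at their midpoint. A minimal, illustrative sketch of both ideas in C:

#include <stdio.h>

typedef struct { double x, y; } Point;

/* Quadratic Bezier: B(t) = (1-t)^2 * p0 + 2(1-t)t * ctrl + t^2 * p1 */
static Point quad_bezier(Point p0, Point ctrl, Point p1, double t)
{
   double u = 1.0 - t;
   Point out = {
      u * u * p0.x + 2.0 * u * t * ctrl.x + t * t * p1.x,
      u * u * p0.y + 2.0 * u * t * ctrl.y + t * t * p1.y
   };
   return out;
}

/* Two consecutive off-curve points imply an on-curve point at their midpoint */
static Point implied_on_curve(Point off_a, Point off_b)
{
   Point mid = { (off_a.x + off_b.x) / 2.0, (off_a.y + off_b.y) / 2.0 };
   return mid;
}

int main(void)
{
   Point p0 = {0, 0}, ctrl = {50, 100}, p1 = {100, 0};
   for (double t = 0.0; t <= 1.0; t += 0.25) {
      Point p = quad_bezier(p0, ctrl, p1, t);
      printf("t=%.2f -> (%.1f, %.1f)\n", t, p.x, p.y);
   }

   Point a_off = {25, 60}, b_off = {75, 60};
   Point mid = implied_on_curve(a_off, b_off);
   printf("implied on-curve point: (%.1f, %.1f)\n", mid.x, mid.y);
   return 0;
}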

Inside many fonts are tiny programs that run on the TrueType VM. When a font is rendered at low enough resolutions, the default control points lose their curves and become blobs: E’s become C’s, and D’s become O’s. So the hinting system allows the font to nudge the control points to better fit the pixel grid. Of course, [Phillip] goes into even more quirks and details in a wonderful write-up about what he learned. Ultimately, axle has a much better-looking renderer, we get a great afternoon read, and fonts seem a little less like forbidden magic.

Maybe someday [Phillip] will implement other font rendering techniques, such as SDF-based text renderers. But for now, it’s quite the upgrade. The source code is available on GitHub.

A Look Inside The Smallest Possible PNG File

What’s inside a PNG file? Graphics, sure. But how is that graphic encoded? [Evan Hahn] shows you what goes into a single black pixel inside a 67-byte file. Why so many bytes? Well, that is exactly what the post is about.

You had to guess there is some overhead, right? There is an 8-byte header. Next up is a 25-byte metadata block. That single pixel takes 22 bytes, and then there is a 12-byte marker for the end of the file. Turns out, you could put a bit more in the file, and it would still take 67 bytes. The metadata lives in a chunk — a block of data with a length, a type, and a CRC. That’s why it takes 25 bytes to store the dimensions of the image: a chunk has to be at least 12 bytes long. The metadata includes the image dimensions, the bit depth, and so on.

The next chunk, of course, is the data. The data is compressed, but in the case of one pixel, compression is a misnomer. There are ten data bytes in the data chunk, and that doesn’t include the 12 bytes of chunk overhead, so that one pixel takes a whopping 22 bytes.

The end of file marker is another chunk with no data. The total? 67 bytes. However, you can add more than one bit and still wind up with 67 bytes. For all the details, check out the post.
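Putting the arithmetic in one place, here is a hedged C sketch of the framing. Every chunk costs 4 bytes of length, 4 bytes of type, and 4 bytes of CRC on top of its payload, and the payload sizes below are the ones from the post: 13 bytes of metadata and 10 bytes of compressed pixel data.

#include <stdio.h>

/* Illustration only: the byte budget of the 67-byte PNG.
   Each chunk is 4 (length) + 4 (type) + payload + 4 (CRC). */
int main(void)
{
   unsigned chunk_overhead = 4 + 4 + 4;             /* length + type + CRC */
   unsigned total = 8                               /* PNG signature */
                  + chunk_overhead + 13             /* IHDR: dimensions, bit depth, ... */
                  + chunk_overhead + 10             /* IDAT: data for one black pixel */
                  + chunk_overhead;                 /* IEND: empty end-of-file chunk */
   printf("total = %u bytes\n", total);             /* prints 67 */
   return 0;
}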

Luckily, it is easy to pronounce PNG. You can even use the format for circuit simulation.

Niklaus Wirth with the Lilith personal computer, which he developed in the 1970s. (Photo: ETH Zurich)

Remembering Niklaus Wirth: Father Of Pascal And Inspiration To Many

Although perhaps not as much of a household name as other pioneers of last century’s rapid evolution of computer hardware and the software running on it, Niklaus Wirth’s contributions put him right alongside those other giants. A familiar face both in his native Switzerland at ETH Zurich and at Stanford and other places around the world where computer history was written, Niklaus not only gave us Pascal and Modula-2, but also inspired countless other languages as well as their developers.

Sadly, Niklaus Wirth passed away on January 1st, 2024, at the age of 89. Until his death, he continued to work on the Oberon programming language, as well as its associated operating system, Oberon System, and the multi-process, SMP-capable A2 (Bluebottle) operating system that runs natively on x86, x86_64, and ARM hardware. He leaves behind a legacy that stretches from the 1960s to today, and it’s hard to think of any aspect of modern computing that wasn’t in some way influenced or directly improved by him.

Continue reading “Remembering Niklaus Wirth: Father Of Pascal And Inspiration To Many”

The World Of Web Browsers Is In A Bad Way

There once was a man who invented a means for publishing scientific documents using hypertext. He made his first documents available from his NeXT cube, and a lot of the academics who saw them thought it was a great idea. They took the idea, expanded it, and added graphics, and pretty soon people who weren’t scientists wanted to use it too. It became the Next Big Thing, and technology companies new and old wanted a piece of the pie.

You all know the next chapter of this story. It’s the mid-1990s, and Microsoft, having been caught on the back foot after pursuing The Microsoft Network as a CompuServe and AOL competitor, did an about-turn and set out to conquer the Web. Their tool of choice was Microsoft Internet Explorer 3, which, since it shipped with Windows 95 and every computer that mattered back then came with Windows 95, promptly entered a huge battle with Netscape’s Navigator browser. Web standards were in their infancy, so the two browsers battled each other by manipulating the underlying technologies on which the Web relied. Microsoft used their “Embrace and extend” strategy to try to Redmondify everything, and Netscape got lost in the wilderness with Netscape 4, a browser in which nightmarish quirks were the norm. By the millennium, Internet Explorer had won the battle, and though some of the more proprietary Microsoft web technologies had fallen by the wayside, we entered the new decade in a relative monoculture. Continue reading “The World Of Web Browsers Is In A Bad Way”

X86 ENTER: What’s That Second Parameter?

[Raymond Chen] wondered why the x86 ENTER instruction had a strange second parameter that seems to always be set to zero. If you’ve ever wondered, [Raymond] explains what he learned in a recent blog post.

If you’ve ever taken apart the output of a C compiler or written assembly programs, you probably know that ENTER is supposed to set up a new stack frame. Presumably, you are in a subroutine, and some arguments were pushed on the stack for you. The instruction saves the old frame pointer, points EBP at the new frame so those arguments can be addressed, and then adjusts the stack pointer to make room for your local variables. That local variable size is the first operand to ENTER: ENTER 16,0, for example, behaves roughly like PUSH EBP, MOV EBP,ESP, SUB ESP,16.

The reason you rarely see it set to a non-zero value is that the second operand, the lexical nesting level, exists for languages that are not as frequently seen these days. In a simple way of thinking, C functions live at a global scope. Sure, there are namespaces and methods for classes and instances. But you don’t normally have a C compiler that allows a function to define another function, right?

Turns out, gcc does support this as an extension (but not g++). However, looking at the generated code shows it doesn’t use ENTER for this, though it could. The idea is that a nested function can “see” any local variables that belong to the enclosing function. This works, for example, if you allow gcc to use its extensions:

#include <stdio.h>

void test()
{
   int a=10;
   /* nested function */
   void testloop(int n)
   {
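      /* 'a' lives in test()'s stack frame; gcc reaches it via a hidden static-chain pointer */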
      int x=a;
      while(n--) printf("%d\n",x);
   }
   testloop(3);
   printf("Again\n");
   testloop(2);
   printf("and now\n");
   a=33;
   testloop(5);
}

int main(void)
{
   test();
   return 0;
}

Continue reading “X86 ENTER: What’s That Second Parameter?”

The Sol-1: A 16-bit Computer In 74HC Logic With C Compiler And Unix-like OS

Sol-1 system pictured from the front. (Credit: Paulo Constantino)

While the concept of a computer system implemented in discrete logic ICs is by itself not among the most original ideas, the way some machines are executed certainly makes them stick out. This is the case with [Paulo Constantino]’s Sol-1, which not only looks extremely professional, but also comes with a lot of amenities that allow for system development, including a C compiler and assembler, a Unix-like OS (in development), DMA, and a whole host of interfaces to interact with the system and peripherals (serial, parallel, IDE, etc.). Not to mention a SystemVerilog model and an emulator, all of which can be found on [Paulo]’s GitHub.

More photos and videos can be found on [Paulo]’s YouTube channel, as well as the Sol-1 website, which shows off the intricate wire wrap work on the back of each PCB. In terms of the ISA, there are five general-purpose registers (one of them scratch), each of which can also be used as two 8-bit registers. Most operations are supported, except for floating point. For future improvements and additions, Sol-1’s OS will gain more features, and the first major software to be ported to the Sol-1 should be Colossal Cave Adventure and similar text-based adventure (dungeon) games.