The IPv4 We Didn’t Get

If you have ever read science fiction, you’ve probably seen “alternate history” stories. You know, where Europeans didn’t discover the New World until the 19th century, or the ancient Egyptians stumbled upon electricity. Maybe those things happened in an alternate universe. [BillPG] has an alternate history tale for us that imagines IPv6 was shot down and a protocol called IPv4x became prominent instead.

The key idea is that in 1993, the IP-Next-Generation working group could have decided that any solution that would break the existing network wouldn’t work. There is precedent. Stereo records play on mono players and vice versa. Color TV signals play on black and white sets just as well as black and white signals play on color TVs. It would have made perfect sense.

How could this be? The idea was to make everyone who “owns” an IPv4 address the stewards of a 96-bit sub-address block. IPv4x-aware equipment extracts the entire 128-bit address. IPv4-only equipment routes the packet to the controlling IPv4 address. Wasteful? Sure. Most people don’t need 79 octillion addresses. But if everyone has that many, then why not?
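As a rough sketch of the scheme (IPv4x is fictional, so the field layout and names below are our own illustration): a 128-bit address whose top 32 bits are a legacy IPv4 address, with the remaining 96 bits as that owner’s sub-block. IPv4-only gear would route on the top 32 bits alone:

```python
import ipaddress

def split_ipv4x(addr128: int):
    """Split a hypothetical 128-bit IPv4x address into the legacy
    32-bit IPv4 prefix and the 96-bit sub-address block."""
    legacy = ipaddress.IPv4Address(addr128 >> 96)   # top 32 bits
    sub_block = addr128 & ((1 << 96) - 1)           # low 96 bits
    return legacy, sub_block

# A host "under" 203.0.113.7, with sub-address 42 in its 96-bit block:
addr = (int(ipaddress.IPv4Address("203.0.113.7")) << 96) | 42
legacy, sub = split_ipv4x(addr)
print(legacy, sub)  # legacy routers forward on 203.0.113.7 alone
```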

The fictional timeline has DNS and DHCP, along with dial-up stacks, changing to accommodate the new addresses. Again, you had to assume some parts of the network were still IPv4-only. DNS would return both addresses, and it was up to you to pick the IPv4x address if you understood it.

Your ISP would probably not offer you the entire extra space. A regional router could handle all traffic for your neighborhood and then direct it to your specific 128-bit address or your pool of addresses, if you have multiple devices. No need for NAT to hide your devices, nor strange router configurations to punch traffic through.

Of course, back in the real world, we have two incompatible systems: IPv4 and IPv6. IPv6 adoption has been slow and painful. We wondered why [BillPG] wrote about this future that never was. Turns out, he’s proposed a gateway that IPv6 hosts can provide to allow access from IPv4-only networks. Pretty sneaky, but we can admire it. If reading all this makes you wonder what happened to IPv5, we wondered that, too.

The Shockley 4-Layer Diode In 2026

The physicist William Shockley is perhaps today best known for three things: his role in the invention of the transistor, his calamitous management of Shockley Semiconductor, which led to a mass defection of employees and precipitated the birth of the Silicon Valley we know, and his later descent into promoting eugenics. This was not the sum of his work, though, and [David Prutchi] has been experimenting with a now-mostly-forgotten device that bears the Shockley name (PDF), after finding one used in an early heart pacemaker circuit. His findings are both comprehensive and fascinating.

The Shockley diode, or 4-layer diode as it later became known, is as its name suggests a two-terminal device with a 4-layer NPNP structure. It can be modeled as a pair of complementary transistors in parallel with a reverse-biased diode, and the avalanche breakdown characteristics of that diode when a particular voltage is applied to it provide the impetus to turn on the two transistors. This makes it a voltage-controlled switch that activates when the voltage across it reaches that value.

The PDF linked above goes into the Shockley diode applications, and in them we find a range of relaxation oscillators, switches, and logic circuits. The oscillators in particular could be made with the barest minimum of components, important in a time when each semiconductor device could be very expensive. It may have faded into obscurity as it was superseded by more versatile 4-layer devices such as the PUJT or silicon-controlled switch and then integrated circuits, but he makes the point that its thyristor cousin is still very much with us.
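As a back-of-the-envelope illustration of why so few parts are needed (component values here are our own, not from the PDF): in the classic relaxation oscillator, a capacitor charges through a resistor toward the supply until it reaches the diode’s breakover voltage, the diode snaps on and dumps the capacitor, and the cycle repeats. Ignoring the discharge time and holding voltage, the period follows the standard RC charging law:

```python
import math

def relaxation_period(vs, vbo, r, c):
    """Approximate period of a Shockley-diode relaxation oscillator:
    the time for C, charging through R toward supply vs, to reach the
    breakover voltage vbo. Ignores discharge time and holding voltage."""
    return r * c * math.log(vs / (vs - vbo))

# e.g. 12 V supply, 8 V breakover, 100 kilohm, 100 nF -> roughly 11 ms
t = relaxation_period(12.0, 8.0, 100e3, 100e-9)
print(f"{t * 1e3:.2f} ms")
```

One diode, one resistor, one capacitor: a complete oscillator, which is exactly the appeal when every transistor costs real money.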

This appears to be the first time we’ve featured a 4-layer diode, but we’ve certainly covered the genesis of the transistor in the past.

Diagnosing A Mysterious Fault With A Commodore 1541 Disk Drive

Some PCB corrosion on the bottom of the 1541 drive. (Credit: TheRetroChannel, YouTube)

Recently [TheRetroChannel] came across an interesting failure mode on a Commodore 1541 5.25″ floppy disk drive, in the form of the activity LED blinking just once after power-up while the drive motor spun continuously. Since the flash codes that Commodore implemented and bothered to document start at two flashes (for a zero-page RAM fault), this raised the question of what fault this drive had, and whether a single flash is some kind of undocumented error code.

A cursory check showed that the heads were okay and not shorted, ruling out a common fault with these well-used floppy mechanisms. Cleaning the corrosion off the IC sockets and other basic remedies were tried next without any change, nor did pulling ICs provoke any of the documented error codes, but all of this helped narrow down the potential causes, especially once swapping in known-good ICs also failed to make a difference. One possibility was that the drive was boot-looping, as the activity LED lights up once on boot.

Some probing around with an oscilloscope, comparing the faulty drive against a working one, seemed to point to a faulty RAM IC, but mid-probing the faulty drive suddenly initialized successfully. After some more poking around, it appeared that the drive was fine once it had a chance to warm up, which only deepened the mystery.

The drive did talk to a C64 with a diagnostic cartridge at this point, but would often glitch out. Ultimately it appears that a dodgy IC socket and a few bad traces were to blame for the behavior, making it an ‘obvious in hindsight’ repair. The bottom of the PCB had some clear corrosion on it, but the affected traces were apparently still hanging on for dear life, with the drive still initializing once warmed up.


The LEGO-lookalike displaying [Paul]'s dashboard

LEGO Space Computer Made Full Size, 47 Years On

There’s just something delightful about scaled items. Big things shrunk down, like LEGO’s teeny tiny terminal brick? Delightful. Taking that terminal brick and scaling it back to a full-sized computer? Even better. That’s what designer [Paul Staal] has done with his M2x2 project.

In spite of the name, it actually has a Mac Mini M4 as its powerful beating heart. An M2 might have been more on-brand, but it’s probably a case of wanting the most horsepower possible in what [Paul] apparently uses as his main workstation these days. The build itself is simple, but has some great design details. As you probably expected, the case is 3D printed. You may not have expected that he can use the left stud as a volume control, thanks to an IKEA Symfonisk remote hidden beneath. The right stud comes off to allow access to a wireless charger.

The minifigs aren’t required to charge those AirPods, but they’re never out of place.

The 7″ screen can display anything, but [Paul] mostly uses it either for a custom Home Assistant dashboard or to display an equalizer, both loosely styled after the ‘screen’ on the original brick. We have to admit, as cool as it looked with the minifigs back in the day, that sharp angle to the screen isn’t exactly ergonomic for humans.

Perhaps the best detail was putting LEGO-compatible studs on top of the 10:1 scaled up studs, so the brick that inspired the project can sit securely atop its scion. [Paul] has provided a detailed build guide and the STLs necessary to print off a brick, should anyone want to put one of these nostalgic machines on their own desk.

We’ve covered the LEGO computer brick before, but going the other way–putting a microcontroller and display in the brick itself to run DOOM. We’ve also seen it scaled up before, but that project was a bit more modest in size and computing power.

C64 Gets A Modern Interactive Disassembler

If you want to pull apart a program to see how it ticks, you’re going to need a disassembler. [Ricardo Quesada] has built Regenerator 2000 for just that purpose. It’s a new interactive disassembler for the Commodore 64 platform.

Naturally, Regenerator 2000 is built with full support for the 6502 instruction set, including undocumented op-codes as well. It’s able to automatically create labels and comments and can be paired with the VICE C64 emulator for live debugging. You can do all the usual debug stuff like inspecting registers, stepping through code, and setting breakpoints and watchpoints when you’re trying to figure out how something works. It can even show you sprites, bitmaps, and character sets right in the main window.
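At its core, any 6502 disassembler is a table walk over opcode bytes; Regenerator 2000’s value is in the labeling and interactivity layered on top. A toy sketch of that core (our illustration, showing only a handful of the 151 documented opcodes):

```python
# opcode -> (mnemonic prefix, operand byte count); a tiny 6502 subset
OPCODES = {
    0xA9: ("LDA #$", 1),   # load accumulator, immediate
    0x8D: ("STA $",  2),   # store accumulator, absolute
    0x4C: ("JMP $",  2),   # jump, absolute
    0x60: ("RTS",    0),   # return from subroutine
}

def disasm(code: bytes, org: int = 0xC000):
    """Yield (address, text) lines for a run of 6502 machine code."""
    pc = 0
    while pc < len(code):
        mnem, n = OPCODES[code[pc]]             # KeyError = unknown opcode
        operand = int.from_bytes(code[pc + 1:pc + 1 + n], "little")
        text = mnem + (f"{operand:0{n * 2}X}" if n else "")
        yield org + pc, text
        pc += 1 + n

# LDA #$41 / STA $0400 / RTS -- put an 'A' in the top-left of the screen
for addr, line in disasm(bytes([0xA9, 0x41, 0x8D, 0x00, 0x04, 0x60])):
    print(f"{addr:04X}  {line}")
```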

Files are on GitHub if you’re ready to dive in. You might find this tool to be a useful companion to the C64 assembly tools we’ve featured previously, as well. If you’re pulling off your own retro development hacks, be sure to notify the tips line.

[Thanks to Stephen Waters for the tip!]

A screenshot of the world's first 64kB boomer shooter

QUOD Is A Quake-Like In Only 64kB

The demoscene is still alive and well, and the proof is in this truly awe-inspiring game demo by [daivuk]: a Quake-like “boomer shooter” squeezed into a Windows executable of only 64 kB that he calls “QUOD”. We’ve included the full explanation video below, but before you check out all the technical details, consider playing the game. It’ll make his explanations even more impressive.

OK, what’s so impressive? Well, aside from the fact that this is a playable 3D shooter in 64kB, with multiple enemies, multiple levels, oodles of textures, running, jumping et cetera–it’s so Quake-like he’s using TrenchBroom to make the levels. Of course he’s reprocessing them into a more space-efficient, optimized format. Yeah, unlike the famous .kkrieger and a lot of other demos in the 64kB space, this isn’t all procedurally generated. [daivuk] did make his own image editing program for procedurally generated textures, though. Which makes sense: as a PNG, the QUOD logo is probably half the size of the (compressed) executable.

The low-poly models are created in Blender, and all are designed to be symmetric–having the engine mirror the meshes saves 50% of the vertex data. Blender just exports half of each low-poly mesh, and just as he wrote his own image editor, he has his own bespoke model tool. This allows tiling model elements, as well as handling the bones and poses used to keyframe the models’ animations.
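The mirroring trick is simple to sketch (this is our illustration of the general technique, not [daivuk]’s actual code): store only the vertices with x >= 0, then reflect them across the x = 0 plane at load time, taking care not to duplicate vertices sitting on the seam:

```python
def mirror_half_mesh(half_vertices):
    """Rebuild a full symmetric mesh from its x >= 0 half by reflecting
    each vertex across the x = 0 plane. Seam vertices (x == 0) are
    shared between both halves, so they are not duplicated."""
    full = list(half_vertices)
    full += [(-x, y, z) for (x, y, z) in half_vertices if x != 0]
    return full

# 3 stored vertices (one on the seam) become a 5-vertex full mesh
half = [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 1.0)]
print(mirror_half_mesh(half))
```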

Audio is treated similarly to textures and meshes: built up at runtime from stored data and a layered series of effects. When you realize all the sounds were put together in his sound tool from square and sine waves, it’s all the more impressive. He’s also got an old-style tracker for creating the music. All of these tools output byte arrays that get embedded directly in the game code.
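The principle (our sketch, not his actual tool) is that a sound effect is just a few lines of math at runtime: generate a raw waveform, then layer effects such as an amplitude envelope on top, so only a handful of parameters ever need storing:

```python
import math

RATE = 22050  # samples per second

def tone(freq, dur, square=False):
    """Generate a sine or square wave as a list of float samples."""
    n = int(RATE * dur)
    out = []
    for i in range(n):
        s = math.sin(2 * math.pi * freq * i / RATE)
        out.append(math.copysign(1.0, s) if square else s)
    return out

def decay(samples):
    """Layer on a linear fade-out envelope -- a 'laser zap' feel."""
    n = len(samples)
    return [s * (1 - i / n) for i, s in enumerate(samples)]

# The whole effect is reproducible from three stored parameters
zap = decay(tone(880, 0.2, square=True))
```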

The video also gets into some of his optimization techniques; we like his use of a map file, analyzed with a Python tool, to find the exact size of each game element and verify that his optimizations actually helped. One thing he notes is that his optimizations are all for space, not for speed. Except, perhaps, for one thing: [daivuk] created a new language and virtual machine for the game, which seems downright extravagant. It actually makes sense, though, as the virtual machine can be optimized for the limits of the game, as he explains starting at about 20 minutes into the video. Apparently it saved a whole 2 kB, which seems like nothing these days but actually let [daivuk] fit an extra level into his 64 kB limit. Sure, it’s still bigger than Quake13k–and how did we never cover that?–but you get a lot more game, too.
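Linker map files make this kind of size accounting straightforward; a hedged sketch of the idea (the map format below is a simplified invention, and real formats vary by linker, but the accounting is the same: group bytes by symbol and sort):

```python
import re
from collections import defaultdict

def symbol_sizes(map_text):
    """Sum per-symbol sizes from simplified linker-map lines of the
    form '<address> <size> <symbol>', largest contributors first."""
    sizes = defaultdict(int)
    for m in re.finditer(r"0x[0-9a-f]+\s+0x([0-9a-f]+)\s+(\S+)", map_text):
        sizes[m.group(2)] += int(m.group(1), 16)
    return sorted(sizes.items(), key=lambda kv: -kv[1])

demo = """
0x00401000 0x1f40 render_level
0x00403000 0x0800 vm_dispatch
0x00404000 0x2200 texture_gen
"""
for name, size in symbol_sizes(demo):
    print(f"{size:6d}  {name}")
```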

So, to recap: [daivuk] didn’t just make a game with an impressively tiny size on disk, he made the entire toolchain, and a language for it to boot. If you think this is over-optimized, check out Wolfenstein in 600 lines of AWK. Of course, in spite of the 1980s file size, this needs modern hardware to run. You can get surprising graphics performance from a fraction of that, like this ATtiny sprite engine.

Thanks to [Keith Olson] for the tip, which probably took up more than 64kB on our tips line.


Porting Super Mario 64 To The Original Nintendo DS

Considering that the Nintendo DS already has its own remake of Super Mario 64, one might be tempted to think that porting the original Nintendo 64 version would be a snap. Why you’d want to do this is left as an exercise to the reader, but whether due to nostalgia or out of sheer spite, the question of how easy this would be remains. Correspondingly, [Tobi] figured that he’d give it a shake, with interesting results.

Of note is that someone else already ported SM64 to the DSi, a later version of the DS with more processing power, more RAM, and other changes. The DSi’s 16 MB of RAM is required because that port loads the entire game into RAM, rather than doing on-demand reads from the cartridge. Streaming from the cartridge is how the N64 made do with just 4 MB of RAM, which is exactly as much RAM as the original DS has. Ergo, it can be made to work.

The key here is NitroFS, which allows you to implement a similar kind of segmented loading to what the N64 uses. With this, [Hydr8gon]’s DSi port could be taken as the basis and its data crammed into NitroFS, enabling the game to mostly run smoothly on the original DS.
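The contrast between the two approaches can be modeled in a few lines (a toy sketch with hypothetical segment names, not the port’s actual code): instead of reading the whole game image into RAM at boot, each segment is pulled from backing storage only when the game first needs it, evicting older segments to stay under the RAM budget:

```python
class SegmentLoader:
    """Toy model of on-demand segmented loading: segments stay in
    backing storage (the cartridge filesystem) and are pulled into a
    limited RAM cache only when requested."""
    def __init__(self, storage, ram_limit):
        self.storage = storage      # dict: name -> bytes ("cartridge")
        self.ram = {}               # segments actually resident
        self.ram_limit = ram_limit  # total bytes of RAM allowed

    def load(self, name):
        if name not in self.ram:
            data = self.storage[name]  # on-demand read from "cartridge"
            while sum(map(len, self.ram.values())) + len(data) > self.ram_limit:
                self.ram.pop(next(iter(self.ram)))  # evict oldest segment
            self.ram[name] = data
        return self.ram[name]

cart = {"level1": b"\x00" * 3000, "level2": b"\x00" * 3000}
loader = SegmentLoader(cart, ram_limit=4096)  # the "4 MB", scaled down
loader.load("level1")
loader.load("level2")          # level1 gets evicted to make room
print(sorted(loader.ram))      # only the active segment is resident
```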

There are still some ongoing issues to resolve before the project is released, mostly related to sound support and general stability. If you have a flash cartridge for the DS, this means that soon you too should be able to play the original SM64 on real hardware, as though it’s a quaint portable N64.
