Network Infrastructure And Demon-Slaying: Virtualization Expands What A Desktop Can Do

The original DOOM is famously portable — nearly any computer made in the last two decades, including those in printers, heart monitors, passenger vehicles, and routers, can almost certainly run a port of the iconic 1993 shooter. The more modern entries in the series are trickier to port, though. They generally need multi-core processors, discrete graphics cards, and gigabytes of memory, and it’ll be a long time before something like an off-the-shelf router has all of those components.

But with Proxmox, a specialized Debian-based Linux distribution, and a healthy amount of configuration, it’s possible to flip this idea on its head: getting a desktop computer capable of playing modern video games to take over the network infrastructure for a LAN instead, all with minimal impact on the overall desktop experience. In effect, it’s possible to have a router that can not only play DOOM but play 2020’s DOOM Eternal, likely with hardware most of us already have on hand.

The key that makes a setup like this work is virtualization. Although modern software makes it seem otherwise, not every program needs an eight-core processor and 32 GB of memory to itself. With that in mind, virtualization software splits a modern multi-core processor into groups of cores that can each act as an independent computer. These virtual computers, or virtual machines (VMs), can be assigned not just one or more processor cores of their own but also reserved portions of memory, as well as other hardware like peripherals and disk drives.

Proxmox itself is a version of Debian that bundles a number of tools to streamline this process, and it installs on a PC in essentially the same way as any other Linux distribution. Once installed, it offers LXC for containerization, KVM for full-fledged virtual machines, and an intuitive web interface, allowing containers and VMs to be quickly set up, deployed, backed up, removed, and even migrated to other Proxmox installations. Continue reading “Network Infrastructure And Demon-Slaying: Virtualization Expands What A Desktop Can Do”
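Everything the web interface does is also exposed over Proxmox’s REST API, so setups like this can be scripted. Here’s a minimal sketch using the third-party proxmoxer Python library; the host, credentials, node name, and VM ID are placeholders, and a real gaming VM would need more (GPU passthrough, disks, and so on) than shown here:

```python
# A minimal sketch of creating a KVM guest through Proxmox's REST API
# using the third-party proxmoxer library (pip install proxmoxer).
# Host, credentials, node name, and VM ID below are placeholders.
from proxmoxer import ProxmoxAPI

# Connect to the same API the web interface uses
proxmox = ProxmoxAPI(
    "proxmox.local", user="root@pam", password="secret", verify_ssl=False
)

# Create a VM with two dedicated cores and 4 GB of reserved memory,
# leaving the rest of the host's resources for other guests
proxmox.nodes("pve").qemu.post(
    vmid=100,
    name="router-vm",
    cores=2,
    memory=4096,                    # megabytes
    net0="virtio,bridge=vmbr0",     # attach to the default bridge
)

# Boot it
proxmox.nodes("pve").qemu(100).status.start.post()
```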

Version Control To The Max

There was a time when version control was an exotic idea. Today, Git and a handful of other tools let developers rewind the clock or work on different versions of the same thing with very little effort. I’m here to encourage you not only to use version control but to go a step further, at least for important projects.

My First Job

The QDP-100 with — count ’em — two 8″ floppies (from an ad in Byte magazine)

I remember my first real job back in the early 1980s. We made a particular type of sensor that had a 6805 CPU onboard and, of course, had firmware. We did all the development on physically big CP/M machines with the improbable name of Quasar QDP-100s. No, not that Quasar. We’d generate a hex file, burn an EPROM, test, and eventually, the code would make it out in the field.

Of course, you always have to make changes. We might send a technician out with a tube full of EPROMs or, in an emergency, buy the EPROMs space on a Greyhound bus. Nothing like today.

I was just getting started, and the guy who wrote the code for those sensors wasn’t much older than me. One day, we got a report that something was misbehaving out in the field. I asked him how we knew what version of the code was on the sensor. The blank look I got back worried me. Continue reading “Version Control To The Max”
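That blank look is exactly why it pays to bake an identifiable version into every build. As a hedged illustration (the file and macro names here are hypothetical), a small Python build step can stamp the current Git revision into a header that the firmware compiles in and can report back from the field:

```python
# A minimal sketch of a build step that bakes the exact source revision
# into the firmware, so a device in the field can always report what it
# is running. The header file and macro names are hypothetical.
import subprocess

def git_version() -> str:
    """Return something like 'v1.4-12-g3f2c1aa-dirty' for this checkout."""
    return subprocess.check_output(
        ["git", "describe", "--tags", "--always", "--dirty"],
        text=True,
    ).strip()

if __name__ == "__main__":
    with open("fw_version.h", "w") as header:
        header.write("#ifndef FW_VERSION_H\n#define FW_VERSION_H\n")
        header.write(f'#define FW_VERSION "{git_version()}"\n')
        header.write("#endif\n")
```

Run before each build, this guarantees the binary in the EPROM can be traced back to the exact commit that produced it, including whether the working tree was dirty at the time.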

Putting A Pi In A Container

Docker and other containerization applications have changed a lot about the way developers create new software as well as how they maintain virtual machines. Not only does containerization reduce the system resources needed for something that might otherwise be done in a virtual machine, but it standardizes the development environment for software and dramatically reduces the complexity of deploying on different computers. There are some other tricks up its sleeve as well, and this project called PI-CI uses Docker to containerize an entire Raspberry Pi.

The Pi container emulates an entire Raspberry Pi from the ground up, allowing anyone who wants to deploy software on one to test it without needing actual hardware. All of the configuration can be done from inside the container. Once setup is complete and the desired software installed, the container can be converted to an .img file that can be written to a microSD card and installed on real hardware, with support for the Pi models 3, 4, and 5. There’s also support for Ansible, an automation and configuration-management tool that makes administering a cluster or array of computers easier.
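As a rough sketch of what driving this looks like from code, the snippet below uses Docker’s Python SDK; the ptrsr/pi-ci image name and its start command are taken from the project’s README as of this writing, so treat the details as assumptions and check the repo before relying on them:

```python
# A rough sketch of launching PI-CI via the Docker SDK (pip install docker).
# The image name and 'start' command follow the project's README as of
# this writing; verify against the repo before use.
import docker

client = docker.from_env()

# Boot the emulated Pi; a host directory is mounted so the emulated
# disk image persists between runs
container = client.containers.run(
    "ptrsr/pi-ci",
    command="start",
    volumes={"/home/user/pi-ci/dist": {"bind": "/dist", "mode": "rw"}},
    detach=True,
)

# Follow the emulated Pi's boot messages
for line in container.logs(stream=True):
    print(line.decode(), end="")
```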

Docker can be an incredibly powerful tool for developing and deploying software, and tools like this make the process as straightforward as possible. It does have a learning curve, though, since sharing operating system tools instead of virtualizing hardware can take some time to wrap one’s mind around. If you’re new to the game, take a look at this guide to setting up your first Docker container.

Docker-Powered Remote Gaming With Games On Whales

Cloud gaming services allow even relatively meager devices like set-top boxes and cheap Chromebooks to play the latest and greatest titles. It’s not perfect, of course — latency is the number one issue, since the player’s controller inputs need to be sent out to the server — but if you’ve got a fast enough connection it’s better than nothing. Interested in experimenting with the tech on your own terms? The open source Games on Whales project is here to make that a reality.

As you might have guessed from the name, Games on Whales uses Linux and Docker as core components in its remote gaming system. With the software installed on a headless server, multiple users can create virtual desktop environments on the same machine, with each spawning as a separate process on the host computer. This means that all of the hardware of the host can be shared without needing to do anything complicated like setting up GPU pass-through. The main Docker container can spin up more containers as needed.
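That last trick is a common pattern sometimes called sibling containers: a controller mounts the host’s Docker socket and asks the Docker daemon to start new containers alongside itself. The sketch below, in Python with a hypothetical per-session image, illustrates the general idea only and is not Games on Whales’ actual implementation:

```python
# A generic sketch of the sibling-container pattern described above: a
# controller process inside one container uses the host's Docker socket
# (mounted at /var/run/docker.sock) to spawn one container per session.
# The per-session image name is hypothetical.
import docker

# from_env() picks up the mounted socket just as it would on the host
client = docker.from_env()

def start_session(user: str) -> str:
    """Spawn a dedicated desktop/streaming container for one user."""
    container = client.containers.run(
        "example/virtual-desktop",       # hypothetical session image
        name=f"session-{user}",
        detach=True,
        devices=["/dev/dri:/dev/dri"],   # share the host GPU render nodes
    )
    return container.id

print("started session:", start_session("alice"))
```

Because each session is just another container sharing the host’s devices, there’s no need for the kind of exclusive GPU pass-through a full virtual machine would require.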

Of course, there are limits to what any given hardware configuration can support, both in the number of concurrent users and the demands of each stream. But for someone who wants to host a server for their friends, or something even simpler like not having to put a powerful gaming PC in the living room, this is a real game-changer. For those not yet up to speed on Docker, we recently featured a guide on getting started with this powerful tool, since it does take some practice at first.

A Guide To Running Your First Docker Container

While most of us have likely spun up a virtual machine (VM) for one reason or another, venturing into the world of containerization with software like Docker is a little trickier. The tools Docker provides are powerful, maintain many of the benefits of virtualization, and don’t use as many system resources as a VM, but it can be harder to get the hang of setting up and maintaining containers than it generally is to run a few virtual machines. If you’ve been hesitant to try it out, this guide to getting a Docker container up and running is worth a look.

The guide goes over the basics of how Docker shares system resources between containers, including some discussion of the difference between images and containers, where containers can store files on the host system, and how they use networking resources. From there the guide touches on installing Docker on a Debian Linux system. But where it really shines is in demonstrating how to use Docker Compose to configure a container and get it running. Docker Compose reads a YAML file that defines a set of containers and their options, which makes deploying those containers to other machines fairly straightforward, and understanding it is key to making your experience learning Docker a smooth one.
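If you’d rather poke at those same concepts from code, Docker’s official Python SDK exposes them directly. Here’s a hedged sketch (the image and host paths are arbitrary examples) showing the image-versus-container distinction along with the kinds of port and storage options a Compose file would declare:

```python
# A minimal sketch of Docker's core concepts via its official Python SDK
# (pip install docker). Image choice and host paths are arbitrary examples.
import docker

client = docker.from_env()

# Images are read-only templates; pulling one doesn't run anything yet
image = client.images.pull("nginx:alpine")

# Containers are writable, running instances of an image, with ports and
# storage mapped in much as a Compose file would declare them
container = client.containers.run(
    image,
    detach=True,
    ports={"80/tcp": 8080},   # host port 8080 -> container port 80
    volumes={"/srv/www": {"bind": "/usr/share/nginx/html", "mode": "ro"}},
)
print(container.name, container.status)
```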

While the guide works through setting up a self-hosted document management program called Paperless, it’s pretty easy to extend the approach to other services you might want to host yourself. For example, the DNS-level ad-blocking software Pi-Hole, which is generally run on a Raspberry Pi, can be containerized and run on a computer or server you might already have in your home, freeing up your Pi to do other things. And although it’s a little more involved, you can always build your own containers too, as our own [Ben James] discussed back in 2018.
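As a sketch of that Pi-Hole example using the same Python SDK: the pihole/pihole image is the project’s official one, but the environment variable names below follow its documentation at the time of writing and may change between releases, so check the current README first:

```python
# A hedged sketch of running Pi-Hole in a container via the Docker SDK.
# Environment variable names follow the pihole/pihole image docs at the
# time of writing and may differ between releases.
import docker

client = docker.from_env()

pihole = client.containers.run(
    "pihole/pihole:latest",
    detach=True,
    ports={
        "53/tcp": 53,     # DNS queries
        "53/udp": 53,
        "80/tcp": 8081,   # web admin interface, moved off the host's port 80
    },
    environment={"TZ": "America/New_York", "WEBPASSWORD": "changeme"},
    volumes={"/srv/pihole": {"bind": "/etc/pihole", "mode": "rw"}},
    restart_policy={"Name": "unless-stopped"},  # survive host reboots
)
print(f"Pi-Hole running as {pihole.name}")
```

Point your router’s DHCP settings at the host’s IP for DNS and every device on the LAN gets ad-blocking, no Raspberry Pi required.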

Apple System 7… On Solaris?

While the Unix operating systems Solaris and HP-UX are still in active development, they’re not particularly popular anymore and are mostly relegated to enterprise and data center environments. They did enjoy a peak of popularity in the ’90s during the “wild west” era of windowed operating systems, though. This was a time when more than two mass-market operating systems were commercially available, with many companies fighting for market share. This led to a number of efforts to get software written for one operating system to run on others, whether by porting it directly or through some compatibility layer. Surprisingly enough, in this era it was possible to run an entire instance of Mac System 7 within either of these two Unix operating systems, using an officially supported piece of Apple software.

That software was the Macintosh Application Environment (MAE), an effort by Apple to bring Macintosh System 7 applications to various Unix-based operating systems, including Solaris and HP-UX. This was before Apple’s OS was itself Unix-compliant, and MAE provided a compatibility layer that translated Macintosh system calls and application programming interfaces (APIs) into their Unix equivalents, allowing Mac software to function within the Unix environment. [Lunduke] outlines a lot of MAE’s features in his post, including some details of the “scaffolding” that allowed the 68k processor to be emulated efficiently on the hardware of the time, the contents of the user manual, and even the memory management and layout.

What’s really jarring to anyone only familiar with Apple’s modern “walled garden” approach is that this was an Apple-supported compatibility layer for other systems. At the time, though, Apple wasn’t the technology giant it is today and had to play by a different set of rules to stay viable. Quite the opposite, in fact: the company almost went out of business in the mid-’90s, so having its software run on as many machines as possible would have been a real perk. And while this era had major issues with cross-platform compatibility, some of the software that attempted to solve those problems is still in active development today.

Thanks to [Stephen] for the tip!

An Old Netbook Spills Its Secrets

For a brief moment in the late ’00s, netbooks dominated the low-cost mobile computing market. These were small, inexpensive, low-power laptops, some tiny enough to have only a seven-inch display, and usually with hardware that was extremely limited even for the time. There aren’t many reasons to own a machine of this era today, since even the cheapest tablets and Chromebooks are typically far more capable than the Atom-based devices of over a decade ago. One set of these netbooks from that time has a secret up its sleeve, though: Phoenix Hyperspace.

Hyperspace was envisioned as a way for these slow, low-power computers to instantly boot or switch between operating systems. [cathoderaydude] wanted to figure out what made this piece of software tick, so he grabbed one of the only netbooks it was ever installed on, a Samsung N210. The machine has both Windows 7 and a custom Linux distribution installed, and with Hyperspace it’s possible to switch almost seamlessly between them in about six seconds, effectively instant for the time. Continue reading “An Old Netbook Spills Its Secrets”