Docker-Powered Remote Gaming With Games On Whales

Cloud gaming services allow even relatively meager devices like set-top boxes and cheap Chromebooks to play the latest and greatest titles. It’s not perfect, of course — latency is the number one issue, since the player’s controller inputs have to travel out to the server and the rendered video has to make the trip back — but if you’ve got a fast enough connection it’s better than nothing. Interested in experimenting with the tech on your own terms? The open source Games on Whales project is here to make that a reality.

As you might have guessed from the name, Games on Whales uses Linux and Docker as core components in its remote gaming system. With the software installed on a headless server, multiple users can create virtual desktop environments on the same machine, with each spawning as a separate process on the host computer. This means that all of the hardware of the host can be shared without needing to do anything complicated like setting up GPU pass-through. The main Docker container can spin up more containers as needed.
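
To make that a little more concrete, here is a rough sketch of the pattern in plain Docker terms: a common way for the coordinating container to launch per-user session containers is to hand it the host’s Docker socket, and sharing the GPU without full passthrough comes down to passing the card’s device nodes through. The image name, port, and exact mounts below are placeholders rather than the project’s documented invocation; check the Games on Whales documentation for the real deployment files.

```bash
# Illustrative only: image name, port, and mounts are placeholders, not the
# project's documented setup. The two tricks described above are visible here:
# the host Docker socket lets this container start sibling containers for each
# user session, and the /dev/dri render nodes let those sessions share the GPU
# without full passthrough.
docker run -d --name gow \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --device /dev/dri \
  --device /dev/uinput \
  -p 47989:47989 \
  example/games-on-whales
```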

Of course there are limits to what any given hardware configuration can support in terms of the number of concurrent users and the demands of each stream. But for someone who wants to host a server for their friends, or something even simpler like not having to put a powerful gaming PC in the living room, this is a real game-changer. For those not up to speed on Docker yet, we recently featured a guide on getting started with this powerful tool, since it does take some practice to wrap one’s mind around at first.

A Guide To Running Your First Docker Container

While most of us have likely spun up a virtual machine (VM) for one reason or another, venturing into the world of containerization with software like Docker is a little trickier. The tools Docker provides are powerful, retain many of the benefits of virtualization, and don’t use as many system resources as a VM, but getting the hang of setting up and maintaining containers can be harder than simply running a few virtual machines. If you’ve been hesitant to try it out, this guide to getting a Docker container up and running is worth a look.

The guide goes over the basics of how Docker works to share system resources between containers, including some discussion of the difference between images and containers, where containers can store files on the host system, and how they use networking resources. From there the guide touches on installing Docker on a Debian Linux system. But where it really shines is demonstrating how to use Docker Compose to configure a container and get it running. Docker Compose describes a number of containers and their options in a single file, which makes deploying those containers to other machines fairly straightforward, and understanding it is key to making your experience learning Docker a smooth one.
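
If you haven’t seen one before, a Compose file is just YAML. The sketch below is a deliberately minimal example, with the stock nginx image standing in for a real self-hosted service, to show the moving parts: a named service, the image it runs, a port mapping, and a named volume for persistent data.

```bash
# A minimal docker-compose.yml, using the stock nginx image as a stand-in for
# a real self-hosted service; the guide's Paperless setup follows the same
# pattern with more services and environment variables.
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine                 # any image from a registry
    ports:
      - "8080:80"                       # host port 8080 -> container port 80
    volumes:
      - web-data:/usr/share/nginx/html  # named volume so files survive restarts
    restart: unless-stopped

volumes:
  web-data:
EOF

docker compose up -d   # create and start everything described in the file
docker compose down    # tear it down again (the named volume is kept)
```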

While the guide goes through setting up a self-hosted document management program called Paperless, it’s pretty easy to extend the same approach to other services you might want to host yourself. For example, the DNS-level ad-blocking software Pi-hole, which is generally run on a Raspberry Pi, can be containerized and run on a computer or server you might already have in your home, freeing up your Pi to do other things. And although it’s a little more involved, you can always build your own containers too, as our own [Ben James] discussed back in 2018.
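
As a sketch of what that migration looks like, something along these lines runs Pi-hole as a container on whatever box you already have. The image name and mappings follow the project’s published container as best we recall, but check the current README, since the supported ports and environment variables have shifted between releases.

```bash
# Pi-hole in a container: DNS on the standard port 53, the admin interface
# remapped to 8081 so it won't fight with anything already on port 80, and a
# named volume so settings survive upgrades. Verify these against the current
# pihole/pihole documentation before relying on them.
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8081:80 \
  -e TZ="America/New_York" \
  -v pihole-etc:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole:latest
# Then point your router (or individual devices) at this machine for DNS.
```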

BASIC Classroom Management

While we don’t see it used very often these days, BASIC was fairly revolutionary in bringing computers to the masses. It was one of the first high-level languages to catch on, and it made computers useful for those who didn’t want (or have the time) to program them in something more complex. But that doesn’t mean it wasn’t capable of getting real work done — this classroom management software built in the language illustrates its capabilities.

Written by [Mike Knox], father of [Ethan Knox] aka [norton120], for his classroom in 1987, the programs were meant to automate away much of the drudgery of classroom work. They include tools for generating random seating arrangements, tracking attendance, and other administrative tasks, as well as tools aimed more squarely at the teacher, like curving test grades, tracking grades, and other tedious jobs that would normally have been done by hand at the time. With how prevalent BASIC was back then, this would have been a powerful tool for any educator with a standard desktop computer and a floppy disk drive.

Since most people likely don’t have an 80s-era x86 machine on hand capable of running this code, [Ethan] has also included a Docker container to virtualize the environment for anyone who wants to try out his father’s old code. We’ve often revisited some of our own BASIC programming from back in the day, as our own [Tom Nardi] explored a few years ago.

Modeling Network Latency

The self-hosting community is an interesting and useful corner of the Internet dedicated to pulling one’s own services and data out of the cloud and hosting them on servers one controls, often on hardware that can be physically touched. With that kind of network usage, it’s not uncommon for people to build their own routers, firewalls, and other network support systems from the ground up. And if you go deep enough, that might even include a home lab dedicated to testing and improving the network’s various layers. This piece of software helps simulate network latency to more accurately assess quality of service, performance, and the optimization of one’s own network.

The tool, called Speedbump, allows a network administrator to quickly build a test network where characteristics such as base latency and the shape and size of latency variation can be dialed in. From there, a TCP proxy sends traffic through the virtual network, adding the configured delay to anything passing through it. It can be installed (or built from source) on an existing system or run from a Docker container, so there are plenty of options depending on preference. It’s also available as a library for programs written in Go.
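
Invocation looks roughly like the following; the flag names are taken from the project’s README as best we recall, so treat them as an approximation and consult the tool’s built-in help for the authoritative list.

```bash
# Proxy local port 8000 to a real service on port 80, adding a 100 ms base
# latency that swings by a further 100 ms on a one-minute sine wave.
# (Flag names approximate; check `speedbump --help`.)
speedbump --port=8000 --latency=100ms \
  --sine-amplitude=100ms --sine-period=1m \
  localhost:80

# Point your client at localhost:8000 instead of the real service and watch
# how it copes as the artificial latency rises and falls.
```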

While this certainly has applications for home labs where self-hosting is taken seriously, it could have professional applications as well. For troubleshooting simpler network issues we’d always recommend this tool, which offers a more comprehensive test than the standard “ping” command, and if you haven’t heard of self-hosting before, it’s probably time to read this primer on it and build a hobby web server from scratch.

Easier Self Hosting With Umbrel

While it is undeniable that cloud-based services are handy, there are people who would rather do it themselves. For many of us, it is because we want what we want, the way we want it. For others, it is a distrust of leaving their personal data on someone else’s server that they don’t control. Umbrel is a Linux distribution just for people who want to self-host popular applications like Nextcloud or Home Assistant. [ItsFoss] has a good review that points out some of the pluses and minuses of the early version of Umbrel.

What’s really interesting, though, is the approach the distro takes to installing software. Like most modern distributions, Umbrel has a package manager. Unlike most, though, the packages are actually Docker containers. So when you install an app, it comes preconfigured and lives in its own bubble, unlikely to conflict with other things you might install.

We also like that it has a specific build for the Raspberry Pi, although it will work on other 64-bit hardware, and you can even install it within Docker on top of your normal operating system. Of course, the Docker container concept is also a drawback — at least for now — because it can be difficult to adjust settings inside a container compared to a more conventional install.
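
For the existing-OS route, the project documents a one-line installer along these lines; as always, eyeball the script (and confirm the URL on the official site) before piping anything from the Internet into a shell.

```bash
# Install Umbrel on top of an existing Debian-flavored system (one-liner as
# documented at the time of writing; verify against umbrel.com first).
curl -L https://umbrel.sh | bash
```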

It amazes us that hardware has become so capable that it is easier to just duplicate entire operating systems than it is to work out the required dependency interactions. Still, it works, and in most cases, it works well.

If you want to know more about Docker, we’ve covered it a few times in the past. You can even use it for very simple development cases if you like.


Linux Fu: Docking Made Easy

Most computer operating systems suffer from some version of “DLL hell” — a decidedly Windows term, but the concept applies across the board. Consider doing embedded development, which usually takes a few specialized tools. You write your embedded system code, ship it off, and forget about it for a few years. Then the end user wants a change. Too bad the compiler you used requires some library that has since changed, so it no longer works. Oh, and the device programmer needs an older version of the USB library. The Python build tools use Python 2, but your system has moved on. If the tools you need aren’t on the computer anymore, you may have trouble finding the install media and getting it to work. It’s worse still if you don’t even have the right kind of computer for it anymore.

One way to address this is to encapsulate each of your development projects in a virtual machine. Then you can save the virtual machine, which includes an operating system and all the right libraries: a snapshot of how the project was, one you can reconstitute at any time and on nearly any computer.

In theory, that’s great, but it is a lot of work and a lot of storage. You need to install an operating system and all the tools. Sure, you can get an appliance image, but if you work on many projects you will have a bunch of copies of the very same thing cluttering things up. You’ll also need to keep all those copies up to date when something does need updating, which, granted, is sort of what you were trying to avoid, but sometimes you must.

Docker is a bit lighter weight than a virtual machine. You still run your system’s normal kernel, but essentially you can have a virtual environment running in an instant on top of that kernel. What’s more, Docker images are built from layers, and identical layers are stored only once. So if you have ten containers built on the same operating system image, that base is stored a single time, plus the small differences for each instance.
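
You can see the sharing for yourself with a few stock Docker commands; nothing below is exotic, it’s all in the standard CLI.

```bash
docker pull python:3.12-slim
docker history python:3.12-slim   # the read-only layers that make up the image
docker system df -v               # per-image disk use, including shared space
docker ps -s                      # running containers: each writable layer is usually tiny
```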

The downside is that it is a bit tough to configure. You need to map storage and set up networking, among other things. I recently ran into a project called Dock that tries to make the common cases easier, so you can quickly spin up a Docker instance to do some work without any real configuration. I made a few minor changes to it and forked the project, but, for now, the origin has synced up with my fork so you can stick with the original link.
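
For comparison, this is the sort of incantation Dock is trying to save you from typing by hand every time; the flags below are stock Docker, while Dock’s own command line is described in its README.

```bash
# The manual version: mount the current project into a throwaway container,
# start in that directory, and drop into a shell in whatever image has the
# toolchain you need.
docker run -it --rm -v "$PWD":/work -w /work debian:bookworm bash
```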


Your Own IBM Mainframe (or VAX, or Cray…) The Easy Way

If you want the classic experience of working with an IBM mainframe or another classic computer like a DEC VAX, you have a few choices. You could spend a lot of money trying to find one, transport it, and refurbish it. But, of course, most of us will settle for an emulator. While there are great emulators out there, most of the time you aren’t interested in running just the bare machine — you want the operating systems, the compilers, and the other software that made these machines so interesting. Running your three lines of machine code isn’t as much fun as playing Hunt the Wumpus or compiling some Fortran IV code. Unfortunately, finding copies of all this old software can be daunting. But thanks to the efforts of [Rattydave], you can do it with no problems at all. The secret? Pre-built Docker images that have everything you need in one place.
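
The general shape of it is a pull and a run; the image name and port below are placeholders rather than [Rattydave]’s actual tags, which are listed on his Docker Hub page.

```bash
# Placeholder image name and port; substitute the real ones from the project's
# Docker Hub listing. The idea is simply: pull one pre-built image, run it,
# and connect a tn3270 terminal emulator to the exposed console port.
docker pull example/hercules-mvs
docker run -d --name mvs -p 3270:3270 example/hercules-mvs
```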
