Network Infrastructure And Demon-Slaying: Virtualization Expands What A Desktop Can Do

The original DOOM is famously portable: nearly any computer made in the last two decades, including those inside printers, heart monitors, passenger vehicles, and routers, is almost guaranteed to be able to run a port of the iconic 1993 shooter. The more modern entries in the series are a little trickier to port, though. They generally need multi-core processors, discrete graphics cards, and gigabytes of memory, and it'll be a long time before something like an off-the-shelf router has all of those components.

But with a specialized distribution of Debian Linux called Proxmox and a healthy amount of configuration, it's possible to flip this idea on its head: a desktop computer capable of playing modern video games can take over the network infrastructure for a LAN, all with minimal impact on the overall desktop experience. In effect, it's possible to have a router that can not only play DOOM but also 2020's DOOM Eternal, likely with hardware most of us already have on hand.

The key that makes a setup like this work is virtualization. Although modern software often makes it seem otherwise, not every piece of software needs an eight-core processor and 32 GB of memory. With that in mind, virtualization software splits a modern multi-core processor into groups of cores that can act as independent computers. These virtual computers, or virtual machines (VMs), can each be assigned one or more processor cores of their own, along with a reserved portion of memory and other hardware like peripherals and disk drives.
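To make the idea concrete, here is a minimal sketch of carving out resources by hand with plain QEMU/KVM, the same hypervisor technology Proxmox builds on; the disk image name and the exact core and memory counts are placeholders, not anything Proxmox requires:

```sh
# Boot a guest pinned to 4 cores and 8 GB of RAM, leaving the rest of the
# host for the desktop. The disk image name is just a placeholder.
qemu-system-x86_64 \
  -enable-kvm \
  -smp 4 \
  -m 8192 \
  -drive file=guest.qcow2,if=virtio \
  -nic user,model=virtio-net-pci
```

Proxmox wraps this kind of invocation in its own tooling, so in practice you rarely type it out yourself.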

Proxmox itself is a version of Debian that ships with a number of tools to streamline this process, and it installs on a PC in essentially the same way as any other Linux distribution. Once installed, tools like LXC for containerization, KVM for full-fledged virtual machines, and an intuitive web interface let containers and VMs be quickly set up, deployed, backed up, removed, and even migrated to other Proxmox installations.
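Those same operations are also exposed on the command line. As a rough sketch only (the VM and container IDs, the Debian template filename, and the GPU's PCI address are assumptions chosen for illustration), the router-that-plays-DOOM-Eternal idea might be wired up something like this:

```sh
# Create a VM with 8 cores and 16 GB of RAM for gaming (ID 100 is arbitrary)...
qm create 100 --name gaming-vm --cores 8 --memory 16384 --net0 virtio,bridge=vmbr0

# ...and hand it the discrete GPU via PCI passthrough (address is an example).
qm set 100 --hostpci0 0000:01:00.0,pcie=1

# Create a small Debian container (ID 101) to take over routing duties.
pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --cores 2 --memory 512 --net0 name=eth0,bridge=vmbr0,ip=dhcp
```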

Putting A Pi In A Container

Docker and other containerization applications have changed a lot about the way developers create new software as well as how they maintain virtual machines. Not only does containerization reduce the system resources needed for something that might otherwise run in a virtual machine, but it standardizes the development environment for software and dramatically reduces the complexity of deploying on different computers. There are some other tricks up its sleeve as well, and this project, called PI-CI, uses Docker to containerize an entire Raspberry Pi.

The Pi container emulates an entire Raspberry Pi from the ground up, allowing anyone who wants to deploy software on one to test it out without needing real hardware. All of the configuration can be done from inside the container. Once setup is complete and the desired software is installed, the container can be converted to an .img file that can be written to a microSD card and booted on real hardware, with support for the Pi models 3, 4, and 5. There's also support for Ansible, an automation tool that makes administering a cluster or array of computers easier.
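As a rough sketch of that workflow, using nothing beyond standard Docker flags; the ptrsr/pi-ci image name, its start subcommand, and the exported image filename are recalled from the project and should be treated as assumptions, so check the project's README for the exact invocation:

```sh
# Launch the emulated Pi, with build artifacts landing in ./dist on the host.
docker run --rm -it -v "$PWD/dist:/dist" ptrsr/pi-ci start

# Once configured, write the exported image to a microSD card
# (replace /dev/sdX with the card's real device node).
sudo dd if=dist/distro.img of=/dev/sdX bs=4M status=progress conv=fsync
```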

Docker can be an incredibly powerful tool for developing and deploying software, and tools like this can make the process as straightforward as possible. It does have a bit of a learning curve, though, since sharing the host's kernel and tools instead of virtualizing hardware can take some time to wrap one's mind around. If you're new to the game, take a look at this guide to setting up your first Docker container.

A Guide To Running Your First Docker Container

While most of us have likely spun up a virtual machine (VM) for one reason or another, venturing into the world of containerization with software like Docker is a little trickier. The tools Docker provides are powerful, retain many of the benefits of virtualization, and don't use as many system resources as a VM, but it can be harder to get the hang of setting up and maintaining containers than it generally is to run a few virtual machines. If you've been hesitant to try it out, this guide to getting a Docker container up and running is worth a look.

The guide goes over the basics of how Docker shares system resources between containers, including some discussion of the difference between images and containers, where containers can store files on the host system, and how they use networking resources. From there the guide touches on installing Docker on a Debian Linux system. But where it really shines is demonstrating how to use Docker Compose to configure a container and get it running. Docker Compose reads a single file that describes a set of containers and their options, which makes deploying those containers to other machines fairly straightforward, and understanding it is key to making your experience learning Docker a smooth one.
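To give a feel for what that file looks like, here is a minimal, generic docker-compose.yml sketch rather than the guide's Paperless configuration; the nginx image, host paths, and port numbers are made up purely for illustration:

```yaml
# docker-compose.yml - one service, its port mapping, and a persistent volume
services:
  web:
    image: nginx:alpine          # container image to run
    ports:
      - "8080:80"                # host port 8080 -> container port 80
    volumes:
      - ./site:/usr/share/nginx/html:ro   # serve files from the host
    restart: unless-stopped      # bring the container back after reboots
```

With a file like that in place, docker compose up -d starts everything it describes in the background, and docker compose down tears it back down.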

While the guide goes through setting up a self-hosted document management program called Paperless, it's pretty easy to extend this to other services you might want to host yourself as well. For example, the DNS-level ad-blocking software Pi-Hole, which is generally run on a Raspberry Pi, can be containerized and run on a computer or server you might already have in your home, freeing up your Pi to do other things. And although it's a little more involved, you can always build your own containers too, as our own [Ben James] discussed back in 2018.
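A hedged sketch of what that Pi-Hole migration might look like in Compose form is below; the port mappings, timezone, and volume path are typical choices rather than requirements, so check Pi-hole's documentation for its currently recommended configuration:

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"              # DNS over TCP
      - "53:53/udp"              # DNS over UDP
      - "8080:80/tcp"            # web admin interface on the host's port 8080
    environment:
      TZ: "America/New_York"     # placeholder timezone
    volumes:
      - ./etc-pihole:/etc/pihole # persist settings across container rebuilds
    restart: unless-stopped
```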

Linux Fu: Docking Made Easy

Most computer operating systems suffer from some version of “DLL hell” — a decidedly Windows term, but the concept applies across the board. Consider doing embedded development, which usually takes a few specialized tools. You write your embedded system code, ship it off, and forget about it for a few years. Then the end user wants a change. Too bad the compiler you used requires some library that has since changed, so it no longer works. Oh, and the device programmer needs an older version of the USB library. The Python build tools use Python 2, but your system has moved on. If the tools you need aren’t on the computer anymore, you may have trouble finding the install media and getting it to work. Worse still if you don’t even have the right kind of computer for it anymore.

One way to address this is to encapsulate all of your development projects in a virtual machine. Then you can save the virtual machine, which includes an operating system and all the right libraries: basically a snapshot of the project as it was, one you can reconstitute at any time and on nearly any computer.

In theory, that’s great, but it is a lot of work and a lot of storage. You need to install an operating system and all the tools. Sure, you can get an appliance image, but if you work on many projects, you will have a bunch of copies of the very same thing cluttering things up. You’ll also need to keep all those copies up-to-date if you need to update things which — granted — is sort of what you are probably trying to avoid, but sometimes you must.

Docker is a bit lighter weight than a virtual machine. You still run your system’s normal kernel, but essentially you can have a virtual environment running in an instant on top of that kernel. What’s more, Docker only stores the differences between things. So if you have ten copies of an operating system, you’ll only store it once plus small differences for each instance.
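As a small illustration of that layer reuse (the project and package choices here are hypothetical), every image built from a Dockerfile that starts from the same base shares that base's layers on disk, so ten embedded-project images like the sketch below only add their own toolchain layers on top of a single stored copy of Debian:

```dockerfile
# Dockerfile for one embedded project's build environment (illustrative only).
# The FROM line pulls in the shared base layers, stored once on the host.
FROM debian:bookworm

# This project's own layer: the AVR toolchain on top of the shared base.
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc-avr avr-libc avrdude && \
    rm -rf /var/lib/apt/lists/*

# Project sources get mounted or copied in here at build/run time.
WORKDIR /src
```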

The downside is that it can be a bit tough to configure. You need to map storage and set up networking, among other things. I recently ran into a project called Dock that tries to make the common cases easier so you can quickly spin up a Docker container to do some work without any real configuration. I made a few minor changes to it and forked the project, but, for now, the origin has synced up with my fork, so you can stick with the original link.
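For a sense of what is being streamlined, this is plain Docker rather than Dock's own interface: a throwaway build environment done by hand usually means remembering an incantation along these lines, with the image and mount points picked per project:

```sh
# Start a disposable Debian container with the current directory mounted
# at /work, dropping into a shell for a quick build.
docker run --rm -it \
  -v "$PWD":/work \
  -w /work \
  --network host \
  debian:bookworm bash
```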


Field Guide To Shipping Containers

In the 1950s, trucking magnate Malcom McLean changed the world when he got frustrated enough with the speed of trucking and traffic to start a commercial shipping company in order to move goods up and down the eastern seaboard a little faster. Within ten years, containers were standardized, and the first international container ship set sail in 1966. The cargo? Whisky for the U.S. and guns for Europe. What was once a slow and unreliable method of moving all kinds of whatever in barrels, bags, and boxes became a streamlined operation — one that now moves millions of identical containers full of unfathomable miscellany each year.

When I started writing this, there was a container ship stuck in the Suez Canal that had been blocking it for days. Just like that, a vital passage became completely clogged, halting the shipping schedule of everything from oil and weapons to ESP8266 boards and high-waist jeans. The incident really highlights the fragility of the whole intermodal system and makes us wonder if anything will change.

A rainbow of dry storage containers. Image via xChange

Setting the Standard

We are all used to seeing the standard shipping container: a 10′, 20′, or 40′ long box made of steel or aluminum with doors on one end. These are by far the most common type, and they are probably what comes to mind whenever shipping containers are mentioned.

These are called dry storage containers, and per ISO container standards, they are all 8′ wide and 8′ 6″ tall. There are also ‘high cube’ containers that are a foot taller but otherwise share the same dimensions. Many of these containers end up as some type of housing, whether as stylish studios, post-disaster survivalist shelters, or construction site offices. As the pandemic wears on, they have come into such demand that prices have surged in the last few months.

Although Malcom McLean did not invent container shipping, the strict containerization standards that followed in his wake prevent issues during stacking, shipping, and storing, and allow any container to be handled safely at any port in the world or loaded onto any rail car with ease. Every bit of the container is standardized, from the dimensions to the way the container’s information is displayed on the end. At most, the difference between any two otherwise identical containers is the number, the paint job, and maybe a few millimeters in one dimension.

Standard as they may be, these containers don’t work for every type of cargo. There are quite a few more types of shipping containers out there that serve different needs. Let’s take a look at some of them, shall we?


Lightweight OS For Any Platform

Linux has come a long way from its roots, where users had to compile the kernel and all of the other source code from scratch, often without any internet connection at all to help with documentation. It was the wild west of Linux, and while we can all rely on an easy-to-install Ubuntu distribution if we need it, there are still distributions out there that require some rediscovery of those old roots. Meet SkiffOS, a lightweight Linux distribution that compiles on almost any hardware and also opens up a whole world of opportunity in containerization.

The operating system is intended to be able to compile itself on any Linux-compatible board (with some input) and yet still be lightweight. It can run on Raspberry Pis, Nvidia Jetsons, and x86 machines, to name a few, and it focuses on hosting containerized applications independent of the hardware it is installed on. One of the goals of this OS is to separate the hardware support from the applications while still supporting real-time tasks such as those found in robotics. It also makes upgrading the base OS easy without disrupting the programs running in the containers, and of course it has all of the other benefits of containerization as well.

It does seem like containerization is the way of the future, and while it has obviously been put to great use in web hosting and other network applications, it’s interesting to see it expand into a real-time arena. Presumably an approach like this would have many other applications as well since it isn’t hardware-specific, and we’re excited to see the future developments as people adopt this type of operating system for their specific needs.

Thanks to [Christian] for the tip!