Putting A Pi In A Container

Docker and other containerization applications have changed a lot about the way developers create new software as well as how they maintain virtual machines. Not only does containerization reduce the system resources needed for something that might otherwise be done in a virtual machine, but it standardizes the development environment for software and dramatically reduces the complexity of deploying on different computers. There are some other tricks up its sleeve as well, and this project, called PI-CI, uses Docker to containerize an entire Raspberry Pi.

The Pi container emulates an entire Raspberry Pi from the ground up, allowing anyone who wants to deploy software on one to test it out without needing to do so on actual hardware. All of the configuration can be done from inside the container. Once setup is complete and the desired software is installed, the container can be converted to an .img file that can be written to a microSD card and installed on real hardware, with support for the Pi models 3, 4, and 5. There’s also support for Ansible, an automation tool that makes administering a cluster or array of computers easier.
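The whole thing is driven from the Docker command line. The invocation below is only a minimal sketch: the image name, subcommands, and volume layout are assumptions based on how the project describes itself rather than verified commands, so check the PI-CI README for the real syntax.

```sh
# Sketch only: image name, subcommands, and paths are assumptions, not
# verified against the PI-CI documentation.
docker run --rm -it -v "$PWD/dist:/dist" ptrsr/pi-ci start    # boot the emulated Pi
docker run --rm -it -v "$PWD/dist:/dist" ptrsr/pi-ci export   # produce an .img for a real SD card
```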

Docker can be an incredibly powerful tool for developing and deploying software, and tools like this can make the process as straightforward as possible. It does have a learning curve, though, since sharing the host’s kernel and resources instead of virtualizing hardware can take some time to wrap one’s mind around. If you’re new to the game, take a look at this guide to setting up your first Docker container.

Docker-Powered Remote Gaming With Games On Whales

Cloud gaming services allow even relatively meager devices like set-top boxes and cheap Chromebooks to play the latest and greatest titles. It’s not perfect of course — latency is the number one issue, as the player’s controller inputs need to be sent out to the server — but if you’ve got a fast enough connection it’s better than nothing. Interested in experimenting with the tech on your own terms? The open source Games on Whales project is here to make that a reality.

As you might have guessed from the name, Games on Whales uses Linux and Docker as core components in its remote gaming system. With the software installed on a headless server, multiple users can create virtual desktop environments on the same machine, with each spawning as a separate process on the host computer. This means that all of the hardware of the host can be shared without needing to do anything complicated like setting up GPU pass-through. The main Docker container can spin up more containers as needed.
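The reason no GPU passthrough is needed is that containers share the host’s kernel, so the host’s existing graphics device nodes can simply be handed to each container. The snippet below is not Games on Whales’ actual configuration, just a generic illustration of sharing the host GPU with a container (device paths vary by system):

```sh
# Generic illustration, not the Games on Whales setup: expose the host's DRM
# devices to a container so it can render with the shared GPU.
docker run --rm -it \
  --device /dev/dri/card0 \
  --device /dev/dri/renderD128 \
  ubuntu:22.04 ls -l /dev/dri
```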

Of course there will be limits to what any given hardware configuration can support in terms of the number of concurrent users and the demands of each stream. But for someone who wants to host a server for their friends, or who simply doesn’t want a powerful gaming PC sitting in the living room, this is a real game-changer. For those not up to speed on Docker yet, we recently featured a guide on getting started with this powerful tool, since it does take some practice to wrap one’s mind around at first.

A Guide To Running Your First Docker Container

While most of us have likely spun up a virtual machine (VM) for one reason or another, venturing into the world of containerization with software like Docker is a little trickier. The tools Docker provides are powerful, retain many of the benefits of virtualization, and don’t use as many system resources as a VM, but it can be harder to get the hang of setting up and maintaining containers than it generally is to run a few virtual machines. If you’ve been hesitant to try it out, this guide to getting a Docker container up and running is worth a look.

The guide goes over the basics of how Docker shares system resources between containers, including some discussion of the difference between images and containers, where containers can store files on the host system, and how they use networking resources. From there the guide touches on installing Docker on a Debian Linux system. But where it really shines is demonstrating how to use Docker Compose to configure a container and get it running. Docker Compose uses a single YAML file to describe a number of containers and their options, which makes deploying those containers to other machines straightforward, and understanding it is key to making your experience learning Docker a smooth one.
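To give a rough idea of what such a file looks like, here is a minimal sketch for a hypothetical web service (this is not the guide’s Paperless configuration, and the service and image names are just placeholders):

```yaml
# docker-compose.yml — minimal sketch of the format, not the guide's Paperless setup
services:
  webapp:                    # hypothetical service name
    image: nginx:alpine      # any image pulled from a registry
    ports:
      - "8080:80"            # map host port 8080 to the container's port 80
    volumes:
      - ./site:/usr/share/nginx/html:ro   # share files from the host, read-only
    restart: unless-stopped
```

With a file like this in place, running docker compose up -d brings the service up, and copying the same file to another machine reproduces the deployment there.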

While the guide goes through setting up a self-hosted document management program called Paperless, it’s pretty easy to expand this to other services you might want to host on your own as well. For example, the DNS-level ad-blocking software Pi-Hole, which is generally run on a Raspberry Pi, can be containerized and run on a computer or server you might already have in your home, freeing up your Pi to do other things. And although it’s a little more involved, you can always build your own containers too, as our own [Ben James] discussed back in 2018.
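A containerized Pi-Hole can be as simple as a single docker run invocation, or the equivalent Compose service. The port mappings and environment variables below are from memory of the official image and may differ between releases, so treat this as a sketch and check the Pi-Hole documentation:

```sh
# Sketch of a containerized Pi-Hole; options are assumptions and may differ
# between releases of the official pihole/pihole image.
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8081:80 \
  -e TZ="Europe/London" \
  -e WEBPASSWORD="changeme" \
  --restart unless-stopped \
  pihole/pihole
```

Port 53 handles DNS queries, while the web admin interface is mapped to the host’s port 8081 here to avoid colliding with anything already listening on port 80.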

Apple System 7… On Solaris?

While the Unix operating systems Solaris and HP-UX are still in active development, they’re not particularly popular anymore and are mostly relegated to enterprise and data center environments. They did enjoy a peak of popularity in the 90s during the “wild west” era of windowed operating systems, though. This was a time when there were more than two mass-market operating systems commercially available, with many companies fighting for market share. This led to a number of efforts to get software written for one operating system to run on others, whether that meant simply porting software directly or using some sort of compatibility layer. Surprisingly enough, it was possible in this era to run an entire instance of Mac System 7 within either of these two Unix operating systems, and this was an officially supported piece of Apple software.

The software was called the Macintosh Application Environment (MAE), and was an effort by Apple to bring Macintosh System 7 applications to various Unix-based operating systems, including Solaris and HP-UX. This was a time before Apple’s OS was Unix-compliant, and MAE provided a compatibility layer that translated Macintosh system calls and application programming interfaces (APIs) into the equivalent Unix calls, allowing Mac software to function within the Unix environments. [Lunduke] outlines a lot of the features of this in his post, including some of the details of the “scaffolding” that allowed the 68k processor to be emulated efficiently on the hardware of the time, the contents of the user manual, and even the memory management and layout.

What’s really jarring to anyone only familiar with Apple’s modern “walled garden” approach is that this is an Apple-supported compatibility layer for another system. At the time, though, they weren’t the technology giant they are today and had to play by a different set of rules to stay viable. Quite the opposite, in fact: they almost went out of business in the mid-90s, so having their software run on as many machines as possible would have been a perk at the time. While this era did have major issues with cross-platform compatibility, some of the software that attempted to solve these problems is still in active development today.

Thanks to [Stephen] for the tip!

An Old Netbook Spills Its Secrets

For a brief moment in the late ’00s, netbooks dominated the low-cost mobile computing market. These were small, low-cost, low-power laptops, some tiny enough to have only a seven-inch display, and usually with extremely limited hardware even for the time. There aren’t very many reasons to own a machine of this era today, since even the cheapest of tablets or Chromebooks are typically far more capable than the Atom-based devices from over a decade ago. There is one set of these netbooks from that time with a secret up its sleeve, though: Phoenix Hyperspace.

Hyperspace was envisioned as a way for these slow, low-power computers to instantly boot or switch between operating systems. [cathoderaydude] wanted to figure out what made this piece of software tick, so he grabbed one of the only netbooks it was ever installed on, a Samsung N210. The machine has both Windows 7 and a custom Linux distribution installed on it, and with Hyperspace it’s possible to switch almost seamlessly between them in about six seconds, which was effectively instant for the time.

Hardware Virtualization In Microcontrollers

Look at any sufficiently advanced CNC machine or robot, and you’ll notice something peculiar. On one hand, you have a computer running a true operating system for higher-level processing, be it vision or speech recognition, or just connecting to the Internet. On the other hand, you have another computer responsible only for semi-real-time tasks, like driving motors and servos and reading sensors and switches. You won’t be doing the heavy-lifting tasks with a microcontroller, and the Raspberry Pi is proof enough that real-time functions aren’t meant for a chip running Linux. There are many builds that would be best served with two processors, but that may be changing soon.

Microchip recently announced an addition to the PIC32 family of microcontrollers that will support hardware virtualization. This addition comes thanks to the MIPS M5150 Warrior-M processor, the first microcontroller to support hardware virtualization.



Hacking A Thin Client To Gain Root Access

[Roberto] recently discovered a clever way to gain root access to an HP t520 thin client computer. These computers run HP’s ThinPro operating system. The OS is based on Linux and is basically just a lightweight system designed to boot into a virtual desktop image loaded from a server. [Roberto’s] discovery works on systems that are running in “kiosk mode”.

The setup for the attack is incredibly simple. The attacker first stops the virtual desktop image from loading. Then, the connection settings are edited. The host field is filled with garbage, which will prevent the connection from actually working properly. The real trick is in the “command line arguments” field. The attacker simply needs to add the argument “&& xterm”. When the connection is launched, it will first fail and then launch the xterm program. This gives the attacker a command shell running under the context of whichever user the original software is running as.

The next step is to escalate privileges to root. [Roberto] discovered a special command that the default user can run as root using sudo. The “hpobl” command launches the HP Easy Setup Wizard. Once the wizard is opened, the attacker clicks on the “Thank You” link, which will then load up the HP website in a version of Firefox. The final step is to edit Firefox’s default email program association to xterm. Now when the attacker visits an address like “mailto:test@test.com”, Firefox (running as root) launches xterm with full root privileges. These types of attacks are nothing new, but it’s interesting to see that they still persist even in newer software.