Lightweight OS For Any Platform

Linux has come a long way from its roots, where users had to compile the kernel and all of the other source code from scratch, often without any internet connection at all to help with documentation. It was the wild west of Linux, and while we can all rely on an easy-to-install Ubuntu distribution when we need it, there are still distributions out there that call for some rediscovery of those old roots. Meet SkiffOS, a lightweight Linux distribution that not only compiles on almost any hardware but also opens up a whole world of opportunity in containerization.

The operating system is intended to compile itself for almost any Linux-compatible board (with some configuration input) while still being lightweight. It can run on Raspberry Pis, Nvidia Jetsons, and x86 machines, to name a few, and focuses on hosting containerized applications independent of the hardware it is installed on. One of the goals of this OS is to separate the hardware support from the applications while still supporting real-time tasks, such as applications in robotics. It also makes upgrading the base OS easy without disrupting the programs running in the containers, and of course it has all of the other benefits of containerization as well.
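
For the curious, the build flow follows the usual Buildroot pattern. A rough sketch of what that looks like, assuming the interface described in the project README (the board and package names here are examples and may differ between releases):

    # Sketch of a SkiffOS build, assuming the Buildroot-style interface
    # from the project README; pi/4 and skiff/core are example names.
    git clone https://github.com/skiffos/skiffos.git
    cd skiffos
    export SKIFF_CONFIG=pi/4,skiff/core   # pick a board layer + core container layer
    make configure                        # generate the Buildroot configuration
    make compile                          # cross-compile the image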

It does seem like containerization is the way of the future, and while it has obviously been put to great use in web hosting and other network applications, it’s interesting to see it expand into a real-time arena. Presumably an approach like this would have many other applications as well since it isn’t hardware-specific, and we’re excited to see the future developments as people adopt this type of operating system for their specific needs.

Thanks to [Christian] for the tip!

35 thoughts on “Lightweight OS For Any Platform”

  1. All I’ve seen from containerization is hard drives filled with bloated, duplicated libraries; applications that cannot share data with one another; and work that should have been easy to back up from a well-named subdirectory of ~ made almost impossible to find, lost inside some part of a container system the user is never expected to look inside.

    What a bunch of garbage!

      1. As a BIG Erlang fan and evangelist, I say that Erlang WON’T save us from ourselves. It allows you to wrangle millions of shotguns constantly shooting you in the feet while replacing the broken feet before you fall over.

        1. Because, as a server admin, it forces the application teams to declare their requirements. I’m not left trying to figure out what damned version of Java they decided they needed, or some list of ancient libraries, etc etc. By providing a machine-consumable list of the requirements, the development teams give me all the things I need to know about how to run their app and I can go on about my day.
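
          As a sketch of what that machine-consumable declaration buys you (the image tag and app name here are made up):

            # Dockerfile (hypothetical): the app team pins the exact Java
            # runtime, so the admin never has to guess versions.
            FROM eclipse-temurin:17-jre
            COPY app.jar /opt/app/app.jar
            CMD ["java", "-jar", "/opt/app/app.jar"]

            $ docker build -t teamapp:1.0 .   # the requirements travel with the app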

    1. Kubernetes is like general relativity: it works so well at the largest scales that we feel compelled to jam it into the smallest.

      I know that’s not really accurate physics, and it’s really more about clusters than containers specifically, but my limited experience with K8s has led me to believe there is a sort of compulsion to use containers at every level of computing because of how well they work at large scales. But that’s just me.

    2. A file-system or container runtime that does de-duplication helps. Something like Docker uses image layers, which let you share a lot of immutable disk space if you do something reasonable like fork all your containers off the same base OS image (usually Alpine, maybe Ubuntu, or UBI).

      For running on your own workstation, the containers aren’t quite as compelling as they are for a big cluster of empty un-provisioned general purpose machines (“cloud”). But at least something like Alpine is only a few megabytes.
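
      A quick way to see that sharing in practice (assuming Docker; output trimmed):

        docker pull alpine:3.19
        docker history alpine:3.19   # lists the layers in the base image
        docker system df -v          # the SHARED SIZE column shows the dedup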

      In a cluster you can have a scheduler or orchestrator that delivers your container somewhere in a pool of servers according to the rules you set, run the work, then terminate, releasing your cloud resources for the next job. The generic platform of a container worker (like Kubernetes) makes adding, removing, and upgrading equipment a lot easier. You don’t have to wrestle with dedicating some machines for a special purpose. And if you do have special hardware, you can use tags to schedule work just for them, but manage them otherwise identically to your remaining cluster.
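
      In Kubernetes terms those tags are node labels plus a nodeSelector. A minimal sketch, with made-up node and label names:

        # Mark the special-hardware node, then pin a pod to it:
        kubectl label node worker-07 gpu=true
        kubectl apply -f pod.yaml

        # pod.yaml
        apiVersion: v1
        kind: Pod
        metadata:
          name: gpu-job
        spec:
          nodeSelector:
            gpu: "true"       # only schedules onto nodes labeled gpu=true
          containers:
            - name: work
              image: alpine:3.19
              command: ["sleep", "60"]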

      When you use containers to hold data, you need to bind their persistent folders to somewhere, or have a plan for shipping data in/out of the container (sftp, rsync, restic, …). If the container itself were entirely read-only, that would be the ideal in most cases. Mutable containers present the problem of how you manage state that is buried inside of something. I’d argue that if a container (that isn’t a builder stage in a Dockerfile) is writing to the container volume, then something is badly designed.
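
      The usual pattern, for the record (the image name and paths are made up):

        # Read-only container; all mutable state lives on the host, where it
        # can be backed up normally. --tmpfs covers scratch space.
        docker run --read-only --tmpfs /tmp \
          -v /srv/myapp/data:/var/lib/myapp \
          myapp:latest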

      Sharing between containers is not too bad, but you have to take an extra step to enable it. There are things like docker-compose that make it easier to share data between related containers, mainly for a service-per-container design (e.g. web server in one container, database in another, connected together with a Unix socket or private TCP port).
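
      A minimal compose sketch of that web-plus-database split (service names and images are only examples):

        # docker-compose.yml (hypothetical)
        services:
          web:
            image: nginx:alpine
            ports: ["8080:80"]
            depends_on: [db]
          db:
            image: postgres:16
            environment:
              POSTGRES_PASSWORD: example
            volumes:
              - ./pgdata:/var/lib/postgresql/data   # persistent data, bind-mounted

        $ docker compose up -d   # both services share a private network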

      P.S. Sorry for the long reply; containers are my full-time job.

      1. Jonmayo, do you have any suggestions for training or certifications regarding cloud technology? I have worked with networked computers for decades and would like to move on to the next level.

        1. I’ve not taken any myself. I do the classic method of having my employer pay me a salary while I go figure it out. Not everyone has that luxury.
          I think the difficulty with this stuff is that there are so many technologies floating around that learning just one is not going to carry you through for very long. There are some general principles though, and with a few bits of open source you can set up K3s (lightweight Kubernetes) on a few low-cost PCs, like the Odyssey Blue w/ Intel J4105 for example, to make a small lab, or on RPi4s if you want to make this a little more difficult but cheaper.
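
          For what it’s worth, K3s installs with the documented one-liner per node (server and worker names here are placeholders):

            curl -sfL https://get.k3s.io | sh -                # on the server node
            sudo cat /var/lib/rancher/k3s/server/node-token    # token for joining workers
            # on each worker:
            #   curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
            sudo k3s kubectl get nodes                         # verify the cluster is up
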
          My first exercise in this was installing Docker on my desktop, then making a Dockerfile that set up my preferred IRC client and signed onto Freenode. From there I explored how to get logs and configs in/out, and how to use a builder stage in the Dockerfile to build my projects from source but leave the build cruft out of the final artifact.
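
          That builder-stage trick looks roughly like this (a hypothetical Go project, just to show the shape):

            # Dockerfile: compile in the "builder" stage, ship only the binary,
            # so the build cruft never reaches the final image.
            FROM golang:1.22 AS builder
            WORKDIR /src
            COPY . .
            RUN go build -o /out/app .

            FROM alpine:3.19
            COPY --from=builder /out/app /usr/local/bin/app
            CMD ["app"]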

    3. It sounds like you don’t know how to use containers. Common layers don’t bloat the disk, since they’re reused between containers, and sharing data is as easy as setting up a network or a mounted folder/volume. Backups aren’t needed for the base image; anywhere you’re making changes should be a volume or a bind mount to the host system, which is backed up. It sounds like you’ve got a fundamental misunderstanding of how containers should be used.

  2. Containerization is how commercial applications will proliferate, but it has too many drawbacks for general use. It’s the answer to “how can we statically compile in libraries without violating the GPL?” It is rather worthless beyond that.

    1. Curious: which drawbacks do you see in this case? The model used is 1 container (Linux namespace) with an entire distro inside, not 1 container per app. The model for installing apps is the same. The reason for the namespace is to allow the init program to run as PID 1, even though it’s being managed by the boot-up init process. There’s no performance or user workflow impact.
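
      You can demo the same idea with stock tools; this isn’t SkiffOS’s own mechanism, just an analogous sketch using systemd-nspawn (paths are illustrative):

        # Populate a full Debian tree, then boot it in a namespace with
        # its own init running as PID 1 inside the container.
        sudo debootstrap stable /var/lib/machines/demo http://deb.debian.org/debian
        sudo systemd-nspawn -D /var/lib/machines/demo -b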

  3. “Compiles on any hardware”? Yeah, good luck with that; “runs poorly everywhere” is the inevitable result. Take note of RHEL: it takes all of Red Hat’s engineering team to support their very short list of supported platforms.

    1. The only time RHEL wastes hours of my life is when I deal with entitlements and licenses. Take that worst bit out and it’s much better, but of course that would leave RH with no business model. (R.I.P. CentOS)

  4. I assumed from the heading that we were going to read about a FreeRTOS alternative or something. Seems to be a different version of ‘lightweight’.

    I notice that in their whole write-up they don’t mention ‘size’ or ‘RAM’ either, so I’m not sure how people would know how light it is.

    And finally, I hate containerization. Yes, the hardware has improved to the point that the performance doesn’t completely suck, but it is still being overused by people who really don’t know what they are doing. It’s simply the wrong answer to dependencies.

    1. Is it really, though? Keeping your dependencies separated so that you can run multiple versions of the same software on one machine, which would normally be impossible without the overhead of virtualization, definitely sounds like a good answer to me.
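
      Concretely (ports and tags picked at random):

        # Two PostgreSQL major versions side by side on one host, each with
        # its own dependencies, no VM required.
        docker run -d --name pg13 -e POSTGRES_PASSWORD=pw -p 5413:5432 postgres:13
        docker run -d --name pg16 -e POSTGRES_PASSWORD=pw -p 5416:5432 postgres:16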

    1. If chroot had been secure to begin with, nobody would have needed to invent containerisation.

      Or for that matter, if we had switched to better microkernels instead of sticking with monolithic architectures, we wouldn’t need an isolation API of any kind.

  5. TL;DR – containers are bad when done incorrectly and make things more confusing. Containers offer value when you build/layer them correctly and use a proper orchestration layer.

    Got it.

  6. I’m absolutely amazed by how much everyone in these comments hates containers. They basically let you deploy whatever you want at the click of a button. If you’re losing files or having issues, you’re usually overthinking it. There are great UIs for managing Docker, and you no longer have to worry about dependency hell because everything is containerized. Containers are the way of the future for enterprise and personal use. If you hate them, take the time to actually learn how to use them and their best practices. It turns pages of installation instructions into literally one command for most projects.
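
    For anything that ships a compose file, “install” really is about one line (the URL is a placeholder):

      git clone https://example.com/some-project.git && cd some-project
      docker compose up -d   # pulls images, creates networks/volumes, starts everything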
