Immutable distributions are slowly spreading across the Linux world – but should you care? Are they hacker friendly? What does “immutable” mean, anyway?
Immutable means “not subject or susceptible to change” according to Merriam-Webster, which is not 100% accurate in this context, but it’s close enough, and the name has stuck, so we’re stuck with it. Immutable distributions are subject to change; it’s just that how you change them is quite a bit different from how you change a bog-standard Linux system. Will this matter to you? Read on to find out! (Or, if you know the answers already, read on to find out how angry you should be in the comments section.)
Immutability is cloud-based thinking: the system has a known-good state, and it’s always in it. Everything that is not part of the core system is containerized and controlled. I’m writing this from a KDE-based distribution called Aurora, part of the Universal Blue project that builds on Fedora’s Atomic Desktop work. It bills itself as being for “lazy developers”.
The advantage to this hypothetical lazy dev is that the base system is already built, and you can’t get distracted messing around with it. It works, and it isn’t at all likely to break. Every installation is essentially identical to every other installation, which means reproducibility is all but guaranteed. No more faffing about arguing on forums to figure out which library is conflicting with which. In an immutable system, they’ve all been selected to play well together, and anything else is safely containerized. (Again, a cloud ideal.) If the devs make a mistake during an update, well, just roll back!
50 Shades of Immutability
The different flavours of immutable Linux differ in how they accomplish that, but all have rollbacks as a basic capability. Each change to the system becomes a new, indivisible image; that’s why we talk about atomic updates. You create a new system image when you update, but you don’t start using it until you reboot the system. (This has some advantages for stability, as you might imagine, although the rebooting can get old.) The old image is kept on your system, just in case you happen to need it.
MicroOS and its descendants (like Aeon) use a system based on BTRFS snapshots to provide rollbacks. Fedora’s atomic desktops, like Silverblue, and the Fedora-based Universal Blue downstreams like Bazzite and Aurora, use a system called OSTree, which is considerably more complex and more interesting. You can do something similar with Nix, of course, but that is a whole other kettle of fish.
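Whichever family you pick, the rollback itself is essentially a one-liner. A quick sketch – with the caveat that exact tooling varies by distro and release:

# openSUSE MicroOS / Aeon (BTRFS snapshots)
sudo transactional-update rollback
# Fedora Atomic / Universal Blue (OSTree deployments)
rpm-ostree rollback
# either way, the previous image takes effect on the next boot
systemctl reboot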
OSTree bills itself as “Git for operating system binaries”. Every update, and every package you install, is layered onto the tree and can be rolled back if needed – en masse, or individually. You can package up that tree of commits and deploy it onto a new system, making devising new “distros” so trivial they don’t really deserve the name. In theory, you can install everything via OSTree, but the further you take your system from the base image, the less you get of that “every system is identical” easy problem-solving the immutable crowd likes to talk about.
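On a Fedora-based immutable system, rpm-ostree is the front end for that layering, and the day-to-day workflow looks something like this (htop is just an example package):

rpm-ostree install htop      # layers htop onto a new deployment of the tree
rpm-ostree status            # lists deployments and any layered packages
rpm-ostree uninstall htop    # peels that layer back off
# the new deployment takes effect on the next reboot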
Of course you do want to install applications, and you do it the same way you might on a server: in containers. What sort of containers can vary by taste, but typically that means Flatpak for GUI applications. Fedora-based immutable distributions like Silverblue or Aurora use Flatpak, as does openSUSE. (AppImage and snap are also options, technically speaking, but who likes snaps?) The Universal Blue team adds in Homebrew for those terminal applications that don’t tend to get Flatpaks. I admit that I was surprised at first to see Homebrew when I started using Aurora, since I knew it as “the missing package manager for MacOS”, but its inclusion makes perfect sense when you think about it.
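In practice, installing software on one of these systems splits along those lines; a sketch, with example package names:

flatpak install flathub org.mozilla.firefox   # GUI app, sandboxed, from Flathub
brew install ripgrep                          # CLI tool, lives under /home/linuxbrew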
MacOS is the First Immutable UNIX
MacOS, you see, is the first immutable UNIX. As much as we in the Linux community don’t like to talk about it, Macs aren’t just POSIX compatible – they run Certified UNIX(™). And Cupertino has been moving towards this “immutable” thing for a long time, until Catalina finally sealed the system folders away completely on a read-only volume. Updates for MacOS also come as snapshots to replace that system volume – you could certainly call them “atomic”. Since the system volume is locked down, traditional package managers won’t be able to operate. Homebrew was created to solve that problem. It works just as well on a Linux system that has the same lockdown applied to its system folders.
If Homebrew isn’t your cup of tea – and it seems to not be everyone’s, since I think Universal Blue is the only distro set to ship with it – you can go more hard-core into containerization with Docker or Podman. Somewhere in between, you could use something like Distrobox. If you haven’t heard of it, Distrobox is a framework for deploying traditional Linux systems inside containers. For devs, it’s great for testing, even if you aren’t running it on top of an immutable distribution. If you’ve never worked in the cloud, this may all sound like Rube Goldberg gobbledygook (“Linux in a box on my Linux!?”), but once you adapt to it, it’s not so bad.
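A minimal Distrobox session makes the idea concrete; the image and package names below are just examples:

distrobox create --name dev --image ubuntu:24.04   # build a container wrapping a stock Ubuntu
distrobox enter dev                                # drop into a normal Ubuntu shell, home dir shared
sudo apt install build-essential                   # use apt as usual, safely inside the box
distrobox-export --app codium                      # optionally surface a boxed GUI app on the host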
The Year of Immutable on the Desktop?
The question is: do you want to adapt to it? Is cloud-based thinking necessary on the desktop? Well, I’d say it depends on who is using the desktop. I would absolutely steer Windows users who are thinking of switching to Linux in the wake of the Windows 10 EOL to a Universal Blue distribution, and probably Aurora, since KDE is more Windows-y than Gnome. Most of those ex-Windows users are people who just want to use a computer, not play with it. If that describes you, then maybe an immutable distribution could be to your liking.
MacOS has shown that very few desktop users will ever notice whether they can access the system folders or not; they are most interested in having a stable, reproducible environment to work in. Thus, immutable Linux may be the way to bring Linux mainstream – certainly Valve thinks so, with SteamOS. For that use case, it’s hard to argue with the benefits: you need a stable base under the house of cards that is gaming on Linux, and tech support is much simplified for a locked-down operating system that you cannot install packages on. The rising popularity of Bazzite, Universal Blue’s gaming-centric distribution, also speaks to this.
There are downsides to this kind of system, of course, and it is important to recognize that. Some people really, really hate containerization because Flatpaks, and other similar options, use more resources, both disk space and RAM. And not everything is available as a Flatpak, or on Homebrew if the system uses that. If you want to use Toolbox or Distrobox to get a distro-specific set of packages, well, of course running a whole extra Linux system in a container is going to have overhead.
From an aesthetic perspective, it’s not as elegant as a traditional Linux environment, at least to some eyes, mine included. Those of us who switched to Linux because we wanted absolute control over our computers might not feel too great about the “do not touch” label implicitly scrawled across the system folders, even if we do get something like rpm-ostree to make changes with. Even with a package manager, there are customizations and tweaks you simply cannot make on a read-only system. For those of us who treat Linux as a hobby, that’s probably a no-go.
For the “Lazy Developer” Aurora sells itself to, well, that’s perhaps a different story. Speaking of lazy, I’ve been using Aurora for a few months now, almost in spite of myself. I initially loaded it as the last step on a distro-hopping jaunt to see if I could find a good Windows 10 replacement for my parents. (I think this is it, to be honest.) It’s still on my main laptop simply because it’s so unobtrusively out of the way that I can think of no reason to install anything else.
At some point that may change, and when it does I might just overcorrect and do a Linux From Scratch build, or try out NixOS like I’ve been meaning to. Something like that would let me regain the sense of agency I have forfeited to the Universal Blue dev team while running Aurora. (There have been times when I could feel the ghostly hand of an imaginary sysadmin urging me not to mess with my own system.)
After seeing how well containerization can work on the desktop, Nix looks extra appealing – it can do most of what this article describes the immutable distros doing, but without entrusting the configuration of any facet of the system to anyone else. What do you think? Are the touted benefits to stability, reproducibility, and security worth the hassle of an immutable distribution? Is the grass greener in the land of Nix? If you’ve tried one of the immutable Linux distributions out there, we’d love to hear what you think in the comments.
you can pry [insert os/app/distribution here] from my cold dead hands!
Welp, that’s at least 80% of the comments made redundant right there.
Well there goes all the DOS comments at least.
I don’t want unchangeable Linux distributions! I want an unchanging PC hardware revision!
A PC that doesn’t change, that I can run gcc on, and slim the kernel down to a minimum for.
Ffs, it’s not unchangeable, you can install apps and shit, just differently. These immutable distros are good, but not for the Linux geek; they’re for old lazy Linux geeks like me who just want a working OS where I can install and use “apps” and not be tweaking it as soon as a new version breaks my setup, and so on. And it’s really good for the PC gamers that don’t know shit about Linux but don’t want to be on shitty slow Windows. It’s just not for you if you just have to use Arch… or SELinux… stop complaining, it’s not for you to begin with.
A voice of reason. Amen.
I’ve downloaded Bazzite as I’m going to try it on my W10 Asus Zeph that isn’t W11 compliant.
I really don’t want to screw around with the OS (which is one of the main reasons I hate W11).
I love immutable systems when I want an appliance (gaming consoles count) and also when I know I am going to play IT-guy for the real user (family, children, etc). Of the two dozen computers I regularly touch or advise on, about half are some variation of immutable-with-overlays. The time they’ve saved me is hard to calculate, but since they take basically no work to maintain and the other half all consume a few hours a month… it’s a meaningful amount.
For my personal workstation I just like having backups and keeping the system out of my hair when I want to install or change things. But I’m still considering Silverblue, if my work or hobbies shift.
Spelling error in article: Curputino
Spelling error in article: BRTFS
Thanks! Fixed.
Nothing for “Immunability” in the 50 shades heading?
Sure! While we’re at it. :)
“BRTFS” -My butt, 2025
With Valve doing their thing, I suspect in only a few years’ time most folks’ first experience with Linux will be Valve’s, or one of the spinoffs with that extra bit of work done to support more hardware. So while immutable isn’t going to be my go-to option, it’s probably going to be the default soon. Great for less tech-savvy users, and for my own Steam Deck I have no reason to switch (when gaming anyway – I do boot all sorts of things off the SD card whenever I want to use it as a real computer).
I think if I wanted an immutable distro I’d want to make it myself though – so everything core to my needs isn’t in the container unless I want it to be for some reason. But I pretty rarely have any trouble, as the package manager tends to solve all the dependency stuff for you just fine most of the time anyway.
android of course does this..i imagine roughly in the same way as macos. they keep changing the name of the underlying partitions but fundamentally the “OS” is in read-only /system, and everything else is under some complicated access control (so that one app can’t read/write another app’s data…usually). 15 years ago i found it was useful to root android and ‘repair’ components in /system, and for a long time i had scripts “rwsys” and “rosys” (mount -o remount,rw /system) to help me manage them. these days i don’t bother because i hate the work, but android keeps doing stupid things, making me feel like i’ve made a mistake in embracing vendor-fubared android. anecdotally, every “extremely chinese” android device i’ve ever owned (pritom, valuepad, etc) has come pre-rooted, and i wonder if everyone else has that experience too?
on ARM SBCs, i do the same rwsys / rosys dance, because i don’t want to wear down my microsd. but it’s really just ‘regular’ debian, and i just run rwsys whenever i want to run apt. my first step is deleting systemd, and then it’s usually very easy to put system logs and so on in ‘tmpfs’, and make root read-only.
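For anyone wanting to replicate that rwsys / rosys trick, the scripts amount to little more than a remount; a sketch, assuming the read-only filesystem is mounted at / (substitute /system for the Android case):

#!/bin/sh
# rwsys: make the system partition writable for maintenance
mount -o remount,rw /
# rosys is the same line with ro in place of rw:
# mount -o remount,ro /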
i understand why containerization people want to take this to an extreme. it’s really all about deployment at scale. not much use for me, of course. though i do find it handy to run my browser under a container, and if lxc presented me with a stock immutable browser base container, i’d probably use that (instead of using a shell script to produce it from some stock ubuntu image). it’s just nice to be certain that upgrading libraries on my PC won’t break my browser, and vice versa. such an insane web of dependencies, and really just focused around that one application.
I have to admit that I am tempted to go with an immutable distro on my next Linux laptop, or maybe I will go Proxmox and run that on my next Linux machine. So many options. I can see the value of it just working. Frankly, I wish my company would go with Proxmox or an immutable Linux and run Windows in a VM on it, if for no other reason than that when it is time to get a new laptop you can just move your VM to the new machine and get back to work. Even better would be if you could have a powerful desktop along with your laptop and just move the VMs between them.
Recently installed Proxmox here. Takes a little getting used to. Even with Portainer, things can still take work to get right.
Homebrew was created long before the macOS system volume was locked down. The actual reason for its creation is that the author disagreed with how the existing package managers (MacPorts, Fink) worked, esp. w.r.t. dependencies.
Third-party Mac package managers have always worked outside the system folders, because messing with them has always been bad practice and very fragile. After seeing distro version upgrades explode a few times, most Linux users learn this lesson as well.
I discovered macos’ immutability a couple years ago when I tried to put a program in /usr/bin and was momentarily irked, but got over it when I found that I still had access to /usr/local. Honestly, in thirty years of messing with Linux and even more with Unix, I’ve never WANTED to mess with system files. In fact, when I started working with Unix (BSD 4.2) on a 68020-based workstation in the 80s, I kept a pristine OS image on 9-track tape, because every time my system crashed due to faulty hardware I was troubleshooting, I knew there was a significant risk that something in my OS no longer worked. So I kept a two-sided refrigerator magnet, red on one side and green on the other, on the front of my rack. When I restored the system from tape, I flipped it to green, the next time it crashed I flipped it to red, and if it started misbehaving while in “red” state, I just grabbed the tape again. This way I didn’t have to waste time doing the restore for the crashes that didn’t break anything. I didn’t have time to play with broken OSs; I had work to do.
I have found macos to be a godsend after too many years of Windows and yes, even Linux (just TRY building an application linking to any libraries other than the ones in /usr/lib). In macos, if you build an application that needs a particular version of a particular library, you can just put that library in the .app package. None of this nonsense of having to sandbox the entire application. Flatpaks and snap are ridiculous. Homebrew might be all right if enough applications were available for it, but it has given me a lot of trouble. I use macports whenever possible for non-native applications because it doesn’t have a special walled-off area for all non-macos applications.
haha i’m delighted by the contrast between our takes… i was shocked to read that you never wanted to mess with the system parts. i am always messing with system stuff, and for the same reason that you don’t: i have work to do, i don’t have time for a broken OS. a working OS has so rarely been the default state in my life that i have very often needed to ‘root’ a device to fix a bug in the OS so i could get work done :)
i did not even find macos to be bug free, and throughout my life i am exposed to a bunch of OSes, many of them supplied by hardware vendors, and still i have yet to find one so bug free that i haven’t resented the walls they put up to make me not fix it. must be something wrong with me that i’m unable to be content
I like Aurora and Bluefin, they’re both filled with nice defaults and things just work out of the box on all my hardware (Running a wide gamut of NVIDIA, AMD, and Intel GPU/CPUs).
The problem comes when you want to run weird things or do development that isn’t web-centric. Building and maintaining entire container systems to generate dev environments for projects gets old. Running CLI apps is still a bit weird since flatpaks really aren’t made for it so there aren’t many prebuilt packages. Homebrew helps but once again, it’s hit or miss on package availability and freshness.
The last major hitch I had is the bootloader. It’s the interface that all your backup OSTree images hinge upon. Out of the box it only displays the two previous system states, with no real contextual information other than ostree:1, ostree:0, etc. So hopefully you know what those mean when you find out that the stable image it pulled sometime in the last week breaks plugins. And despite all the advances to the OS, grub is still grub; so if you hose that (by running ostree update as root, for instance) you hose your entire system with no hope of recovery, since the boot process is so unusual. Normal OS recovery methods don’t apply here. Don’t ask me how I know. :c
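One partial mitigation, for what it’s worth: OSTree lets you pin a known-good deployment so updates can’t rotate it away. A sketch, where the index comes from whatever rpm-ostree status lists:

rpm-ostree status          # deployments are listed newest-first, starting at index 0
sudo ostree admin pin 0    # keep deployment 0 around until you unpin it
# sudo ostree admin pin --unpin 0   when you no longer need it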
Nix is the only solution that really makes sense to me tbh. Containers aren’t free, either in CPU or storage. And let’s not even get into bugs caused by containers compiled for one version of the kernel and running on another. Please note I said containers – not VMs. Containers use the host kernel. A real lockfile for my OS – that’s great. And configuring it all via git? Also great. Of course, I’ve been using the same Arch install for over 10 years now (literally, I just dd to new hardware…), so the odds of me putting the effort into switching are pretty small.

nixos is my first Linux distro and it will likely be my last; I don’t see myself using anything else any time soon. it has a steep learning curve but it is everything I wanted.
on other distros, you use cli commands or random files to configure everything. this has the problem of “what change did I make 6 months ago that could be breaking my system now?” with nix, everything is written in one file in one place (or one repo of files backed up with git ideally). if I want to know what setting I set a while ago, I just start reading my configuration.
nix is a very easy language to read, but not so when it comes to writing it. you can be as fancy as you want, or as simple as you want. this is a blessing and a curse, as you are free to do whatever, but there are few agreed upon ways to do something so tutorials rarely work for everyone.
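The upshot is that the whole system becomes as rollback-friendly as the immutable distros above; a sketch of the standard workflow on stock NixOS:

sudo nixos-rebuild switch              # build and activate a new generation from configuration.nix
sudo nixos-rebuild switch --rollback   # step back to the previous generation
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system   # list them all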
As a middle step there is etckeeper. It does what it says on the tin: handle the files in /etc with a version control system. And the package tools should already handle the files outside it.
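Getting it going is about as simple as version control gets; a sketch, assuming the distro package and git:

sudo etckeeper init                               # put /etc under git
sudo etckeeper commit "baseline before tweaking"  # snapshot the current state
cd /etc && sudo git log                           # answers “what did I change six months ago?”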
I didn’t see any mention of Tiny Core Linux, which is great for small systems so I will mention that.
Each time the system boots, a pristine copy of the Tiny Core system files is made into a RAM file system. Likewise, a small RAM file system is used to load each application. By running from RAM, the system is fast. A compressed file system in RAM provides paging, if that’s ever needed.
Tiny Core Linux has applications up on a file server and has an effective package manager.
For your own software development work you can mount persistent media into the file system. Following a boot just make sure the persistent media is mounted. Then you can package your work to load with the other applications.
All the details are in the Tiny Core book on their web site.
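For the curious, persistence in Tiny Core is driven by boot codes plus a backup list; roughly like so, with device names as examples:

# kernel boot code: tce=sda1 waitusb=5         <- where extensions and backups live
echo 'home/tc/project' >> /opt/.filetool.lst   # mark files to persist across boots
filetool.sh -b                                 # back them up before shutdown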
If you guys wanna check out something awesome look up astros microvms for nix. All the benefits none of the drawbacks!!
I moved from an initial stretch of System 3 -> macOS to NixOS in 2019, installed it on all of the things, and never looked back.
I don’t see myself ever running a non-declarative & -atomic OS again. Also I would never suggest NixOS to someone else. People rightly bounce off the quirky-at-best language and barely-present tooling. It very much is a thing you experience and then have strong opinions on.
My impression is that the group most likely to bounce off is that of very experienced Linux people. They risk entering with an expectation of being able to directly apply a lot of know-how, only to find themselves fighting the system at every turn.
Personally I’m most definitely in the “lazy dev” camp. I have zero interest in twiddling OS installs. I just want to arrive at something which works well for me and then not have to think about it again until I move to a new version and get told of a few things needing adapting. I run 20-30 installs this way across hardware & VMs, for work and family.
Speaking of updates, those just run headless on boot – no progress bar, no notification. When I boot next time I just run newer software. And if I lose power or just shut down in the middle of a background update, it just picks up where it left off on next boot. Why is it still acceptable in present year for powerloss during an update to potentially brick a device?
Anyway, try NixOS at your own peril – you might bounce off hard or you might fall down an amazing rabbit hole – either way, I’m sorry.
About 3 years ago, I was hopelessly hooked on distro hopping. That is, until I installed NixOS. When I originally installed it, I had a pretty extensive homelab with a 3-node Proxmox cluster, TrueNAS, and opnSense. Not to mention, I had to resize my disk partitions because NixOS was the 4th Linux distribution installed on that disk.
For me, the idea clicked immediately, and the value proposition was obvious. I still haven’t “hopped” off of NixOS. That 4th OS partition on my NVMe eventually grew to take over the whole thing, and now, I still have all the hardware I mentioned above, but the only thing left in its original state is opnSense; literally everything else is running NixOS. I love opnSense, but it’s on borrowed time. :D
I’m not sure why I typed all this out, but if you made it this far, and you’re interested in NixOS, go give it a shot. You never know, it might become your new favorite obsession!
It’s funny when people consider themselves power users and then look at immutable distros and say they can’t do much with them.
You can build your own image. It’s a new thing to learn, of course, but you also have to learn NixOS, and I can guarantee building your own image is easier.
Having said that, it is also true that if the underlying image changes and you don’t like those changes, you then have to modify your image again to revert what the devs of the base image did, even though you didn’t actually make any new changes yourself.
So yeah I can see that friction there.
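For reference, “building your own image” in the Universal Blue world is a short Containerfile plus a container build. A minimal sketch – the base image name is Universal Blue’s published Aurora image, the rest is illustrative:

cat > Containerfile <<'EOF'
FROM ghcr.io/ublue-os/aurora:latest
RUN rpm-ostree install htop && ostree container commit
EOF
podman build -t my-aurora .
# push to a registry, then on the target machine:
# sudo rpm-ostree rebase ostree-unverified-registry:ghcr.io/<you>/my-aurora:latest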
On a tangent – what is the main advantage of this style where every piece of software has to be “assimilated” into the operating system by shotgunning its resources across a hierarchical file system? It seems that all these troubles stem from that simple design choice.
Why can’t I just have a piece of software and its resources and libraries on a flash stick and launch it from there? Why do we need these elaborate container systems and virtual layers to make that happen?
i think the word ‘elaborate’ could be removed from your comment and then it might answer itself
there’s definitely things out there that are too elaborate but there are a ton of approaches to packaging an application in a single directory out there, and they run the gamut in complexity. i don’t like docker, and even lxc gives me some ick from the complexity and unknowns, but there are many-many alternatives. it just depends on how hardnosed you are willing to be when trading off dependencies vs duplication.
for example, android and macos are just points on a vast spectrum of unix package management tradition. and for all their brilliance at allowing apps to exist in just a single directory, they also still find it useful to have formal ingress into the host, with hooks and, ultimately, files sprayed across the filesystem. (not denying the value of the way they manage this spray!)
As If I had any say on the matter. I’m the user – I have to go with what is given.
For instance, I wanted a particular music player on Ubuntu but it wasn’t in the repository, so I had to find another Debian based package for it. That had different assumptions about where some resources and configurations went, so it didn’t work until I manually moved the files around, which then borked the package management for that app.
Tried AppImage? Getting an app installed is easy (download, +x); getting it into the menu is a minor PITA, but no dependencies… it’s all there.
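The install step really is just those two actions; the file name below is hypothetical:

chmod +x ./SomeApp.AppImage   # mark the downloaded image executable
./SomeApp.AppImage            # and run it in place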
https://ludditus.com/2024/10/31/appimage/
The problem seems to be that AppImages don’t really work because they cannot reasonably and don’t include ALL libraries that must be present without carrying the entire operating system with them (there’s an “exclusion list”), yet they cannot reasonably rely on everything being there because that was never truly standardized, so as an end result the user reports that about 2/3rds of their AppImage applications failed to work when switching between distributions.
The comment in the exclusion list: “This is a working document; expect it to change over time.”
So the problem remains. The AppImage package must be tailored to the distro, and even then there is no guarantee that it keeps working as the distro gets updated or upgraded because there is no stable base and no agreed standards.
I have an older laptop I bought with Windows VISTA. I ended up putting 7 on it. Worked well for a long time.
Then I put Linux Mint on it. All I do is use that laptop for doing the cable modems here in the building I live in, so it doesn’t get that much use doing anything else. What I want to know is, MS hit a home run with Windows 7; it was stable, got regular updates and patches, and just WORKED. Not sure why the need to go to 10, and now 11? The Linux Mint? Still stable, still working. Not sure I understand the reason for all the changes in Windows. “Engineers, they love to change things.” –Dr. McCoy, Star Trek: The Motion Picture.
Engineers and developers do like to make new things, but when talking about Windows, it’s about making sales.
Do you still eat Chef Boyardee for lunch, Billy?
The best immutable Linux for containers is Flatcar Linux, a CoreOS drop-in. Red Hat created Fedora CoreOS, which has some advantages over Flatcar, like cloud-init support, but I feel Flatcar is just easier… and it has Nebraska, fantastic mass patching for Linux.
NixOS forever. You won’t regret it™
OSX is not the first immutable Unix by far; that predates Linux, and of course it was possible to do on early Linux as well, and even embedded Windows before OSX was available. Note I don’t say any of this was convenient or easy. It was mainly aimed at managed deployment.
I’m a new Linux user (as of June 2025). Based on my own research, I started with UBlue Aurora and discovered that it’s just Fedora Kinoite with some stuff thrown in that beginners probably don’t need. But KDE Plasma didn’t work for me because KIO-gdrive is broken. I also tried UBlue Bluefin and Fedora Silverblue. Both work really well. And I particularly liked the automated mounting of Google Drive using GOA and GVfs. I just couldn’t get comfy with the Gnome desktop paradigm, even with extensions. I also toyed with Budgie atomic. I liked it, but the X11 display server kept me from running Waydroid. After briefly playing around with Manjaro, Ultramarine, MX Linux, and LMDE, I ended up back at Fedora Kinoite and will probably stay, now that I’ve figured out how to use rclone to mount Google Drive on the file system. Kinoite/Aurora isn’t as rock-solid as Silverblue/Bluefin, but it’s what I like for now.
Sounds very familiar.
https://alt.os.linux.ubuntu.narkive.com/dXEt4XwV/evolution-of-a-ubuntu-user
i know one thing i dont like about steam os is how hard it is to set up a build environment on it. there are a lot of old games with linux ports where its easier to get the windows version to run than to get a functional native linux build. i need a solution better than “build a flatpak on another computer running arch”. what if i dont have another rig? what if it runs something other than arch? we will not even discuss building a flatpak (that sounds hard).
I’m gonna try another fork. Ubuntu’s too broken already for my taste (generic BT dongle acts up for example) when I buy another machine.
On a more general note Linux will never achieve supremacy without gaining the monolithic quality of Windows. Again it’s just too broken to be taken seriously.
I just want a physical write protect switch on the drive again, so I can lock my core system down hard beyond any potential of software overwriting anything.
The closest that I have been able to find to this recently is that SD cards have both temporary and permanent write protect bits which most interfaces are not able to override. Of course SD cards have other issues, which may make them questionable as the core OS storage medium.
And, of course, you would need a configuration that keeps swap and tmp and things like that on the user drive rather than the system drive.
But it should be trivial to do, and that is the kind of immutability that I would find valuable.
Unfortunately, trivial does not equate to easily available. There are thumb drives which claim to have write protect switches, but my understanding is that they rely on the operating system honoring a read-only bit, which I do not consider strong enough. And I still have not seen a USB traffic filter which can be plugged in between the system and external drive to provide this feature, except for some forensic tools which cost entirely too much.
You would think hackaday would have featured such a filter by now…!
It’s not for me on my desktop, but I can fully understand the appeal. I have a Steam Deck and I just want that amazing device to “just work (TM)”. So this is the way to go. And with the expected Steam Machines for the living room, if they come out, it might be a huge increase in the amount of Linux users running a system like this without even knowing it. It’s great for those situations. No need to mess with it, no reason to mess with it, no option to mess with it. I wouldn’t want this on my desktop, but for a gaming machine it’s perfect. Personally I don’t really tinker, but I have my system set up the way I like it. I use i3wm on Arch with Trizen. I installed Arch on the workstation at work in early 2018 and it’s still running perfectly smooth. Should probably get a new computer but my setup is so nice. My install at home is a bit older than that and I did change some hardware during that time, but kept the original install. I’ve been using Linux since before Y2K. I want my things to feel the way they feel and rarely change, but some of the software I use isn’t in normal repos. So if I install Debian/Ubuntu/etc, I have to manually compile software and keep it up to date, and to be honest, I’m just too lazy to do that. Especially when the software is just there in the AUR.
I use Arch by the way.
I think this article is written from a very narrow perspective.
As far as the net goes, there are already a multitude of locked cloud services. AWS/Google and more.
iOS and Android dev teams are comprised of thousands of developers. The cost to build, test, certify and release each OS build costs more than a house. These builds are used by billions of end-users, they are made to be un-hackable, because:
Warranty claims. Possibly from an overly enthusiastic “non-lazy dev” posting an easy to use sudo/curl install script. github is full of those. No end-user support team is capable of dealing with anything on that tech level. They have to RMA devices. Too costly.
DRM commitments to your favorite streaming service. They have ways to penalize non-compliant products.
If the device is a phone, oh boy…. certifications. Things that make certification work are not cleanly located in a separate chip. Those Severe Weather Alerts need a UI and sound. They are required. Not something you can allow the “non-lazy dev” to break and possibly distribute to end-users, possibly via sudo/curl command.
I make quite a lot of my own clothes, or anyway more than most people, and I highly recommend it. So I’d like it if there were more and better places to buy fabric. But I’m not going round saying ready-to-wear clothing stores shouldn’t exist. Normal people would correctly think something had gone wrong with my brain.
If you enjoy tinkering with your OS as a hobby, that’s cool, but it just has nothing to do with making computers work well for normal use. I feel like there’s a vocal minority who specifically don’t want regular people included in the term “Linux users”, ‘cause otherwise why care what distros are available to everyone else?
Typo in paragraph 2: dif/Bferent
I’m guessing that the /B was intended to be a command to the editor, such as bold.
A good guess!