Linux has changed. It was originally inspired by Unix, and there were certain well-understood, if not well-enforced, rules that everyone followed. Programs did small things and used pipes to communicate. X Window servers didn't always run on your local machine. Nothing in /usr contributed to booting up the system.
These days, we have systemd controlling everything. If you run Chrome on one display, it is locked to that display and it really wants that to be the local video card. And moving /usr to another partition can easily prevent you from booting up, unless you take precautions. I moved /usr and I lived to tell about it. If you ever need to do it, you'll want to hear my story.
A lot of people are critical of systemd, including me, but really it isn't systemd's fault. It is the loss of these principles as we get more programmers, many of whom are influenced by other systems where things work differently. I'm not just ranting, though. I recently had an experience that brought all this to mind and, along the way, I learned a few things about the modern state of the boot process. The story starts with a friend giving me an Intel Compute Stick, but the problems I had were not specific to that hardware; they came from how modern Linux distributions manage their start-up process.
The Story
As I said, a friend of mine gave me an Intel Compute Stick. It was one of the early ones that was fairly anemic. In particular, it only had 1 GB of memory and 8 GB of flash storage. It booted an old version of Ubuntu and it was, as you’d expect, pretty slow. But I liked the form factor and I have a new workshop that could use a permanent PC, so I decided to upgrade it.
There were the usual problems. A BIOS upgrade broke the network. Upgrading to KDE Neon fixed the network, but the newer kernel had the dreaded C-State bug that caused hangs. Luckily, that's easy to work around. So after some effort, I had a reasonably working system. Sort of.
The Problem
The problem was the 8 GB of flash. I put a 64 GB SDCard in, but I didn't want to boot from it. With Neon installed and a few other essential things, the flash was very close to 100% full. My plan was to move /opt, /home, and /usr over to the SDCard. I thought it would be easy.
Traditionally, this is a straightforward process. First, copy the files over using something that gets hidden files and links without changing them. Many people use rsync, but I usually use tar. Then you remove the old directory (I rename it first, just to be safe, and delete it after everything is working). Finally, you make a new empty directory and change /etc/fstab to mount the disk at boot time.
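To make that concrete, here's a rough sketch of that traditional dance for /home. The device name /dev/sdb1 and the temporary mount point are placeholders, not anything from my actual setup:

# Sketch only: assumes the new partition is /dev/sdb1 and is already formatted ext4
mount /dev/sdb1 /mnt
# Copy everything, preserving permissions, owners, links, and xattrs
rsync -aHAX /home/ /mnt/
# or, the tar equivalent:
# (cd /home && tar cf - .) | (cd /mnt && tar xpf -)
mv /home /home.old            # keep the original around until you're sure
mkdir /home
umount /mnt
echo '/dev/sdb1 /home ext4 defaults 0 2' >> /etc/fstab
mount /home                   # verify everything, then delete /home.old later

The rename-instead-of-delete step is the cheap insurance mentioned above: if anything goes wrong, the old directory is still there.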
For /home and /opt that was fine. The system will boot without difficulty, and I had that working in no time. I knew /usr would be a bit harder, but I figured I could be in a root shell without the GUI or just boot off a USB drive and do all the same work.
I actually anticipated four mounts. I mounted the entire SDCard at /sdcard. Then I did bind mounts from /sdcard/opt to /opt and /sdcard/home to /home. The /usr mount would also be a bind mount, but it wasn't going to be that easy.
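For the parts that do work from /etc/fstab, that plan translates into something like the lines below. This is just a sketch: the UUID is the one from the unit file later in the article, and the x-systemd.requires-mounts-for option makes sure the card is mounted before the bind mounts are attempted. The /usr line is deliberately missing, because that's the one that causes all the trouble:

UUID=b3b6ac3b-2109-487c-af34-c49586412cea  /sdcard  ext4  defaults,errors=remount-ro  0  2
/sdcard/opt   /opt   none  bind,x-systemd.requires-mounts-for=/sdcard  0  0
/sdcard/home  /home  none  bind,x-systemd.requires-mounts-for=/sdcard  0  0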
The Bigger Problem
My attempts to move /usr caused the system to stop booting. Why? Turns out systemd handles mounting what's in /etc/fstab, and systemd requires things in /usr. I thought perhaps systemd would be smart enough to boot the system if it didn't have to read /etc/fstab, so I decided to mount the SDCard using systemd's native facilities.
Systemd can handle a mount just like a service. That means it will manage the mounting and also weave it into the dependencies. So it is possible to require a service or other mount to be ready before mounting some disk and, of course, other services and mounts might depend on that disk as well. In theory, that's perfect, because it doesn't make sense to try to mount /sdcard/home, for example, before mounting /sdcard.
Here's the mount definition in /etc/systemd/system/sdcard.mount:

[Unit]
Description=Main SDCard Mount
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target

[Mount]
What=/dev/disk/by-uuid/b3b6ac3b-2109-487c-af34-c49586412cea
Where=/sdcard
Type=ext4
Options=defaults,errors=remount-ro

[Install]
WantedBy=multi-user.target
Obviously, the UUID would change depending on your disk. Not so obviously, this file must be named sdcard.mount. If the mount point were, say, /usr/lib, then the file would have to be usr-lib.mount.
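If you don't want to work out that name by hand, systemd-escape can compute it for you; for example, for the hypothetical /usr/lib mount mentioned above:

systemd-escape -p --suffix=mount /usr/lib
# prints: usr-lib.mount
systemd-escape -p --suffix=mount /sdcard
# prints: sdcard.mount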
Once that mount occurs, you can mount /home:

[Unit]
Description=Home SDCard Mount
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
RequiresMountsFor=/sdcard

[Mount]
What=/sdcard/home
Where=/home
Type=none
Options=bind,x-systemd.requires-mounts-for=/sdcard

[Install]
WantedBy=multi-user.target
Note that the mount requires /sdcard. Once you put these files in the right place and reload systemd, you can start these units and the files will mount. You can enable them to cause them to start at boot. The /opt unit looks just the same, except for the file names.
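In case it helps, the reload/start/enable sequence is the usual systemctl incantation. The unit names here assume you called the files sdcard.mount, opt.mount, and home.mount:

systemctl daemon-reload
systemctl start sdcard.mount opt.mount home.mount    # mount them right now
systemctl enable sdcard.mount opt.mount home.mount   # and at every boot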
Not So Fast!
This still leaves the problem with /usr. Oh sure, it is easy to write the unit, but the problem is that systemd needs some libraries out of /usr, so the system will refuse to boot. I considered copying the libraries over to either /lib or somewhere in the initial RAM disk, but after it turned out to be quite a few, I decided against that.
I finally decided that having everything mounted early in the boot process would be the right answer. That way systemd can imagine it has a complete disk. I had actually considered using LVM to join the disks together, but decided that was bad for a lot of reasons and might have had the same problems anyway. I wanted control over what was on the SDCard versus the internal storage, so it was time to look at the initramfs scripts.
Booting (Some) Linux Systems
Most modern Linux distributions don't boot your root file system directly. Instead, they have a compressed file system that loads into RAM and then boots. This early environment is responsible for getting the system ready for the real boot. Among other things, it mounts the root file system and then pivots to make it the real root.
For Debian-style distributions, this is the initramfs, and you can find some user-definable scripts in /etc/initramfs-tools. The bulk of the predefined ones are in /usr/share/initramfs-tools. In the scripts directory you'll see a number of subdirectories with suffixes of -top, -bottom, and -premount.
As you might imagine, init-* occurs at system initialization and local-* happens as local disks are mounted. Don’t confuse these with hook scripts. A hook script executes when you are building the initial file system. That helps if you need to modify the boot environment statically. The scripts we want are the ones that execute at boot time.
The top scripts run first, followed by the premount, and then the bottom. So init-top runs first and init-bottom is the last thing that runs. In between, the other scripts run and by local-bottom, the root file system should be ready to go.
Documentation and Gotchas
If you read the documentation, you'll see that the scripts have a specific format. However, there are some examples that are misleading. For example, the template script shows sourcing /usr/share/initramfs-tools/hook-functions to load common functions. That's great if /usr already exists, but for us it doesn't. Some other scripts use a copy that is in the boot environment, located at /usr/share/initramfs-tools/scripts/functions. That's what I used in my script:
#!/bin/sh
PREREQ=""
prereqs()
{
   echo "$PREREQ"
}
case $1 in
prereqs)
   prereqs
   exit 0
   ;;
esac

. scripts/functions

# Note: our root is on /root right now, not /
mount /dev/mmcblk2p1 /root/sdcard
mount -o bind /root/sdcard/usr /root/usr
mount -o bind /root/sdcard/home /root/home
mount -o bind /root/sdcard/opt /root/opt
The only tricky part is that our eventual root file system isn't at /, it is at /root, so the mounts reflect that.
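One practical detail the script alone doesn't cover: on Debian-style systems, a boot-time script only takes effect after you rebuild the initramfs. Something like the following, assuming the script is called mount-sdcard (a made-up name) and lives in one of the boot-time script directories that runs after the root is mounted, such as local-bottom:

cp mount-sdcard /etc/initramfs-tools/scripts/local-bottom/
chmod +x /etc/initramfs-tools/scripts/local-bottom/mount-sdcard
update-initramfs -u    # rebuild the current initramfs so the script is included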
The Result
Of course, I had to disable the systemd mounts for /opt and /home, although I could have left them and not put them in this script. Now, by the time systemd gets control, it can find all the things in /usr it wants and the system boots. Moving those three directories left me with about 70% of the internal storage free and only took up a small fraction of the SDCard.
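Disabling those now-redundant units is just the reverse of enabling them, roughly:

systemctl disable home.mount opt.mount   # stop them from starting at boot; the initramfs handles the mounts now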
There are probably many other ways to do this. I mentioned LVM, or you could revert to the old init scripts. But this does work reliably and is very flexible once you get it all figured out.
The Intel stick is pretty small, but we’ve seen smaller. If you do try this at home, don’t forget that logging to eMMC devices isn’t always a good idea.
I tried something similar for my OrangePi PC board. The SD card I had at hand was very slow, and only 4 GB, so I decided to use a 500 GB USB hard drive as my main storage instead. I promptly copied the /usr, /lib, and /opt directories to the hard drive and edited the fstab to mount them on boot.
As expected, it did not work. Fortunately, I now know why. Although I am sure editing the initramfs for an SBC like the OrangePi PC will be a lot harder.
On a U-Boot system, initramfs should behave more or less identically to a UEFI system, albeit with a device-tree file in addition to the kernel and initramfs.
Or just stick with non systemd distros.
Artix Gang approves this post
artix/void/gentoo/devuan/slackware/alpine
+1
Love you Slackware.
+5 insightful
I used Openrc on Gentoo up until the start of the Covid crisis.
While at home I wanted to:
try Anbox (not worth it, hardly any apps worked and Snap dependency makes it hard to keep working on Gentoo anyway),
upgrade to the latest Anki (yes, a flashcard app requires Systemd now, due to dependencies inherited from QT),
and speed up my /home nfs mount on boot (Systemd didn’t really help).
Now my hard drive has physically died and I am temporarily booting to my second, Windows drive.
When I reinstall I’m not sure if I will go back to Openrc or Systemd.
I hate the design philosophy of Systemd so much, but it's reached its tendrils into everything. It's hard to avoid. I even tried FreeBSD as a desktop for a while, but lack of Widevine for Netflix, Hulu, etc. ensured that would be short-lived.
How does this happen that seemingly one person can just redesign so much of a popular open source operating system in their own image and then everyone is pretty much forced to use it or do without?
And as HaD reported a while back that “one person” wants to mess with /home now!!!
It’s amazing how little the current Linux fan base understands about Unix. A very bright dev on my team had never heard of the head command.
I believe Linux has “evolved” to the point where it has the complexity of Windows with the user friendliness of Unix – not a great combo :(
Agreed. Ubuntu is doing things that Windows used to get blasted for. Unattended upgrade is a new Ubuntu service that does just that. You expect to shut down or reboot and, no, wait, there are upgrades happening without your consent. It even happens just sitting idle. I try to install something with apt and see that something in /var is locked because the upgrade service is running.
Ubuntu has been garbage for at least a decade. It’s no surprise when you think about the business model that Canonical employs and how similar it is to the Ballmer days of Microsoft.
It’s great for people who are interested in Linux but are too scared to make the jump to a real distro that gives you back the power of your hardware.
“unattended-upgrades” was created by Michael Vogt for Debian and the program is also a part of Ubuntu, to no surprise. But no one is forced to use it. How about making the distinction about that and what Windows does?
How about educating the users to choose what fits their needs, instead of mindlessly throwing crap at developers who offer functionality that’s important for plenty of people? This is especially important for servers, where you want your security up to date.
You should rather talk to the developer(s) about making the update/upgrade process "atomic", so you can shutdown/reboot after the current package is installed, not the whole 159 if that's ever the case. That would actually be useful, and I bet people with SSDs and decent CPUs will likely have no issue with waiting a few seconds for 1 update to complete before shutdown/reboot. They can resume updating once they start their OS back up.
But obviously, for people who use their computers several times a week, doing their updates when they get released, they will rarely get a large batch of updates. That means in most cases you don't really have to wait a lot for the package manager to finish. Also, it's not a common use case that people want to install new packages but the package manager is busy.
So are you advanced enough to know how the package manager works on the file system, but you’re helpless against a checkbox that disables the unattended upgrades? How?
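For what it's worth, the checkbox in question is a couple of apt settings. A sketch of where to look on a stock Ubuntu install (paths may differ on derivatives):

# Automatic upgrades are controlled by /etc/apt/apt.conf.d/20auto-upgrades;
# setting this line to "0" turns them off:
#   APT::Periodic::Unattended-Upgrade "0";
# Or toggle it interactively:
sudo dpkg-reconfigure -plow unattended-upgrades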
I don't know if it even makes sense to talk about a singular "Linux". Is Mint a good enough variation of Ubuntu? Are the Puppy Linux family better at cutting down complexity? Are Arch-based distros like Manjaro good, or is the original Arch the only true one? Debian? Linux has "evolved" to the point of not being a single operating system, and these conversations are kinda useless unless you reframe the discussion to a specific "family".
“It’s amazing how little the current Linux fan base understands about Unix.”
Not so much for explaining the head command but when it comes to design concepts maybe a good approach would be for the old timers to stop talking about the “Unix Way” or “Unix Philosophy” and just refer to these things as good design practices and explain why they are good.
I doubt the kids care about the name Unix.
Over a couple of decades ago, we interviewed a candidate for programming C/Unix,
he didn’t know how to use the “find” command…
Nice to see someone else badmouth systemd.
The old rule used to be that you had /bin in the root filesystem and would get /usr (and /usr/bin) later.
But Fedora a long time ago merged /bin and /usr/bin and abandoned that philosophy.
And now apparently systemd makes additional assumptions.
Moving root would always be fraught with peril (or certain disaster), but /usr used to be fair game.
Stories like this, based on real adventures are my favorite sort of thing.
By “the rule” you mean the entire series of rules spelled out in the Filesystem Hierarchy Standard ratified in 2004? ;)
https://www.pathname.com/fhs/
It probably all started to go wrong when they stopped using different volumes for /usr and mounting them separately. In the old days you could boot single user to recover a trashed /usr from tape.
” how little the current Linux fan base understands about Unix” how true.
At least the concept of users and privilege seems to have survived though, which I am sure cuts down the propagation of malware on *n*x.
That's the old rule; the new rule is that you put everything you need to boot into the initramfs. And while I also get annoyed and frustrated by the added complexity of this, there is a certain charm to it. Bricking the boot is harder, as at worst you should end up in the initramfs shell.
I resisted this for a very long time.
I thought the point was to be able to build hardware and filesystem drivers that are needed at boot as modules instead of building them into the kernel. But why would I ever want to do that? Is there a point to building something as a module other than being able to unload it when not needed? Exactly when do I use my computer without its hard drive or filesystem support? Maybe for some complex setup where /boot is using a totally different partition or hardware type than the other partitions. I never saw the need for that.
Now between UEFI and Systemd any attempt to KISS is a nightmare.
I used to use tar to copy a directory hierarchy. Now, I use “cp -a”.
Almost going full circle – before tar, I used to use “cpio -oBcduv” which preserved directory dates!
cpio was created many years after tar. So tar was the canonical way to copy directory hierarchies before -a was added to cp.
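For anyone comparing notes, the three approaches mentioned in this thread look roughly like this when copying one tree to another (the destination /mnt/usr is a placeholder and is assumed to exist):

# cp, preserving permissions, owners, timestamps, and links
cp -a /usr/. /mnt/usr/
# the classic tar pipe
(cd /usr && tar cf - .) | (cd /mnt/usr && tar xpf -)
# cpio in pass-through mode
(cd /usr && find . -depth | cpio -pdm /mnt/usr)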
This is NOT a flame, this is a genuinely curious question: when did cpio come about? Both tar and cpio are documented in my copy of _UNIX User’s Manual_, Release 3.0, Bell Laboratories, June 1980. Which, yeah, I should probably move to a safe deposit box before something happens to it.
Thanks for the question. Apparently my memory was faulty. I thought that I had used tar in V6 and then got cpio much later. But according to wikipedia, both tar and cpio were introduced in V7 in January 1979, replacing tp. I had forgotten about tp. Oh well, it’s been 40+ years. Seriously, thanks for calling me out.
P.S. I treasure my copy of the BSTJ July/August 1978 Part 2 on the UNIX Time-Sharing System
Ah, my memory wasn’t entirely faulty. cpio was introduced as a part of the Programmer’s Workbench 2.0, which was based on V7, but which we didn’t have. So, the V7 system I used in 1979 did have tar, but not cpio.
Why not just copy the things that systemd needs in the /usr directory on the root disk ?
Read the fine article. He references that.
Read the article again ;-) He doesn't reference /usr, only initramfs and /lib.
I was wondering the same thing!
I’m actually a bit of a Linux noob, but couldn’t you put the necessary files into /usr for initial boot up, and then mount /usr to the SD card where the full directory contents reside?
I honestly think that’s not a very good solution, since it could create other issues down the road.
Honestly, for this sort of issue, using initramfs would seem to be the better way to go REGARDLESS of distribution…
It’s a fair idea, it might boot for a little bit but you’ll run into issues later on down the line.
The reason why we don’t just move what is absolute necessary and remount later is because at any point in time an update might come along and add a new dependency into /usr without you knowing. When it updates it will be writing to the “wrong” location and on next reboot might not come back up and depending on how long it takes for that change to come up, you might have forgotten all about it, or worse, left a ticking time bomb for someone else.
Actually: I considered copying the libraries over to either /lib or somewhere in the initial RAM disk, but after it turned out to be quite a few, I decided against that.
So, yes, I could have copied everything needed from /usr/bin and /usr/lib over but after I did like 6 things and there were still issues, I decided it wasn’t worth it. Especially since I was trying to save space on / to start with.
What is wrong with the world these days? (rhetorical, please don’t answer! B^)
Root is not in /, but in /root ???
Libraries are not in /lib ???
Next, you’ll be telling me that to shut down a Windows machine, I’ll need to click “Start”!
B^)
Would it be appropriate to open a ticket with fedora/debian/… with the simple description “blowing away /usr causes boot failure” and a link to the LSB spec?
you'd probably get referred 'upstream' to Poettering, who'd then tell you to pound sand
let not sane design stand in the way of progress /s
In conclusion systemd is a garbage that needs to be purged from the earth.
I would be ok if it were more modular/flexible (the acid test being “can I only use this _part_ of systemd’s ‘ecosystem’?’) but that would require massive restructuring and wouldn’t be systemd anymore
Nope
I was commissioned by my job to do a study on whether an Intel ComputeStick could serve as an inexpensive replacement for our think client infrastructure. Among my bullet points (some good, some not) was that the Stick served as an excellent hand warmer under load.
s/think/thin
My waistline is telling me to think thin!
B^)
Ah yes /usr /usr/bin /usr/sbin /usr/local/bin /sbin all carry overs from the constraints of 5Mb RL02 disk packs. We still can’t shake that legacy.
This is nothing compared to the whole drive letter thing that dos and windows stole from vms, poorly.
What? Windows NT’s drive letters came via PC-DOS/MS-DOS from CP/M in 1974. VAX/VMS wasn’t released until 1977. Not only that, drive letters originally came from IBM’s CP/CMS in 1967.
One day all the executables will be stored in /bin.
No more floppy legacy.
One day far sooner, all executables will *be* /bin/systemd
basically busybox
I use Slackware, so should not care, but…
Lennart Poettering should be tried and convicted for treason and various crimes to humanity.
About 50 years in a Russian prison or a Chinese ‘re-education’ facility would be an appropriate sentence.
Wow, that is harsh. While I'm not a fan of their work in general, it's not like they are forcing you to use it personally; it's the distro you select, or your own lack of skill/interest in building your own, that makes you have to use it.
And it must be said that, despite the fact their stuff has been the biggest pain I've yet experienced in Linux to get working when it doesn't work or when you want to do something a little unusual, on the whole it does actually work.
So for me, just a stern disapproving stare for Mr. Poettering and the distros (also the companies that could pay for devs to support other methods) jumping in on it all, making it harder to avoid.
My favorite part is the implication that systemd is the reason for the /usr problem. It isn’t. It started with other Unix systems first, like with Solaris. Linux distros and thus systemd followed suit.
“These days, we have systemd controlling everything”
Not if I can help it.
I ain’t interested in something designed with the Windows “our way or the highway” philosophy, open source or not.
systemd hatin’ is so 2016! systemd is an imperfect step in the RIGHT direction. I am a Linux desktop user for 21 years now, I also enjoy systemd on servers. I get what some people complain about, perhaps it needs to be split in smaller chunks… but systemd is here to stay.
23 years and I agree. Also, you can move with rsync and just modify fstab. Some linux installers sometimes used to set this up for you. People who complain about systemd are mostly misinformed. Same with people who complain about Nvidia. It’s “cool” to complain about some things.
Considering Nvidia tries to bully with their corporate might to get things their way instead of adhering to standards that exist to prevent borkage, I can only assume you’re one of those “I wanna watch the world burn” kind of people constantly taking the piss at people pointing out stupidity.
Now that’s unfair. Pop!OS makes Nvidia “just work”. When Intel, and AMD, heck even ARM now, all battling each other. I think of Nvidia as just found what it needs to survive, whereas Ubuntu refusing to work with them is “bullying”. On the other hand, SystemD really is a monstrosity. Contrary to the Unix philosophy itself.
It's that last sentence that really sums up why systemd should not be the way it is.
As for Nvidia can’t say I’m a fan, but they do at least try to provide a working driver for GNU systems, so they get a pass in my book for bothering at all – not like even with Linux’s ever growing market share nvidia needed to – the core of Nvidia’s consumer GPU market is selling to windows gamers, and if they didn’t try to support anything else they would still do well because they make fast capable GPU’s that do what most folks want on the bloatware OS they love…
Can’t wait until systemd does away with all those messy plain text files and moves everything into a single flat database. Call it something like REGISTRY.DAT and have a nice GUI for editing. Make sure the GUI also needs to run from a container because who ever heard of static binaries!
Static binaries run pretty well in a container.
Do it right the first time or don’t do it at all.
Even OpenRC is less asinine, less unnecessarily convoluted, and more functional.
SystemD is the sunk cost fallacy turned into an init system.
My head exploded at this line: “Upgrading to KDE Neon fixed the network”
I am a fan of things that /just work/, but also once I’ve decided on a strategy for maintaining a system, I like to get to the bottom of the problems I run into. On the rare occasion that I upgrade the hardware in my daily driver machines, there are always a couple issues and generally I like to resolve them directly. Upgrading just the kernel, or getting a specific version of a driver, that sort of thing. Several times I’ve wound up patching the kernel, a couple times the patch was even accepted upstream. The idea of tying the whole platform to some driver compatibility issue makes my skin crawl. If I wanted KDE Neon, I’d install it…and if I didn’t, I wouldn’t install it for anything! Choices!
I hate systemd. A couple years ago, I decided that I was a bigoted old man in need of reform, and I went through a process to integrate my network setup (iwconfig+dhclient+openvpn, nothing fancy) into the systemd way of things. I kept peeling back layers of the onion, I tried 3 different approaches, and by the third I thought I had gotten to the systemd mindset…it did finally “work” but it still had severe limitations even after I’d jumped through all those hoops. And even though I had read a great deal of documentation, there were still a lot of hidden parts that I was only able to sound the depths of by looking at systemd source code.
I spent several days at it. The problem is that the fundamental design of systemd isn’t very elegant — there are too many different kinds of things, and each thing has different variants of the thing, and the rules that tie everything together are all hard-coded undocumented special cases. If they had made everything a “unit” and made the dependency system between them simple enough to express all the relationships without special cases, it would actually be brilliant and an improvement over sysv-init. But that’s the opposite of what happened.
I eventually googled “how to remove systemd from debian” and it turns out it only took about 15 minutes, and WOW all my problems are solved. Difficult hacks that took hours to set up and which I *KNOW* systemd would have gratuitously broken over and over due to “upgrades” became simple 5-line shell scripts that have worked flawlessly for me for 15 years.
I’m not saying systemd is bad…it’s only a little worse than Mac OS X, and I managed to use OS X for 5 years with only a little frustration. If you want that sort of integrated experience, you might as well use systemd…if you’re too poor to buy a mac and willing to accept a severely limited suite of applications.
Well, to be fair, my goal was to put Neon on to start with. I just decided not to troubleshoot the network until I put Neon on and then that had whatever combo of drivers were required. Clearly, I too often attack the problem directly, but in this case I didn’t want to waste my time fixing something for the sole purpose of wiping it out. It was easier to make a bootable Neon drive and install from it.
This is odd… I haven’t had usr, var, tmp, or home on a root filesystem in… maybe since my first slackware install in the 90’s.
Regenerating initramfs has always pulled everything it needed. Copy stuff, change mount points, mount them, regen initram and reboot. Even do this on headless systems and have had to recover via serial port exactly one time.
This time the fu in the title refers to a fubar not a skill.
Without focusing on systemd, what is described here is an exercise in stupidity, fighting every step of the way to do things that should be ridiculously easy. Putting mounts in /etc/fstab should just work, period, full stop. The fact that it no longer does is criminal. Added to that is the fact that having /usr on a separate disk is not supported anymore.
The article clearly shows a perfect use case for /usr being separate. There are hundreds of others. If your distro does not support it, then to me your distro is broken by design. And yes, I know that's the vast majority of distros.
This is why I actually DO maintain my own distro. I find it easier to compile the entire distro from scratch and know that everything works, and that it will continue to work tomorrow, than to continually have to relearn system administration and perform ‘hacks’ to do anything that the mainstream deems to be exotic. Which lately is pretty much anything other than blindly accepting the defaults.
The hate that systemd gets despite its clear advantages leads me to draw comparisons between Poettering and Cantor.
https://en.wikipedia.org/wiki/Cantor
SystemD is great in the way it is an example of how not to do an init system.
The trouble lies in that people insist on doing so anyway due to clout over brains.
Juuuust like politics….
avid linux user here, laptop at home, laptops and desktops in work during work and server that oversees backups and security for my premises. All on systemd. honestly, took some head scratching and re-teaching old tricks to get it to do what I wanted when it came to customisation I depend on but I dont really get the hate, different yes, but it works, and imho no more of a learning hump than init. im not a sysadmin, and my linux use is a hobby and through choice and certainly not a professional thing so perhaps I miss a lot of the nuances.
My use of Linux is both a hobby and a profession. And I strongly dislike systemd. As much as it simplified some things, it broke others, sometimes in non-documented and/or non-intuitive ways. Add to that a less-than-sane maintainer ("it is working as expected" is the usual first reply to bugs), and things get bad.
Personal experience: I needed two mutually exclusive services; if one crashes, another starts up and stays up. What was happening when I declared them mutually exclusive: systemd refused to wait for proper termination of one service and started the other immediately, right after sending signals to the process, without waiting for it to fully terminate. Which took a bit of time, which in turn crashed the other process, because the first one did not release the graphics card. What was 4 lines of shell in runit or a similar system turned into at least two days lost searching high and low through the documentation, issues on GitHub, and code. In the end, there was no universal solution, and I had to hardcode a few things.
Aww man, Linux is still disgusting.
Use a real Unix-like OS not a kludge and you will have better luck. Try FreeBSD instead of Linsucks and you will have no issues at all moving entire filesystems around.
There are many ways to look at it:
1 - You could say the 8 GB is too small (by 2020 standards).
2 - You could say Ubuntu is too large (bloated).
3 - In your case, you chose /usr as the problem.
Another way to solve it would be systemd-nspawn, I think. You could install a super-small Linux distro on the 8 GB, choose it as the root "/", and use that to boot it.
here is an example.
https://wiki.archlinux.org/index.php/User:Altercation/Bullet_Proof_Arch_Install#Boot_into_new_system
There are always many ways to solve a problem. In my case, I’ve been trying to standardize on neon across all my boxes. So as usual, context is everything. But you’re right. I could have put Alpine on there for example but that really didn’t suit my purpose.
May find this useful, thanks for the heads-up. I had a 486 system back in the day with all the spare HDDs that weren't very big, so I had the filesystem spread all over: root on one, /usr on another, and I forget what was on the 3rd… anyway, I would likely have gotten into a mess trying to do it the old way if I came across the situation that needed it. I figure it would probably have been something similar to the illustrated situation. I have a cracked tablet with only 1 GB flash I might try to turn into a Linux box sometime down the road.
That’s just one facet of the whole saga that is Linux nowadays. I’m “forced” to use it at home because I want the best hardware support, I have to use it at work due to historical decisions (and a restrictive Win policy).
I have a consistently higher uptime on my home Windows machines than on my Linux ones. But up until a few years ago I was having >500 days on the Linux ones, interrupted only by hardware or distro upgrades.
Trying to stay positive: I guess Windows has come a long way?
Windows has the bad habit (by design, really) of forcing reboots after applying updates. So you were implying that you can't have an uptime longer than 1 week on your Linux machines? Or have you disabled Windows Updates and you're happy to use Windows without security patches? Neither makes sense.
If your Linux uptimes were 500+ days, I can safely assume that you're not doing anything wrong in particular, nowadays, with your Linux distro of choice. So this begs the question: how do you manage to keep Windows up and running, and updated, for over a week? Sure, on rare occasions there can be updates that don't require a reboot. But most of the time, reboots are mandatory.
I mostly need to restart my Windows 10 machines after a major update, which is usually once per quarter. Things were MUCH different when I was on the insider ring. The only reason why I’m usually rebooting is to fix some laptop WiFi issue or to boot into Ubuntu. Most of the other updates I’ve seen don’t require a restart.
Under Linux, I'm not forced to reboot, but things start acting weird. On my Pi systems, there are updates (such as /boot) which require a reboot. On my CentOS desktop, it just starts getting slower and slower, for no explanation. Maybe it's confirmation bias, but I rarely get over 60 days of uptime on my Linux desktop and 100 days on my Pi systems.
As I no longer use Windows, I can’t tell if they did something to require less reboots, so I can’t be 100% sure, but also hard to believe, as that would’ve triggered quite some news. Have you tweaked the settings so you don’t have to reboot too often or you’re going with the default settings of an official Windows retail release?
I can see how corporate use cases might have a different update/upgrade policy that would be enforced by configuration and allow the users to be more productive and get less down time, but I don’t know what kind of Windows you use. Of course, I can’t assume much about pirated copies as they could contain any random amount of modifications against an official release, although it’s fair to assume that some are related to updates that would require restart, like stuff related to copyright protection.
Also, if you have to reboot to fix WiFi issues, that still counts as some OS annoyance that makes you reboot. I don't know what to say about the slowdowns with high uptime, as that would clearly need an investigation, although even a reboot a month is not bad for any OS. For your Pis, check raspi-config if you want to control whether the /boot partition is write-protected and whether the overlay FS is enabled: Advanced Options -> Overlay FS.
I’m a bit late getting here, but this is almost exactly what I’m looking for and the only thing I’ve found that’s even close.
I've successfully moved /usr to a separate partition before, but that was a simple matter of adding the /usr hook in mkinitcpio.conf and setting a pass of '2' in /etc/fstab. But now I need to bind mount a /usr from another filetree on a separate partition.
I need to mount a partition to /sub-directory/sub-sub-directory/sub-sub-sub-directory.
And then bind directory/sub-directory_1 … /sub-directory_n/usr to it.
Unfortunately, the above information re where to find the examples in /etc/initramfs-tools and /usr/share/initramfs-tools doesn’t help, because those locations don’t exist in Arch and my knowledge of systemd is sorely lacking. So, any help with this that will work for Arch would be **greatly** appreciated!
Thanks in advance.