Linux On A Floppy: Still (Just About) Possible

Back in the early days of Linux, there were multiple floppy disk distributions. They made handy rescue or tinkering environments, and they packed in a surprising amount of useful stuff. But a 1.x-series kernel was tiny by today’s standards, so how does a floppy Linux fare in 2025? [Action Retro] is here to find out.

Following a guide from GitHub in the video below the break, he’s able to get a modern version 6.14 kernel compiled with minimal options, as well as just enough BusyBox to be useful. It boots on a gloriously minimalist 486 setup, and he spends a while trying to refine and add to it, but it’s evident from the errors he finds along the way that managing dependencies in such a small space is challenging. Even the floppy itself is problematic, as both the drive and the media are now long in the tooth; it takes him a while to find one that works. He promises us more in a future episode, but it’s clear this is more of an exercise in pushing the envelope than it is in making a useful distro. Floppy Linux was fun back in 1997, but we can tell it’s more of a curiosity in 2025.
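The guide has the full recipe, but the broad strokes of a build like this look something like the sketch below; the options, commands, and bootloader choice are illustrative rather than the exact steps from the video:

# In the kernel source tree: start from the smallest possible configuration
make ARCH=x86 tinyconfig
make ARCH=x86 menuconfig             # hand-enable only what a 486, the floppy and an initramfs need
make ARCH=x86 bzImage -j"$(nproc)"

# In the BusyBox source tree: a statically linked build needs no shared libraries
make defconfig
sed -i 's/# CONFIG_STATIC is not set/CONFIG_STATIC=y/' .config
make -j"$(nproc)" && make install    # populates ./_install with the applets

# Assemble a bootable 1.44 MB image (SYSLINUX shown here as one bootloader option)
dd if=/dev/zero of=floppy.img bs=1k count=1440
mkfs.fat floppy.img
syslinux --install floppy.img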

Linux on a floppy has made it to these pages every few years during most of Hackaday’s existence, but perhaps instead of pointing you in that direction, it’s time to toss a wreath into the sea of abandonware with a reminder that the floppy drivers in Linux are now orphaned.

31 thoughts on “Linux On A Floppy: Still (Just About) Possible”

    1. I had tried out SuSE Linux 6.x back then, when it was still of German heritage.
      And it came on a big set of CD-ROMs. About 7 or 10 of them…
      But that was okay, because these CD-ROMs held the whole collection of packages, the whole repository! 😄
      So you had access to everything, without requiring any internet access!
      Sure, it was only a momentary snapshot and no longer up to date.
      But nowadays it’s a time capsule. These old Linux distros with their many CD-ROMs are fully independent.
      They don’t rely on servers that are now shut down.
      I really wish modern Linux distros were available as Blu-ray “ISOs”.
      So you would download a 25, 50 or 100 GB image once (for the major release) and have about everything you need, without relying on slow repository servers.
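      Something close to that actually exists for Debian, which still publishes full DVD/BD-sized image sets assembled via jigdo; a rough sketch (the URL is only a placeholder, not an exact path):

      sudo apt-get install jigdo-file
      jigdo-lite https://cdimage.debian.org/.../debian-<release>-amd64-BD-1.jigdo   # placeholder URL; the pieces come from an ordinary mirror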

        1. Yes, but I do imagine it’s not as user friendly.
          On a ready-made CD-ROM release, all of the included packages are known to be good/working, their dependencies are met and they have the correct permissions.
          Servers, by contrast, are always a bit of a moving target.
          Something might have been updated and broken every 14 days or so.

            1. In general, most packages have fairly wide compatibility and should have been tested upstream long before they hit an LTS release.

            Patch level updates that are usually safe to automate:

            sudo apt-get update
            sudo apt-get upgrade

            Version updates that, on rare occasions, may jump package versions and break some services:

            sudo apt-get update
            sudo apt-get dist-upgrade

            Something that is usually full of surprises if your system has a lot of customization, Desktop fragility, or older GPU drivers:

            sudo do-release-upgrade

            A clean install from USB to a new SSD every two years is usually wiser; import your old /home data once everything has proven stable. Keeping the old OS SSD intact during a major upgrade can be nice if something goes wrong two weeks later. And always bet that something will go wrong eventually.
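            The /home import afterwards is usually just an rsync from the old drive mounted somewhere, along these lines (device, mount point and user name are only examples):

            sudo mount /dev/sdb2 /mnt/oldroot
            rsync -aHAX --info=progress2 /mnt/oldroot/home/alice/ /home/alice/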

            The more obscure hardware tends to get nastier surprises. Desktops plus GPU drivers seem to have a lot more issues jumping major OS releases. CLI servers have a lot less surface area for upgrade bugs… but it does happen, and usually on a Friday for extra annoyance. We all learn this the hard way at some point. =)

      1. Problem was, though, I always seemed to be ordering a new set of CDs… I had a shelf full of them, as Linux was moving so quickly. So the internet did solve that problem, for better or for worse. I’ve since shredded all those old CDs.

        1. Back then a single CD held about 650 MB of data and the box shipped with 10 of them.
          That was about 7 GB of data and contained most if not all packages of the repository.
          Nowadays, a single BD can hold 25 GB of data and it’s no longer enough?
          That’s not exactly a good sign.
          Because unless an enormous number of new applications has been written,
          it means that modern Linux applications must have become bloatware.
          I mean, let’s see it this way: most popular, full-size applications such as GIMP or OpenOffice (ex StarOffice) have been available since the ’90s.
          And they did fit on the CD-ROM bundle of the past.

      2. I had SuSE 6.3; that was good, everything was there. I had very limited dial-up at the time and these CDs were very useful. What I remember is that the shipped kernel, 2.2.13, had a bug on unmounting floppy disks. It wouldn’t flush the buffers and you would be unable to mount any other floppy. Maybe there was a workaround, I just didn’t know, and compiling a new kernel was beyond my means. My machine would crash when compiling with optimizations, bad RAM it was, though it worked most of the time.

    2. Mine too. Downloaded over a modem, no less… Then hoping all the floppies didn’t have a problem being read after the fact…

      That said, I am not going ‘back’ to that. Prefer the ‘new’ load from a USB thumb drive.

      1. That said, I am not going ‘back’ to that. Prefer the ‘new’ load from a USB thumb drive.

        Funnily, in my experience, physical CD/DVD installation media have had better compatibility than USB pen drives most of the time.
        More often than not, booting an “ISO” (or UDF etc.) from a USB device failed while the real thing just worked (unless the optical disc had scratches or there was an old drive).
        For once I don’t blame Linux, though. The problem is a mix of bad BIOS support and bad bootloaders.
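        For reference, the usual way to put a hybrid “ISO” on a stick is a raw block copy, something like this (the device name is an example, and it overwrites the whole stick):

        sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress oflag=sync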

    1. There’s a review of QNX over at the Toasty Tech site.
      Please note its reasonable system requirements (386, 8 MB RAM)…
      http://toastytech.com/guis/qnxdemo.html

      By the turn of the century, QNX (QNX 6.1) was also used as a base OS for Amiga OS (AmigaXL for QNX).
      http://www.bambi-amiga.co.uk/amigahistory/emulators/amigaosxl.html
      https://www.amiga-news.de/de/news/AN-1998-11-00050-EN.html

      Years later, QNX continued to be used as an embedded OS in cars, industry and other use cases.
      Too bad QNX has remained niche. In my opinion, using it was less of a pain compared to other *nix systems.

    1. That’s what Linux was meant to be, that’s where it belongs. IMHO.
      The specs are fine and any classic network router based on a 486SXL or 486SX can run it (if RAM expansion is up to it).
      I just hope people in charge won’t forget that original purpose,
      while they continue to maintain all other “fun stuff” Linux ports for toasters, washing machines etc.

      1. Yep, desktops, workstations, laptops, servers, SBCs, supercomputers, etc… Linux fits very well in all these roles as well. Scale up, scale down. Load the software you want, or build the kernel and the rest of the system from scratch and then add to it. Pick a distro and DE that fits your workflow. Doesn’t get any better.

        1. It fits best in cases where no user interaction is required.
          Servers, mainframes, routers, fridges, etc.
          Anything that has the character traits of a cage from a user’s point of view.
          I often think that Linux is the kind of creature being sung about in Hotel California.

      2. Linux was NOT meant to be an embedded router OS, yet it has become that and so much more, all the way up to the top 10 supercomputers. The UNIX design philosophy allows it to scale very well. It’s been my primary desktop since the late 1990s.

        1. I think that the real usefulness of the freeware *nixes used to be running servers such as Apache.
          That’s what they were relevant for on a large scale (in the ’90s).
          Both Linux and BSD work best if they’re left on their own, without any user interaction. Never touch a running system.

          Also, professionally operated servers had a UPS, an uninterruptible power supply.
          And that was a requirement, because in the dreamworld Linux was living in there was never an unexpected power loss.
          That’s why, during an actual power loss, the EXT/EXT2 filesystem was toast (lost inodes etc).

          But again, servers had their own dedicated UPS to shield Linux from the cruel real world.
          Beginning with EXT4 and journaling, Linux reached the maturity of Windows NT 3.5x or NT 4 with NTFS (which has journaling).

          On Mac OS X (BSD-based), the Mac OS Extended filesystem (aka HFS+) got optional journaling beginning with Mac OS X Server 10.2.2 (2002) and Mac OS X 10.3 (2003).
          That was in the early 2000s, about 10 years after Windows NT 3.x and NTFS came around (’93; or take OS/2 and HPFS as an example instead, if you want).
          That was still a couple of years before Linux supported it.

          Anyway, better late than never. Beginning with EXT4 (usable between 2006 and 2008),
          Linux finally got a reliable filesystem that could be trusted under real-world conditions.
          That’s when it started to be sorta usable on the mainstream desktop, I guess.
          Too bad Linux never had the ability to boot from reliable known-good filesystems such as NTFS or HPFS in the 1990s.
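          For anyone curious, it’s quick to check whether a given ext volume actually has a journal on a modern system (the device names below are only examples):

          sudo tune2fs -l /dev/sda2 | grep -i journal   # EXT3/EXT4 report has_journal, plain EXT2 does not
          sudo mkfs.ext4 /dev/sdb1                      # journaling is enabled by default for EXT4
          sudo mkfs.ext2 /dev/sdb1                      # no journal: the pre-journaling situation described above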

          1. Before anyone complains: There also was EXT3, true. It’s from late 2001 or so.
            Not sure what to make of it, though. It feels like an early beta version of EXT4 to me (I’m no Linux die-hard).
            Also, the modern/improved EXT4 filesystem driver can mount EXT2 and EXT3 volumes, too. That’s good, I think. 🙂

  1. It’s hard to build an old kernel with modern GCC, but it’s super easy to use QEMU to install an ancient version of Debian on your supercomputer and use it to build an old kernel. For floppy Linux, that’s the direction I’d go.
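     Roughly along these lines (the image name, memory size and installer ISO are stand-ins, not a tested recipe):

     qemu-img create -f qcow2 olddebian.qcow2 4G
     qemu-system-i386 -m 64 -cdrom old-debian-installer.iso -hda olddebian.qcow2 -boot d   # install the old guest once
     qemu-system-i386 -m 64 -hda olddebian.qcow2                                           # then build the kernel inside it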
