Is MINIX Dead? And Does It Matter?

Is MINIX dead? OSnews is sounding its death-knell, citing evidence from the operating system’s git log that its last updates happened as long ago as 2018. Given that the last news story on the MINIX website is from 2016 and the last release, version 3.3, came out in 2014, it appears they may have a point. But perhaps the better question is not whether MINIX is dead, but whether it matters that the venerable OS appears to be no longer in development. It started as an example to teach OS theory before becoming popular in an era when there were no other inexpensive UNIX-like operating systems for 16-bit microcomputers, but given that successors such as Linux-based operating systems have taken up its torch and raced ahead, perhaps its day has passed.

No doubt many of you are about to point out that MINIX lives on, unexpectedly baked into the Management Engine core on Intel microprocessors, and while there’s some debate as to whether that’s still the case, you may have a point. But the more important thing for us isn’t whether MINIX is still with us or even whether it’s a contender, but what it influenced and thus what it was responsible for. This is being written on a GNU/Linux operating system, which has its roots in [Linus Torvalds]’ desire to improve on… MINIX.

Read more about the tangled web of UNIX-like operating systems here.

51 thoughts on “Is MINIX Dead? And Does It Matter?”

      1. I think software is “done” when 1) it has no more undesired bugs (it might have a few desired bugs left), and 2) the platform that it runs on is “done” and won’t change anymore.

        I think that with MINIX, both are the case. There are hardly any 16-bit processors anymore; everything is either 8-bit, 32-bit, or 64-bit. Also, MINIX is meant for general purpose computers (personal computers in the ’80s meaning of the word). But who still makes 16-bit general purpose computers? The 16-bit platform itself is fully “done”, I would say.

        16-bit general purpose computers are only built in hobbyist homes these days. Maybe MINIX can still find its way onto those, for educational purposes.

        Maybe someone can port it for the Mensch Computer, which is basically meant to be an educational platform. Couple the Mensch Computer with MINIX, and you have an educational computer that runs a UNIX-like Operating System, and can teach students Operating Systems basics.

        That way, the 6502/65816 and MINIX might keep each other alive. :)

        https://www.westerndesigncenter.com/wdc/Mensch_Computer.php

    1. Actually, it sometimes does need updates. There was a severe bug discovered in 2017 (IIRC), allowing for remote code execution on Minix systems (just checked it, it was 2015). Now, iAMT is built into many (all?) Intel chipsets, possibly making Minix the most popular OS nowadays.

      Even though most users are most probably not aware that they are running a Minix-based system.

      Now, the funny part is that the Intel chipsets which include the Minix OS (to run iAMT) are designed to run 24/7 as long as the power supply receives power (yep, even if the computer is turned off) and can completely control the CPU, regardless of which other OS you choose to install on top of your Minix-controlled machine.

      OTOH, despite Minix being so popular, the number of security issues discovered in the last decade is… pretty low. Heck, even the *BSD Unix OSes were struck by the Heartbleed “bug”! (I know the Heartbleed “bug” was part of a library, not the OS, but… let’s not get into too much trolling here)

      So, pretty solid, if you ask me.

    1. I agree with you, David. Minix 1.5 and 2 are very valuable tools for teaching. They can be understood with a single book, and they leave a lot of room for experimentation. I spent some time earlier this year finding university courses via old links and the Wayback Machine that contained lectures, homework assignments, exams and projects based on Minix. I also have two editions of the books.

      1. Unfortunately, it teaches some bad ideas, specifically that microkernels are somehow better than monolithic kernels. The supposed superiority of microkernels completely falls apart when you consider high performance scaling.

          1. There are tons of applications where performance is critical. How long do you want to wait before your facebook/reddit/twitter/ebay/amazon/google page is loaded?

          2. … for a book/course on design of operating systems, it is a huge omission not to talk about scalability and the inherent disadvantage that microkernels have in this area.

          3. “There are tons of applications where performance is critical. ”

            rather

            “There are tons of applications where we are critical of performance.”

            but – does anyone fix the software bloat and endless upgrade cycles?

          4. i think this is an interesting argument because it’s almost what i said (from the other direction) in my other comment below. microkernels do enforce a kind of elegance, but linux voluntarily has that same elegance without the microkernel (and has had it since the very beginning). if it comes down to elegance, it is just a question of whether the code is well-factored, whether independent segments have clearly defined interfaces.

            the thing a microkernel can have that monolithic can’t have is hard protection…ideally, a memory-overwrite bug in one module is isolated from the rest of the kernel and will have strictly limited consequences for security and stability. and that you really can’t achieve with a monolithic kernel. but that protection is what adds the task switching overhead that people complain about, and at the end of the day it doesn’t add up to much because if you can crash (for example) the filesystem module then you really have too much control over the system already even if the rest of the kernel is technically uncorrupted.

            but if you can tolerate the performance hit and want userspace filesystems, linux today supports that pretty seamlessly (see the little FUSE sketch at the end of this comment). it has all the power and all the downsides of a microkernel, if you want them.

            but the elegance argument really reminds me of the C vs C++ argument. i happen to work with two very large programs “A” and “B” that each implement the same functionality, and each recently switched from C to C++. “A” is well-factored, clean, readable, elegant, *beautiful*. “B” is not. when “A” switches to C++, it will be an effortless affair and even though i hate C++ i have to admit it probably won’t harm anything — they will use the same idioms they’ve been using, just with a slightly cleaner expression of them. when “B” switches to C++, it will be a nightmare…they will shoehorn their existing awful redundant idioms into obscure C++ syntax without fixing their underlying poor factoring decisions. what matters is the elegance, not the language.

            and on a totally different topic, those two programs are an excellent example of commenting style. “A” has what i consider a reasonable number of comments. but “B” has literally 1-3 lines of comments for every line of code, in the middle of functions! so where “A” might have a 10-line function, “B” has a 60-line function with 40 lines of comments and 20 lines of code. i personally think the “B” code would be more readable with only the 20 lines of code, but there is no denying that the 10 lines of clean bare code is way more readable than 60 lines of anything. almost all of the comments in “B” are either strictly redundant “if (x==NULL) /* check if x is NULL */” or exist just to try to make up for really really really bad factoring decisions.
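
            p.s. on the userspace filesystem point above: this is roughly the classic libfuse “hello world”, written against the old FUSE 2.6 API (FUSE 3 changed a few of the signatures), untested here and with all the names made up. but it shows how little it takes to run a filesystem as an ordinary user process on linux, with the kernel just forwarding VFS requests to this program:

            #define FUSE_USE_VERSION 26
            #include <fuse.h>
            #include <sys/stat.h>
            #include <string.h>
            #include <errno.h>

            static const char *hello_str  = "hello from a userspace filesystem\n";
            static const char *hello_path = "/hello";

            /* report "/" as a directory and "/hello" as a small read-only file */
            static int hello_getattr(const char *path, struct stat *st)
            {
                memset(st, 0, sizeof(*st));
                if (strcmp(path, "/") == 0) {
                    st->st_mode  = S_IFDIR | 0755;
                    st->st_nlink = 2;
                    return 0;
                }
                if (strcmp(path, hello_path) == 0) {
                    st->st_mode  = S_IFREG | 0444;
                    st->st_nlink = 1;
                    st->st_size  = strlen(hello_str);
                    return 0;
                }
                return -ENOENT;
            }

            /* the root directory lists just the one file */
            static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                                     off_t off, struct fuse_file_info *fi)
            {
                if (strcmp(path, "/") != 0)
                    return -ENOENT;
                fill(buf, ".", NULL, 0);
                fill(buf, "..", NULL, 0);
                fill(buf, hello_path + 1, NULL, 0);
                return 0;
            }

            /* serve reads of /hello straight out of the in-memory string */
            static int hello_read(const char *path, char *buf, size_t size, off_t off,
                                  struct fuse_file_info *fi)
            {
                size_t len = strlen(hello_str);
                if (strcmp(path, hello_path) != 0)
                    return -ENOENT;
                if ((size_t)off >= len)
                    return 0;
                if (off + size > len)
                    size = len - off;
                memcpy(buf, hello_str + off, size);
                return size;
            }

            static struct fuse_operations hello_ops = {
                .getattr = hello_getattr,
                .readdir = hello_readdir,
                .read    = hello_read,
            };

            int main(int argc, char *argv[])
            {
                /* hand control to libfuse's event loop; it takes the mountpoint from argv */
                return fuse_main(argc, argv, &hello_ops, NULL);
            }

            build it with something like gcc hellofs.c -o hellofs $(pkg-config fuse --cflags --libs), run ./hellofs /some/empty/dir, and cat /some/empty/dir/hello is answered by this process instead of by in-kernel filesystem code. that’s essentially the microkernel “filesystem server” idea bolted onto a monolithic kernel, task-switch overhead and all.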

          5. “a memory-overwrite bug in one module is isolated from the rest of the kernel and will have strictly limited consequences for security and stability. ”

            Not necessarily true. A single thread can overwrite its own memory, and then degrade its own stability and security. If this isn’t caught, because the data is not checked (or is checked, but plausible), and this thread performs services for others, the bad data can leak to other parts of the system. Imagine a multi threaded file system, where a single thread starts writing corrupted data to the disk, for instance.

            Also, if you have a multi threaded system, where a single thread happens to be the sole owner of a new piece of information, and that thread crashes, the information is lost. A failing thread can be restarted, but it won’t be restarted from the point where it went wrong, so there will still be a loss of information and consequences for security/stability.

            Since a multi threaded system is more complicated due to lack of coherency, programmer bugs will be more frequent, and total system security could very well be less.

          1. That makes sense though. Monolithic wins for high performance scaling of *shared resources*. As soon as you reach a point where two threads can work on their own resources (like a packet in memory), it makes sense to give them their own memory space. In most cases, however, these threads still need access to shared resources, such as network cards, buses, memory, interrupts, and that part still is best done with a monolithic kernel as a base.

  1. in all seriousness, i don’t have a real opinion on Minix…when Minix vs. Linux was a practical concern in my life, i ruled out Minix because at the time it didn’t really support 386 protected mode. as for microkernel vs monolithic kernel, i’m not really impressed by the argument — i think at the end of the day the real cost is organized codebase vs chaotic codebase, and Linux mostly does well by that metric. and for education, honestly i think reading linux-0.01 from cover to cover is as approachable as any other educational OS, and fractionally more relevant to the real world.

    but oh like 25 years ago i had a petty response to a rant that Tanenbaum had written against Linux which struck me as the height of ivory tower snobbery. an example of conduct that was both counter-productive for students and for users. so i feel some peevish delight today to see that modern Minix aficionados are using Linus’s git. :)

    1. My opinion was exactly the reverse back then.
      Back in the 90s, my desktop PC was a higher end 80286 with lots of RAM, a 16-Bit soundcard, modem, Laserjet, CD-ROM etc.
      It was fully supported by DOS/Windows 3.1 and Minix, but not Linux.
      Mr. Penguin never bothered to create a proper 80286 build of the Linux kernel.

      And about that Tanenbaum rant…
      How would you feel about it if, say, a disrespectful student messed up your brainchild until it looked like a total perversion of what it used to be?

      Personally, I still see no elegance in Linux. It’s bloatware to me. It feels like Windows 95 back then, with its Active Desktop and trial versions. *sigh*🙄

      1. Personally, I think that the success of Linux wasn’t because of technology, but license.
        The GPL deserves all the respect, I think.
        Linux was successful, because it was a free pile of drivers, with thousands of minions working free of charge to write them, spending their valuable life time.
        It’s their sacrifice that made Linux so popular, I think. Linux alone is just a slowly moving, memory wasting behemoth. Again, in my opinion..

        1. totally agree about the license. i would say linux is a natural consequence of gcc, which itself is an example of gpl victory.

          i think you have got the finances a bit muddled though. even in the 90s, a lot of linux kernel development was by paid developers. by the end of the 90s, a lot of chip developers were starting to have their own in-house employees contributing to the drivers, and that is the overwhelming majority of linux driver development today. core characters like Alan Cox and Linus Torvalds and David S Miller did put in a ton of “volunteer labor” at the beginning, but it led to great financial opportunities, more like a downpayment on a career than charity. core developers are now mostly employed by redhat or linux foundation.

          a lot of linux’s value comes from drive-by patches from people like myself who commit small patches that mostly fix bugs or take into account unusual combinations. and we are legion and we are not necessarily paid to work on linux (though i think a lot of us are working on problems we were paid by our employers to confront). but i don’t think any of us are uncompensated either — the ability to fix the bug you run into is a blessing bestowed upon us.

          really i would say what is amazing about the GPL is that it has taken all of this for-profit labor that would have happened anyways (chip-makers are always writing drivers) and transformed it into something that enriches all of us, most especially with that blessing of being able to fix our own problems.

  2. “Is MINIX Dead? And Does It Matter?”

    Yes, it does matter. Most definitely. It matters not because MINIX is valuable as an operating system per se, but for the very simple reason that MINIX is, arguably, one of the very best ways to learn how to design and build operating systems. Period (“Full stop”, for you of the British persuasion).

    Andrew Tannenbaum wrote MINIX, and the accompanying textbook, “Operating Systems: Design and Implementation” in order to teach students the basics of operating systems; in order to teach how to write operating systems.
    When I created and taught a course on Operating System Design years ago, I found, as a lot of others apparently have, that there was (is?) no better, more elegant way to teach this subject. Tannenbaum created a classic for the ages; just ask Linus Torvalds (an aside: why do all the very best computer scientists’ efforts come from the Nordic countries? And, yes, I do know where Tannenbaum was born).
    Andrew Tannenbaum has implied that he is seriously curtailing his involvement in keeping MINIX robust; that he no longer has the “fire” to work on it which he once had.
    If someone, or some group, does not step up to keep MINIX alive, well, and prospering, it will be a loss to the entire Computer Science establishment.
    The one bright spot is that we will always have the latest MINIX from Andy Tannenbaum and his students, and, just as importantly, “Operating Systems: Design and Implementation”.

    Get the book; you can’t do any better (ps: Tannenbaum is right–microkernel is the way to go).

    Just one guy’s opinion…

    1. Well, if there are many people as passionate about Minix as you, then once the news that it’s “dead” gets around, I’m sure people will step forward to keep it alive. Maybe this obit is a good thing?

      1. “…maybe this obit is a good thing?”
        Maybe (and thanks for the compliment); one can always hope.
        It would be nice if that quotation from Mark Twain applied here:

        “Reports of my death have been greatly exaggerated.”

    2. It’s not really a good teaching tool, because it projects the wrong message about microkernels. A lot of people read the book, and become convinced that microkernels are the way to go (myself included when I first got the book).

      The reality is that microkernels only provide a nice, clean design when doing a toy system such as Minix. They don’t scale for real world applications, and they cannot be adapted to scale. Even the toy Minix sucked when you had a machine with both floppy and hard drives. The single threaded file system would patiently wait for the slow floppy to write a sector, before paying any attention to another task that just needed a quick read from the hard drive.

      Microkernels are only easy to design if you’re okay with tasks waiting for the floppy drive to finish even when they don’t need the floppy drive. If, instead, you want to optimize the system so that one task can access the floppy drive while another task simultaneously gets access to the hard drive, then you’ll find the complexity of microkernels exploding. Tanenbaum completely ignores this problem.
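
      To make the floppy-versus-hard-drive point concrete, here’s a stand-in sketch in plain C (nothing from actual MINIX source; the message struct, device names and latencies are all invented for illustration). A single-threaded server takes one message at a time and runs it to completion, so the quick disk read sits behind the slow floppy write even though the two requests have nothing to do with each other:

      #include <stdio.h>
      #include <unistd.h>

      enum dev { DEV_FLOPPY, DEV_HDD };
      struct msg { enum dev device; int sector; };

      /* stand-in for a blocking call into a driver; the sleep models floppy latency */
      static void handle(const struct msg *m)
      {
          if (m->device == DEV_FLOPPY) {
              sleep(2);                                            /* floppy: painfully slow */
              printf("floppy write of sector %d done\n", m->sector);
          } else {
              printf("hdd read of sector %d done\n", m->sector);   /* hdd: quick */
          }
      }

      int main(void)
      {
          /* two pending requests from two unrelated processes */
          struct msg inbox[] = {
              { DEV_FLOPPY, 7  },   /* arrives first */
              { DEV_HDD,    42 },   /* unrelated, but still has to wait */
          };

          /* the single-threaded server loop: one message at a time, start to finish */
          for (unsigned i = 0; i < sizeof inbox / sizeof inbox[0]; i++)
              handle(&inbox[i]);

          return 0;
      }

      Making the server multithreaded, or splitting it into one server per device, removes the stall, but it brings in exactly the shared-state and message-ordering complexity that the simple single-threaded design was supposed to avoid.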

      1. I challenge you to find a better way of education than teaching the student how to make something fully work, and then letting them find out the limitations all by themselves.

        Nothing is more inspiring than trying to find out why something that should work great in theory only works reasonably well in practice.

        You already learned what works well and why it should work well. And that makes you able to see through all the things that do work and zoom in on that one or two things that don’t work, AND understand why, without the need for further lecturing.

        MINIX is perfect. Because it taught you to understand all by yourself why the microkernel idea was maybe not such a good thing after all.

    3. Just that his name is Tanenbaum, with one n (or 2, if you count all the n’s in his name)…

      And also, we don’t consider England, the Netherlands and Germany ‘Nordic’ in Europe. That title goes to the Scandinavian countries.

      I do wonder why you are dragging ancestry into your discussion…? What does it matter where a computer scientist comes from?

      It’s simple: digital computing was invented by the British. The Dutch and Germans brought it to the point where it became commercially useful. The Americans invented the transistor and the integrated circuit (although I would say that the Austro-Hungarians invented the transistor, in the same way that Charles Babbage invented the computer: theoretically).

      And the Americans subsequently invented a ‘personal computer’ to be able to sell as many transistors as they could. :)

  3. Doesn’t matter — nobody cares.

    Besides that, one of the best things that can happen for an “academic” operating system intended for study and learning is that it stays static. So in this case, being dead is a good thing.

  4. It’s already perfect. Anything more you want to do with MINIX is left as an exercise to the reader. It has more meat to it than Xv6 and is quite a bit deeper, but no harder to program for (perhaps easier to debug).

  5. It was written above,

    “…but oh like 25 years ago i had a petty response to a rant that Tanenbaum had written against Linux which struck me as the height of ivory tower snobbery. an example of conduct that was both counter-productive for students and for users. so i feel some peevish delight today to see that modern Minix aficionados are using Linus’s git…” [full responsibility: all emphasis is mine];

    …and there is still, to this day, a large contingent of people who believe that Torvalds stole from MINIX (and Tanenbaum) to create Linux. I suggest that you seriously read and understand all of the following. I am certain–and hope–that the graciousness and respect for Linus Torvalds (and vice-versa), from the man from whom Torvalds is supposed to have stolen, will not be lost on you.
    —————————————————————————
    Open Sources: Voices from the Open Source Revolution
    1st Edition January 1999
    Appendix A
    The Tanenbaum-Torvalds Debate
    https://www.oreilly.com/openbook/opensources/book/appa.html

    Department of Computer Science
    Vrije Universiteit
    Some Notes on the “Who Wrote Linux” Kerfuffle, Release 1.5
    https://www.cs.vu.nl/~ast/brown/

    Department of Computer Science
    Vrije Universiteit
    Andrew S. Tanenbaum’s Home Page
    https://www.cs.vu.nl/~ast/

    Department of Computer Science
    Vrije Universiteit
    Tanenbaum-Torvalds Debate Part II
    https://www.cs.vu.nl/~ast/reliable-os/

    OSnews
    Introduction to MINIX 3
    https://www.osnews.com/story/15960/introduction-to-minix-3/
    ——————————————————————————–
    I sincerely hope this enhances your knowledge and appreciation of MINIX, and Andrew S. Tanenbaum–as well as your knowledge of the origins of Linux.

    Regards…

    [p.s.: Tanenbaum is right: a microkernel OS is better]

    1. i’m sorry, as i said, it was a petty feeling that is of no consequence and i’m not really going to bother to re-examine the characters involved. but it is heartening to me to see that there is still a faultline of people who will sign every message with stuff like “Tanenbaum is right: a microkernel OS is better” so thanks! that tenor of debate is specifically what i was responding to and i’m honestly glad to see some shadow of those days.

  6. Correct me if I’m wrong, but MINIX is more an educational tool than a fully blown commercial product, and it should be judged as such. I have heard that it was used in some production environments, but the scale is totally different. So to answer the question “is it dead?”, we first need to check “is it still used by Andy T. to teach OS design and implementation?”.
    Was MINIX ever ported to RPi? I remember trying to find it when I bought my first RPi, just to discover that there was no interest in doing so.

    1. You are correct; MINIX was designed to be, and is, a tool for teaching the design of operating systems, and is not a “fully blown commercial product”. Tanenbaum, through design decisions not affecting its unequivocal use as an outstanding teaching tool, never intended MINIX to be a commercial venture.
      The answer to “is it still used by Andy T. to teach OS design and implementation?”: AST retired from teaching in 2014; concurrent with that, the implication was that his MINIX efforts would not continue (a major consideration for that decision, I’m certain, is that he no longer had one of his most valuable MINIX-development resources: his graduate students). There does not appear to be any entity which has come forward to take charge of MINIX’s continuation.
      As I have only a passing knowledge of the Raspberry Pi, someone else will have to answer your question regarding the porting of MINIX to that machine. My guess is that the answer would be “No”, but I suggest you contact the Raspberry Pi community (Raspberry Pi Trading Ltd, Raspberry Pi Foundation, and any forums and user groups) directly for a definitive answer.
      Regards…

  7. I’d just like to interject for a moment. What you’re referring to as GNU/Linux, is in fact, Linux, or as I’ve recently taken to calling it, the Linux operating system. GNU is not an operating system unto itself, but rather another free component of a fully functioning Linux system made useful by the Linux kernel, shell utilities and vital system components comprising a full OS as defined by POSIX. Many computer users run a modified version of the Linux system every day, without realizing it.
