Ask HN: Is Minix dead? No commits since 2018 (minix3.org)
221 points by haunter on March 14, 2021 | 142 comments


Pretty much, yeah.

https://groups.google.com/g/minix3/c/nUG1NwxXXkg

Short story as I watched it: Tanenbaum got a couple grants totalling several million Euro. He hired some of his grad students to work on making it a production system. They made a lot of improvements, research-wise, but they also sort of made a mess of things. What they didn't do is set up any reasonable project infrastructure (e.g., a bug tracker) that would allow Minix to grow into a healthy, thriving project at a size beyond those who were being paid to work on it, and to outlive their interest in it. The userspace was largely replaced by NetBSD's, and that involved clang replacing ack, so builds went from ~10 minutes to 3+ hours. Lots of small stuff like this, adding up to a project where no one is willing to exercise ownership and few outsiders are equipped to deal with it in its current state (even if interested).

The best thing that you could do if you want Minix to be a thing is to prepare yourself for a deep dive and not look to "the community" with the belief that there is some consensus/approval to be had. Take charge in a fork of your own and make it the project you want it to be. If you wait for anyone else to take ownership, you're going to be waiting for a long time. If Minix is "dead", it's because poor ownership + the bystander effect killed it.

Shades of excuses #1, #2, and #3 from jwz's "nomo zilla" are pretty appropriate here as well.


The change of userland to NetBSD came just before I got involved as a contributor, but to characterize it as making a mess of things isn't very fair.

It was done because the MINIX3 project felt it didn't have the manpower to maintain a compiler toolchain, base userland and packages on top of the actual operating system. Which, in retrospect, was spot on. Having a modern toolchain also enabled the ARM port, as well as work such as live updates of almost every driver and service without interruption, by instrumenting LLVM to generate all the required information.

This also brought vastly increased compatibility when compared to other Unix-likes, enabling a fair amount of pkgsrc packages to build and run without patches. Reinventing the wheel from scratch as with SerenityOS is fun, but not very helpful if you want to run other people's software, especially if all you have is a C89 compiler (ACK).

There is a bug tracker in the form of GitHub issues that came as part of the migration to GitHub and a Gerrit server, which also happened before I got involved.

Now, I'm not Andy Tanenbaum nor one of the MINIX3 gurus and I certainly don't have the full picture myself, but merely saying the project slumbered because of poor ownership isn't actually helpful to learn lessons from it. Software project management is anything but easy, especially if you lack paid contributors.


As somebody who maintains a fork of MINIX2, just for private pleasure, I know the limitations of ACK. In the end, you either need gcc or clang if you want to compile modern source code.

I doubt that clang would be a reason not to volunteer for MINIX3. That said, I personally don't like what MINIX3 has become.

In the end, if you want to have a small, volunteer-based software project, you need somebody with a vision who is also able to contribute quite a lot of time to the project (and possibly also quite a bit of resources).

The MINIX3 project didn't have such a person in the community, so the transition from paid developers to a community-supported project failed.


Would tcc or pcc work?


It would've provided C99, but these days an operating system needs, among other things, a proper C++ compiler to support running modern software. Besides, the switch to NetBSD userland also brought build.sh and easy cross-compiling, which was a huge benefit for programmers.

As for the MINIX2 fork I assume it's Minix-vmd. MINIX didn't switch to ELF before the 3.2.0 release, so you either need a toolchain that can output the traditional a.out file format or add ELF support to MINIX2. Not impossible, but probably more work than maintaining a MINIX2 fork for private pleasure warrants.


It is indeed Minix-vmd. The current toolchain is way too old. Most of the software I want to run is written in C, often C89 or C99, so it doesn't cause a lot of issues at the moment.

I think adding minimal ELF support would not be a lot of work.


Indeed, if you stick to ET_EXEC then it's only marginally more complex than a.out. I doubt it would take more than 150 lines of code to pull it off and it's a much saner executable file format for the 21st century. I'm assuming there's no dynamic linking support here and that you only care for a basic ELF loader.
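
For illustration, something in roughly this shape would do. A minimal sketch, assuming a statically linked 32-bit ET_EXEC binary with no dynamic linking; read_at and map_segment are hypothetical helpers standing in for whatever the kernel's exec path provides, and I'm using the <elf.h> names from Linux/NetBSD rather than whatever a MINIX2 tree would call them:

    /* Minimal sketch of an ET_EXEC loader: check the header, then map
     * every PT_LOAD segment at its fixed virtual address and zero the
     * .bss tail. No dynamic linking, no relocation, no sanity limits. */
    #include <elf.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/types.h>

    /* Hypothetical helpers provided by the host kernel's exec path. */
    extern int   read_at(int fd, void *buf, size_t len, off_t off);
    extern void *map_segment(uintptr_t vaddr, size_t len);

    /* Returns the entry point, or 0 on failure. */
    uintptr_t load_et_exec(int fd)
    {
        Elf32_Ehdr eh;

        if (read_at(fd, &eh, sizeof eh, 0) < 0)
            return 0;
        if (memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0 ||
            eh.e_ident[EI_CLASS] != ELFCLASS32 ||
            eh.e_type != ET_EXEC)
            return 0;

        for (int i = 0; i < eh.e_phnum; i++) {
            Elf32_Phdr ph;
            off_t off = eh.e_phoff + (off_t)i * eh.e_phentsize;

            if (read_at(fd, &ph, sizeof ph, off) < 0)
                return 0;
            if (ph.p_type != PT_LOAD)
                continue;
            /* ET_EXEC: the segment goes at p_vaddr, no relocation needed. */
            void *dst = map_segment(ph.p_vaddr, ph.p_memsz);
            if (dst == NULL)
                return 0;
            if (read_at(fd, dst, ph.p_filesz, ph.p_offset) < 0)
                return 0;
            memset((char *)dst + ph.p_filesz, 0, ph.p_memsz - ph.p_filesz);
        }
        return eh.e_entry;
    }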

Do you have pointers to online resources on Minix-vmd and a public source tree repository somewhere? minix-vmd.org seems down... No guarantees, but I do seem to be the type that does random drive-by contributions to open-source software.


Ah, it's sad that minix-vmd.org is down. It is not maintained by me, but I'll see if it can be revived.

That said, the sources on minix-vmd.org are quite old.

It seems the last time I put something online was also already a while ago: http://stereo.hq.phicoh.net/Minix-vmd/images/


> that involved clang replacing ack, so builds went from ~10 minutes to 3+ hours

Are those real numbers? Yikes. I mean, I don't disbelieve it, it's just a reminder about how far we've fallen. Modern compilers are indeed amazing tools, but those last 8-10% of features (performance, safety, etc...) have come at an absolutely staggering price.

And clang isn't really that big an offender, being only a tiny bit slower than GCC. C++ and Rust are a solid order of magnitude slower still.


Easily believable. Keep in mind that clang is part of the userspace that clang is building. Moving to clang means that instead of building ack, you're building clang.

Plan 9 is similar: a full system rebuild takes about 45 seconds on my hardware. A clang build (and just the clang build) on similar hardware takes about 2 hours.


Yes, I miss the days when I could make changes to the system and recompile in under a minute on Plan 9. Such a distraction-free programming environment.


Do you always need to rebuild everything? In FreeBSD, for example, you can do eg “cd bin/ls && make all install”; takes seconds.


I don't -- because they never fully went away. I'm typing this on a 9front system right now.


9front still works.


Are there any good articles on the reasons behind some of this stuff? Like beyond performance bit-twiddling, are there issues due to a lot of extra layers for more pluggable infrastructure?

I just don't get how the gulf can be this wide on something so many devs are clearly frustrated about.


Optimizations play a big part.

Modern optimizing compilers are solving lots of hoary computational graph problems, etc., behind the scenes.


What compiler does plan9 use?


It uses a homegrown C compiler; the Go compiler is actually derived from it.


As somebody who used to program on the Amiga, using SAS C: GCC is/was terrible, speed-wise. We're talking a 10-20x slowdown. Sounds like ack is on the same level as SAS C, and clang is at the same speed as GCC.

This had NOTHING to do with

> those last 8-10% of features (performance, safety, etc...) have come at an absolutely staggering price

SAS C had better generated code and more features.

It was purely a question of architecture - SAS C was architected for speed, and actively used caches (especially precompiled header files) to avoid re-doing work. GCC didn't, and I've seen no hint of that appearing in GCC since.


GCC has had precompiled headers for years. They don't really help in C, because the time spent parsing headers isn't a significant chunk of runtime in practice. They can be useful in C++, as it's common to include massive amounts of templated code in headers, but they still only buy you 20% at best.


For speed-architected C compilers/linkers, lexing time is typically a very significant part of compilation time. The Unix compilers typically haven't been speed-architected, instead focusing on simplicity and ease of porting.


But do you recompile the tools every time you recompile the kernel? I don't understand the big deal if it's a once a week thing? Or even once a month? Are you changing the compiler/toolchain constantly as you change the kernel? That seems counter productive.


> He hired some of his grad students to work on making it a production system... They made a lot of improvements, research-wise, but they also sort of made a mess of things

Yeah, software development is a difficult profession, not a thing a grad student in CS can manage. But code written in academia is, for the most part, garbage.


True that.


If that’s how the grants panned out, that makes this flip-sounding comment[0] from BSDCan all the more painful to hear. Minix is conceptually interesting, but if the mismanagement ran deep, I guess that’s why it couldn’t capitalize on its simplicity alone (no 64-bit processor support? No USB?)[1]

[0] https://youtu.be/jMkR9VF2GNY?t=280

[1] https://en.wikipedia.org/wiki/Minix_3


I disagree that Minix is conceptually interesting. It's a half-way house between monolithic kernels and microkernels with the disadvantages of both and none of the advantages.

I'm a big fan of microkernels; I think Tanenbaum had it right in his spat with Linus, but at the same time Minix's implementation is very far from what could have been a fantastic little OS.


How is it a half-way house? Genuinely interested and curious, not snarking.


Because it is 'not quite a microkernel'. For instance, many things that would be processes in a real microkernel are built-ins in Minix, which makes them impossible to replace at runtime; nor can the kernel function without them. I get it, these are hard problems to solve, especially when bootstrapping, but it saddled Minix with a bunch of heritage that it has not been able to shake.

Posix compliance on the one side and running a true microkernel on the other is a hard problem, and after all the real goal of Minix was not to 'be the best microkernel' but to simply provide an OS for study purposes to students at VU and other universities.

If you want to see what an ideal microkernel looks like take a look at the QnX architecture, that's much closer.


>Posix compliance on the one side and running a true microkernel on the other is a hard problem

>If you want to see what an ideal microkernel looks like take a look at the QnX architecture, that's much closer.

Have you looked at Genode + seL4?


Yes, it's been a while though (a decade?), since I last looked at it. I will have another look to re-familiarize myself and to see what they've been up to, thank you for the pointer.


> For instance, many things that would be processes in a real microkernel are built-ins in Minix

Can you give some examples?


Just one example: the Minix kernel is 'root aware' which makes it hard - or even impossible - to switch out the root file system while the OS is running. The same goes for the process scheduler/interrupt handler, which is a 'hard' part that can never be upgraded while the system is running, and is responsible for a fair amount of latency on the way from interrupt to the actual handler in the device driver (and not the stub). Device drivers that cannot handle interrupts directly are going to be (much) slower than device drivers that can handle the interrupts themselves.

By the way, my Minix knowledge is somewhat dated, I always thought that it had huge potential if the 'holy houses that Andrew built' would have been allowed to be pushed over, even if that would come at the price of POSIX compatibility, which as far as I'm concerned is overrated if it is the thing that holds us back from having an ultra reliable field upgradeable operating system.

The big problem with POSIX is 'fork', it more or less assumes that you're looking at UNIX and fork was a kludge all along that just happened to solve a couple of nasty little problems that have to do with file descriptors. It's one of those 'leaky abstractions' that end up haunting you forever if you do not address them immediately and decisively. But because POSIX compliance is what got Minix established in the first place it became very hard to let go of that. Plan 9 suffers from the same issue by the way.

https://en.wikipedia.org/wiki/Fork_(system_call)
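
To make the file-descriptor angle concrete, here's a minimal sketch (plain POSIX C, error handling omitted, /etc/hostname is just a stand-in file): after fork() the child gets a copy of the parent's descriptor table, but both copies refer to the same open file description, so they share the file offset. It's exactly that kind of implicit sharing that a non-UNIX design has to faithfully emulate if it wants POSIX compliance.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);
        char c;

        if (fork() == 0) {      /* child: consume one byte and exit */
            read(fd, &c, 1);
            _exit(0);
        }
        wait(NULL);
        read(fd, &c, 1);        /* parent: the shared offset has moved to 1 */
        printf("parent read the byte at offset 1: '%c'\n", c);
        return 0;
    }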

I'm all for reliability over performance, unfortunately the rest of the world seems to disagree with me.


> Shades of excuses #1, #2, and #3 from jwz's "nomo zilla" are pretty appropriate here as well.

Interesting read, thanks for sharing.

jwz.org/gruntle/nomo.html

edit: JWZ doesn't like HN, copy and paste the link


For all the first time clickers to the above website, you probably want to copy and paste the link rather than clicking.


Thanks for pointing it out, edited my reply.


(For others: the above groups.google.com link is 2020, more recent than the 2018 in the HN post title.)


It may sound drastic, but why not just roll it all back if it was that bad?


Presumably the remainder of the netbsd userland relies on features not supported by ack.


I would support a Patreon for someone to work on it. I wonder how many would?


Money is often the issue, but if they had multimillion grants then that isn't the issue here.


They had one such grant, many years ago.


You can dump all the money in the world into a project, but if there is zero ownership among those involved, including top leadership, then that money simply goes to waste.


From the description, I don't think the problem was directly money related.


NetBSD userland can be built with GCC or clang so it could be an option to use GCC for Minix.


Yeah, and should you decide to make a fork, make it paid and make some money out of it. Take some lessons from the dhcpd guy and do not work for free. Otherwise most likely Amazon will copy the software, spin it into some service and not contribute a single line of code or a single cent.


Hmm, I thought Linus famously used no project infrastructure: no project management, no bug tracker, just emails and patches. Linux hasn't suffered.

I'd wager Minix suffered from its original license (not really open source or "free" software) and Tanenbaum not being an egomaniac like Torvalds (for better or worse).


Linux certainly has project management, bug tracking, and lots of infrastructure. What it famously rejected were inadequate centralized systems that break the way kernel developers and maintainers actually work. Before the tools were there, there already was a large human-operated distributed management system that ran on top of email. A lot of Linux project operations are still a large semi-automated distributed management system that runs on top of email. It's gotten somewhat more centralized now, as a single source of truth is very useful for tracking bugs, but just because there's a single root to the tree of maintainers doesn't mean it's a project run by Linus alone, and just because it's not using Jira or GitLab or whatever the flavour of the day is doesn't mean it has no project infrastructure.


But Linus is still in total charge of Linux and we have yet to see how it works out without him. And with Minix, it seems Tanenbaum is not really in charge anymore?


That's not really true, even if he is in a sort of BDFL role. There are maintainers who handle most of the day-to-day decision-making, and there's enough money involved that you can be pretty sure Linux would keep on going with or without any single individual.


This. You even have people other than Linus doing the final merges, in the form of LTS (security/bugfix) updates.


Former MINIX3 contributor here, I'd like to share my perspective.

While others have noted that grant money has run out, for me MINIX3 is only as good as its weakest link: its microkernel has a really dated design. Imagine taking 1980s Unix and gutting it until you have a microkernel: it has a fork() primitive oddly enough, it's not SMP-safe, it's 32 bit only, processes only have one thread... It's the one area of the OS that hasn't changed that much since 1987.

So while there has been some very good research done on it (https://wiki.minix3.org/doku.php?id=publications, especially of note reliability in face of drivers and services misbehaving or crashing as well as live update of almost all drivers and services without interruption of service), it cannot take advantage of modern 64 bit, multi-core processors.

Furthermore, the second weakest link is driver support. Running on bare metal is possible but vintage hardware is best suited for it because of the limited hardware support. Especially problematic is the lack of USB support on x86. NetBSD's rumpkernel would have been a potential in-tree solution to fix this, but sadly it has never been done.

Anyways, these days the Zircon microkernel seems extremely promising. The design seems well thought-out; the object/handle/IPC mechanisms are especially interesting because they're rather unlike anything else being done at this point that I'm aware of, and processes can only interact with the rest of the system through handles, so it's trivial to sandbox things.

As for me on OS development, right now I'm having fun hacking on SerenityOS. It's a sin for a micro-kernel guy like me, but who's to say I can't have a little bit of fun on the side?


Wasn't that kind of thing an intentional part of its design?

My understanding was that Tanenbaum made it as a teaching tool and always resisted attempts to modernize it too much, lest it become too complicated for students to understand. The lack of SMP support may well be one of those things.


It's part of the legacy of an educational operating system from 1987 that initially targeted 8086 systems with 640 KiB memory and two 360 KiB floppy drives. The teaching focus was deemphasized after the release of MINIX 3.1.0, so the system did gain features like VFS and paged virtual memory after that. There was an early version in the 3.1.x era that had experimental SMP support in the form of a BKL (big kernel lock), but it bit-rotted a long time ago. Alas, the microkernel never received the rewrite it needed to bring it into the 21st century.

That is not to say that MINIX ever was just a toy OS, it was self-hosted from the first publicly released version if I'm not mistaken. But it takes much, much more than that to be a daily use OS nowadays.


In my opinion, the problem with MINIX3 is that as a product it lacked focus. MINIX2 was excellent for education. MINIX3 led to great research. But if you want to maintain a software product with a small team, you have to have a clear idea of what features are essential for your product and then make sure those have excellent support.


I'm very much in agreement with this comment, it is a fair assessment of Minix' weaknesses.


Zircon... Another Google-product... No thanks!

What's your opinion on Genode (on seL4)?

(redox-os seems to be heading the way of the dodo as well...)


I'll admit I've never played with or looked into Genode, so I can't meaningfully comment on it.

Regarding seL4, I haven't played with it but I did read their whitepaper. I believe they are setting the gold standard for microkernels in reliability and correctness. Not many systems can claim to have a formally verified trusted computing base covering not only their source code, but the binary artefacts too. This is the closest kernel project I know of that can potentially mitigate attacks as described in the paper "Reflections on Trusting Trust" by Ken Thompson.

That is not to say that their approach is fool-proof. Like any proof, they make assumptions, which in this case is that the base hardware (CPU, MMU, memory...) does what its specification claims and nothing more; the long list of errata in Intel CPUs, as well as Spectre, Meltdown, rowhammer, the Intel Management Engine debacle and others, showed that it definitely doesn't. Not that other hardware vendors are immune, by the way.

Their formal proof is also limited to the microkernel. So while they claim to guarantee that userland processes can't compromise the microkernel nor compromise other userland processes through the microkernel (which does have its uses in embedded systems requiring high reliability), it has no bearing on what the userland does. Heartbleed wouldn't have been stopped by seL4, for example, not to mention social engineering, evil maid attacks, DMA attacks, attacks on packaging systems (I lost count of the number of NPM modules that were compromised over the years), or plain user stupidity.

In a way, "Reflections on Trusting Trust" by Ken Thompson has never been more relevant than today. On a typical Linux desktop operating system, we effectively blindly assume that hundreds of millions of lines of code, taken from tens of thousands of code repositories from the Internet, written in mostly unsafe languages (or languages whose interpreters/VMs are written in unsafe languages), running on a monolithic kernel with tens of millions of lines of C, running on hardware that is effectively a big black box, do what they're supposed to do and nothing more.

Even if you do audit stuff, a human can't possibly audit everything from the LLVM compiler to your web browser, not to mention we are effectively emotionally-driven, tribal-minded, chemically-influenced, prejudiced, illogical and error-prone ape brains and anyone pretending to be a perfect Vulcan is either delusional or lying.

Not to mention, I just read their whitepaper and thought to myself "yeah, sounds good". I haven't actually audited seL4, their methodology and their proof, so... As the meme says: everything is fine?


Before I trigger panic attacks, I do have to note that humans abstract away reality until we can handle it, whether it's in engineering, manufacturing, physics, food supply chains, sociology or anything else. This is effectively how our society operates: we mostly trust that problems we don't deal with directly are taken care of by somebody else in good faith. That is not to say we can't mitigate reality if we're willing to pay the price, but except in mathematics we are never building on first principles.


> but except in mathematics we are never building on first principles

In practice, mathematicians also assume "that problems we don't deal with directly are taken care of by somebody else in good faith". We use theorems that others proved, and we trust that the authors and the reviewers did their job, without inspecting the proofs ourselves. Sometimes mistakes slip through, but they are usually caught fairly quickly, and can be fixed locally. In other words, we are less strict than we like to think we are, but we get away with it perfectly well.


Minix is still under continued development but Intel isn't sharing their tree.


The wonders of non-copyleft open source.


Let's not presume the development would happen if the code was copylefted.


Right, just lots of proprietary "modules" and firmware blobs would have been developed.


So google shares their modified Server-Linux-Kernel with you?

The wonders of copyleft...


So you have a Google server running in your PC? Copyleft only requires disclosure for source code that's actually shipped (which definitely applies to Intel).


That's what I meant, the difference is not so big; I also have the Intel ME not running, but lots of firmware blobs.


During the pandemic, I found myself unemployed with mountains of free time.

I knew I specifically wanted to dig into minix as a pastime. In my mind, the text book(s), and a fully featured Unix with a small code base were a unique combination...

That is, until I sat down to actually buy the book. The $200 price tag was beyond sticker shock. It was literally offensive.

It didn't take long to find Operating Systems: Three Easy Pieces (OSTEP)[1]. The book can be had legally for free, or at an affordable price. And the introduction contained (among everything else) a brief manifesto admonishing the crazy high prices of college textbooks.

The book is funny, technically excellent, and also features a tiny-sized Unix like operating system: xv6.

Pouring my hours into this book and learning the xv6 operating system was time well spent.

[1]: https://pages.cs.wisc.edu/~remzi/OSTEP/?source=techstories.o...


What job did you get with your knowledge?


From Unix v6 to Advanced Programming in the UNIX Environment is a tiny step, and knowing C plus Unix/POSIX is a huge advantage today.


I was looking at this the other day to learn OS stuff. Too bad, as it seems like a very nice design.

Does anyone have any OS course / book recommendations?

I've worked my way through the excellent MIT course on xv6 [0], but I'm not sure what to work on next. Something related to Linux or one of the BSDs would be nice to see how things are done in the real world.

I cannot recommend this MIT course enough for those getting started. The projects are set up in a very nice way (i.e. if you can't complete one tricky assignment, you don't need it as a prereq for later ones) and the code is very simple. They've also gone to great lengths to set it up in a simple way (e.g. using cross-compilation for RISC-V on qemu). It's also a great experience for really understanding OSes, as you'll make mistakes that will leave you scratching your head until you realize you messed up your page table and yeeted your kernel stack out of your address space.

[0] https://pdos.csail.mit.edu/6.828/2020/index.html


There's a nice free OS textbook by Remzi and Andrea Arpaci-Dusseau named Three Easy Pieces (https://pages.cs.wisc.edu/~remzi/OSTEP/), which was recently featured on this site (https://news.ycombinator.com/item?id=26051386). There are some exercises from the textbook that use xv6.

Here are some other nice OS textbooks from a Unix standpoint that I've used over the years whenever I need to study some aspect of Unix in depth:

- The Design of the UNIX Operating System by Maurice Bach (1986)

- The Design and Implementation of the FreeBSD Operating System, 2nd Edition by McKusick, Neville-Neil, and Watson (2014)

- UNIX Internals: The New Frontiers by Uresh Vahalia (1996) (note that this book covers the implementations of Unix variants that were contemporary in 1996)


Wow thanks! These are awesome resources!

Do these books (or perhaps courses around them) contain a graduated set of exercises for understanding?

That's the thing that was so great about the MIT course, but it's been difficult to find that elsewhere.

A similar course with a great pedagogical progression was CS631 at Stevens [0], which went over Advanced Programming in the Unix Environment with a bunch of videos and exercises.

[0] https://stevens.netmeister.org/631/


I just pulled these books off my shelf, and all three of them have exercises at the end of each chapter (though there is no answer guide, and some of the questions are actually research problems).


It depends what you're looking for. If you don't know, read https://wiki.osdev.org/What_order_should_I_make_things_in and find what sounds interesting to you. Then either start or join a project to do said stuff you find interesting. If you're looking for publications, most academic operating system projects have a bunch of papers (https://wiki.minix3.org/doku.php?id=publications, http://www.helenos.org/wiki/Documentation).

Personally I'm having fun hacking on SerenityOS. So far I've done a NE2000 network card driver, net booting, fixes to boot on an old Athlon XP computer, fixes to the ext2 filesystem, printing line numbers in the JavaScript interpreter backtraces, some keyboard accessibility stuff (https://github.com/SerenityOS/serenity/commits?author=boricj)... Basically any random thing that I find fun to work on.


* http://jdebp.uk./FGA/operating-system-books.html

I must put in the review one day. (-:


Genode running on top of seL4 seems very promising!!!

But there's still no minimal decent desktop (e.g.: lxqt)


I could be wrong, but isn't Intel's management engine running Minix?



The article says it's a closed-source variation, which is no doubt being actively worked on by Intel.


Probably not too actively, tbh. This could easily be a "vendor and forget" situation.


This raises the question: has Intel now taken over project ownership?


As someone else mentioned here, MINIX is BSD-licensed rather than using a copyleft license like GPL.

Intel is not required to share the source code (and they don't).

If MINIX as a publicly developed project does die then it's a data point for copyleft licensing being more robust.


If it was licensed as copyleft it probably wouldn’t get used by Intel in the first place.


But now they're stuck with an out-of-date OS that's never getting any security updates. Ever.

And it's exposed on the network because it's used in the Intel Management Engine.

Meanwhile the Linux kernel is going strong with many companies helping maintain it.

Seems like a shortsighted decision on Intel's part.


The ME is 32 bit x86. Linux support there is pretty weak these days. I don't know that Intel has a lot of good choices.


Is Linux kernel support for i386 really that weak? I know they dropped 486 a few years ago and Debian dropped 586 (but the Linux kernel hasn't), and 686 is definitely still widely used.

Linux only drops support when nobody uses the architecture anymore. And if Intel is using it for ME then there's a big case to be made for keeping it.



Just a correction after some quick Googling: Linux dropped 386 in 2012, but 486 is still supported as best I can tell.


That's like saying Apple got stuck with an out of date browser because KHTML development slowed.


Who would have thought in ~2002 that KHTML/Konqueror would, in a background/seeding sense, literally do just that...


Not in a way that matters for the public/foss version


With no commits since 2018, it seems the public/FOSS version has effectively died.


RIP: Minix was the first "Unix like" I was able to tinker with. It was the very early 90s and I was taking a computer repair class (learning DOS, Mac, and Netware). Eventually I asked a teacher, "what is Unix?" and he said, "oh I've used it once or twice, it is quite elegant compared to DOS..."

Sounded interesting, and somehow I found a copy of Minix for my 386 that I cobbled together from parts. Not sure where I got it, but probably downloaded from a BBS. A year or two later after getting a job, I'd discover Sun, SGI, and Linux. My experience with Minix helped me get around at the shell.


First ran Minix in the late 1980s on an 8088 PC with two floppy drives and no hard drive. I think it had 512K RAM.

Used it for an undergrad operating systems course. Never used it again.


Likewise. It was an intro OS-hacking platform, fine for learning about operating systems hands-on but clearly not up to par for serious use. Once learned, it inspired many students to delve into other competing - and more exotic - operating systems; having had a good look around the subject, we then found the real world was running on a small number of robust platforms, and Minix was never even close to being one of them.


Last time I checked it was the most popular OS in the world:

https://www.networkworld.com/article/3236064/minix-the-most-...


Are there really that many Intel processors?


There is a good chance that this isn't true. For example, I don't think most Android phones have any Minix in them, and many cloud servers are running more than one instance of Linux vs. the one instance of Minix.


That's an extremely bad article and I wouldn't take any of its claims seriously


Including the claim that Minix is running on modern Intel CPUs behind the scenes? This is referenced at:

https://en.wikipedia.org/wiki/Intel_Management_Engine


Didn't Tanenbaum retire about that time?


He still maintains the famous electoral college site, though as a Hillary fanboy his activity dropped during the Trump years. But still much more than on Minix, which is now run by Intel internally.


The group linked from the project page is somewhat active: https://groups.google.com/g/minix3?pli=1


And someone seems to have asked the same question back in January: https://groups.google.com/g/minix3/c/CaGIAWYbzUo


This is sad. I have fond memories of running MINIX on the Atari ST - my first encounter with Unix.


I hope the same doesn't happen to NetBSD in particular, but also to the other BSDs in general.


What advantages does NetBSD have that distinguish it from other BSDs and other existing OS options?


I read the GP comment as feeling that NetBSD was more vulnerable to developers losing interest in it.


Yeah, and I was (sincerely) curious why they’d be disappointed if that happened.


Check out Redox OS. Takes a lot of inspiration from Minix.


I suppose https://fuchsia.dev/ is another option if you wanted a microkernel.


I consider https://www.genode.org the state of the art.


It does seem like they might have a leg up on security.

But if it's written in C..


Firstly, it is C++, not C.

Secondly, non-C/C++ use in OSs is uncommon, and thus considered research.

Thirdly, the microkernel with the strongest formal proof of correctness, seL4, is written in C.


See also Haiku OS[1], same but from BeOS.

Looks very promising, but they are still in beta. [1] https://www.haiku-os.org/


Mmm. Redox has a microkernel like Minix, but written in Rust, so that it's potentially safer or at least easier to reason about what might be unsafe and why.

Haiku is a conventional monolithic design, like BeOS, in fact arguably more so than shipped releases of BeOS: BeOS had a separate TCP/IP stack which made its network performance abysmal, Be Inc. had been working to fix that by just plonking the entire stack inside the kernel, and Haiku copies this approach. Like BeOS, Haiku is written in some specific dialect of C++.

So the main similarity is that Haiku also hasn't shipped a version 1. Although that's maybe more striking after 20 years of development than Redox's 5-6 years.


Building an operating system is very hard. I'm not surprised that Haiku is not yet 1.0. I would be surprised if Redox OS hit 1.0. It takes a lot of investment, and unless you are targeting a few devices (like Apple does) it is almost impossible to have perfect compatibility with the huge variety of PC hardware out there.


I'm having trouble finding the thread, but I believe the lead developer has stated that even in the kernel, the code only sits at something like 20-30% unsafe anyway, which is a huge reduction in overall vulnerable surface area.


Netcraft confirms it.


and in Soviet Russia Minix develops you...


As coroner I must aver, I have thoroughly examined it, And it's not only merely dead, It's really most sincerely dead.


Interesting, I don't see any announcement of a fork of Minix.


Every technology dies unless there is a human alive who cares enough about it.

For futureproofing it, perhaps make a Docker container of it


> For futureproofing it, perhaps make a Docker container of it

You can't (directly) put an OS in a Docker container. Docker is an application container technology, not an OS virtualisation technology. Apps inside Docker run in user mode under the Linux (or less commonly, Windows) kernel.

What you can do is put an emulator in Docker, and put the OS in the emulator image. However, if your OS runs on a common platform (e.g. x86 or x86-64), there is probably not much value in creating a Docker container vs just serving up the VM (such as a disk image or OVA file) for use with QEMU or VirtualBox or whatever.

If you are dealing with a more obscure platform, it can make more sense to package the image and emulator together into a Docker container. For example, I put the 1970s vintage IBM mainframe operating system MVS 3.8J [0] in a Docker container. I think that makes more sense because it requires a less common emulator (Hercules as opposed to QEMU or whatever), and Hercules needs to be configured with a specific configuration to make it work.

[0] https://hub.docker.com/r/skissane/mvs38j


What a cool project. Now you have me on the hunt for SIMH docker images for PDP 11 and Vaxen.


Just use SIMH and learn to do it, dammit.


This.

It's not hard. I got a copy of DEC VAX/VMS running inside SimH a decade ago, in one evening's work.


In my defense - I have used naked SIMH. I just thought the idea of point and grunt SIMH sounded fun.


Good points all. Also, this is awesome.

I think "machine preservation via emulation" may need to be a thing. Supposedly there are thousands of science papers, for example, whose supporting code may no longer trivially run...


Lol, in the end, Andrew S. Tanenbaum won the operating systems wars[*], since MINIX is now apparently the most widely used operating system in the world: https://www.networkworld.com/article/3236064/minix-the-most-...

Interestingly, Linux might be right behind it, at second place, dominating servers, cars, and all sorts of industrial devices.

But funnily enough in the end, when it comes to laptop and desktop computers, the Windows NT kernel, and the macOS Darwin kernel are still the reigning champions. It’s funny, because I think in the 90s, every OS designer was trying to take the place of MS-DOS/Windows/Mac. The two flamewar[*] buddies won out in the end, but likely not in the market segments they had initially expected to win.

[*] https://en.wikipedia.org/wiki/Tanenbaum–Torvalds_debate


Linux runs on every Android device, most smart TVs, servers, routers... These are way more numerous than Intel systems. Tanenbaum definitely lost this one.


Also, a myriad of networking equipment around the world is in the process of switching to Linux or already runs it.


The Linux kernel does.

Minix is a full OS.


There are 3+ billion Android smartphones alone, and that is excluding tablets, smart TVs, servers, and other gadgets, making the number much closer to 4 billion.

Intel's number wouldn't even be half of that.


The use of the Linux kernel on Android is an implementation detail, only exposed to OEMs.

Applications make use of Java and Kotlin frameworks, ISO C and C++ standard libraries and a couple of Android specific native APIs.


The fact that Android apps don't directly call the Linux API is not at all relevant (unless Google switch out the Linux kernel for an alternative like Fuchsia/Zircon). The discussion here is about OS market share.


That OS is Android; Linux is just a kernel.

Android can be running on top of Zircon or any BSD tomorrow, besides OEMs and device rooters, no one else would notice.

Doesn't matter how one tries to sell it, it isn't GNU/Linux and doesn't count as such.


Nobody's brought up GNU vs Android userspace. This entire discussion has been about the kernel market share, and Android uses the Linux kernel.


It does, and no one cares besides bragging rights from failed attempts to turn it into the "Year of Desktop Linux".


I think L4 has it beat. One variant alone, OKL4, has shipped in billions (possibly tens of billions) of mobile devices.


I would argue that L4 is closer to "unix like" as a class than it is to Minix or Linux. The L4 kernels share a common API, but not necessarily a common ABI. Of course it's complicated because L4 itself is minimalistic, and the common API means you can get a monolith with a pluggable L4 kernel so there isn't a true apples to apples comparison.


My phone runs Linux. My GF's phone runs Linux. My TV runs Linux, too. So does the set-top box (cable provider), and the Zipit Z2 runs Linux within OpenWrt.

Probably my telco's router runs Linux, too.

And I say this as a desktop OpenBSD user.



