Interesting! If they can really get it ticking this time, it would be fascinating to see how a microkernel plays out in 'popular' use.
It's sort of funny how the major kernels are primarily monolithic (please correct me if I'm wrong wrt recent versions of Windows), but academic research says microkernels are better. Worse is better? First-mover advantage?
I'm not an expert in this but I think the issue is one of tradeoffs. I think I remember reading (someone correct me if I'm wrong) that broadly generalized, microkernels have better security at the expense of performance, vice versa for macrokernels. See the "Tanenbaum–Torvalds debate" on Wikipedia.
Either way I'm interested to see if HURD will ever take off in any real sense. For example, could we see an Ubuntu/HURD mix in 5 or 10 years? Will it even matter with that kind of timeframe? Would there ever be any practical advantage to use HURD vs Linux besides more "freedoms"?
> I think I remember reading (someone correct me if I'm wrong) that broadly generalized, microkernels have better security at the expense of performance, vice versa for macrokernels.
that's true, but that kind of "security" isn't what matters in the post-google, massively distributed era.

microkernels made sense back when the key to uptime was hot-swappable hardware. e.g. if your nic goes awry due to a hardware problem, it can crash its driver. with a monolithic kernel, that would in turn crash the os, whereas with a microkernel the rest of the system should keep humming along just fine. that makes it possible to replace the failed component without taking down the single node the service runs on.
but nowadays, we know how to set up systems so that taking down an entire node (gasp!) won't harm the operation of the service as a whole. so i don't think microkernels are that relevant anymore.
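to make the isolation argument concrete, here's a minimal python sketch of the idea (a toy supervisor and a fake "driver" process; every name here is made up, it's not how any real microkernel works):

```python
# Toy model of microkernel-style fault isolation: the "driver" runs in its
# own process, so a crash kills only that process, and the supervisor can
# hot-swap in a replacement. All names are illustrative.
import multiprocessing as mp

def driver(fail: bool, conn) -> None:
    """Toy user-space driver. A crash here kills only this process."""
    if fail:
        raise RuntimeError("nic went awry")  # process exits nonzero
    conn.send("driver ok")

def supervise() -> str:
    """The 'rest of the system': notices the dead driver and swaps it out."""
    parent, child = mp.Pipe()
    bad = mp.Process(target=driver, args=(True, child))
    bad.start()
    bad.join()
    assert bad.exitcode != 0  # the fault was contained to the driver process
    # hot-swap: start a replacement driver; nothing else went down
    good = mp.Process(target=driver, args=(False, child))
    good.start()
    msg = parent.recv()  # receive before join to avoid pipe pitfalls
    good.join()
    return msg

if __name__ == "__main__":
    print(supervise())
```

the point is just that the crash stays contained to one process, and the supervisor can restart it while everything else keeps running.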
IIRC Tru64 (née Digital UNIX, née OSF/1) was a mainstream Unix based on the Mach microkernel.
The problem with microkernels is that their advantage (a component of the kernel can crash without taking down the box) has rarely been worth the disadvantage (all that message passing hurts performance). So "better" depends on your POV.
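To put a rough shape on the message-passing tax, here's a toy Python comparison of a plain in-process call vs. the same request bounced off another process over a pipe (a stand-in for a microkernel server; all names and numbers are purely illustrative):

```python
# Toy comparison of "monolithic" (direct call) vs "microkernel" (message
# pass per request) styles. Nothing here models a real kernel; it only
# illustrates the per-request IPC overhead.
import multiprocessing as mp
import time

def handle(x: int) -> int:
    return x + 1  # the "kernel service" itself is trivial

def server(conn) -> None:
    # echo-style server: each request costs a message pass each way
    while True:
        x = conn.recv()
        if x is None:
            break
        conn.send(handle(x))

def compare(n: int = 1000):
    t0 = time.perf_counter()
    for i in range(n):
        handle(i)  # "monolithic" style: a plain function call
    direct = time.perf_counter() - t0

    parent, child = mp.Pipe()
    p = mp.Process(target=server, args=(child,))
    p.start()
    t0 = time.perf_counter()
    for i in range(n):
        parent.send(i)  # "microkernel" style: message out...
        parent.recv()   # ...and a reply back, per request
    ipc = time.perf_counter() - t0
    parent.send(None)   # tell the server to shut down
    p.join()
    return direct, ipc

if __name__ == "__main__":
    direct, ipc = compare()
    print(f"direct: {direct:.6f}s  ipc: {ipc:.6f}s")
```

On a typical box the pipe round trips come out far slower per request than the direct calls, which is exactly the overhead real microkernels work hard to minimize.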
The BSD-style kernel "subsystem" of XNU, Darwin's kernel, is statically linked with Mach and executes in privileged mode. So, while it "has" Mach and some parts of the kernel interact with other parts via Mach port interfaces, XNU isn't really a microkernel as classically defined anymore since it provides all the facilities you'd expect of a non-micro kernel.
It's been a while since I've kept up with the kernel space, but when I was last interested in it there was a trend toward hybridization: Linux was adopting some micro-like architectures, as was Windows. OS X could probably be considered a hybrid from the start.
Actually I believe that's incorrect: stock Mach 2.x was considered a microkernel (though, like Mach 3.x, not always used as one in practice), and versions of XNU since the first client release of Mac OS X are built on a modified Mach 3.0 kernel. But you're right that it's not used in a microkernel-like way.
You're partially right about Windows. Windows NT has, since the beginning IIRC, used a hybrid kernel in which things like drivers and IPC are still in the kernel, but the application subsystems and file servers run in user space. Plan 9 has a very similar design.