On 1/28/20, Martin Decky martin.decky@huawei.com wrote:
FYI: The microkernel-based OS Huawei is working on internally has Linux compatibility (both from the syscall API and from the drivers point of view) as one of its goals.
Is it natively Unix-like, or is the Linux compatibility layer some kind of containerized "penalty box" that doesn't really integrate into the system? Also, since you're not developing it publicly, that may limit its ability to compete with Linux, which of course has a completely open development model without copyright assignment. In addition, from one article I read, it sounded like even though it is going to be open source eventually, it was designed with the assumption that it would be running on tivoized devices with locked bootloaders, with no way for the user to get full control.
GNU Hurd obviously also tries to be GNU/Linux compatible (not on the syscall level, but on the glibc level). Many other microkernel-based systems (e.g. Genode, just to name one) maintain their own adaptation layers for hosting Linux device drivers.
From what I understand, Hurd doesn't implement a lot of Linux-kernel-specific APIs.
On 1/28/20, Paul Boddie paul@boddie.org.uk wrote:
I actually think that most people agree with you on this, at least when confronted with the notion for the first time. But I don't think such considerations should overrule every other consideration: it reminds me of people in the English-speaking world (particularly in my country of origin) who say that if they could be bothered to learn a foreign language, it would be "something like Spanish that a lot of people speak", justifying the choice in terms of all the millions of individual speakers with whom they could be having hypothetical conversations that, naturally, they wouldn't end up having anyway.
For my purposes, I would much rather have an OS that runs Linux programs natively than have to port every single application I want to run.
Your project being this one:
Is there a summary of it anywhere?
This is the closest thing to a summary that I have at the moment:
https://gitlab.com/uxrt/uxrt-toplevel/blob/master/architecture_notes
I wonder whether people don't already rely on things like L4Linux for such functionality. However, I don't personally think that having boxed-up driver frameworks is particularly elegant or plays to the strengths of microkernel-based paradigms. In L4Re, I found myself staring down bits of imported Linux kernel code and wondering whether things might not have been easier - and most certainly clearer - had something been developed independently.
Yes, it's true that it's maybe not the most elegant way to run drivers, but I think it's a worthwhile trade-off for broad hardware support. As far as UX/RT is concerned, it's not really that big a sacrifice because it will have relatively little vertical modularization anyway (e.g. instead of separate disk filesystem, partition driver, and disk device driver processes, all three will be incorporated into a single "disk server" process, much as on QNX; this should improve performance over most other microkernel OSes without really sacrificing much in the way of security or error recovery).
Of course, the real solution is to have a library of functionality that a variety of systems could use. Then, we wouldn't need to pick over the remains of some other project. Being familiar with this industry over a couple of decades, however, I am fully aware that people exhibit rather strong tendencies to insist that such cooperation is not possible, usually because of something super-special that they insist that only they are doing.
I'd be willing to cooperate on such libraries as long as they don't place unnecessary burdens on client code.
Dear Andrew,
Is it natively Unix-like, or is the Linux compatibility layer some kind of containerized "penalty box" that doesn't really integrate into the system?
Neither. The OS is NOT a Unix clone, but the Linux compatibility tries to be as tightly integrated as possible. Unfortunately, I cannot go into details. Windows Subsystem for Linux version 1 (not version 2) had a similar goal, but we target even tighter integration. Of course, it is a delicate balancing act, as I try to explain below.
Also, since you're not developing it publicly, that may limit its ability to compete with Linux, which of course has a completely open development model without copyright assignment. In addition, from one article I read, it sounded like even though it is going to be open source eventually, it was designed with the assumption that it would be running on tivoized devices with locked bootloaders, with no way for the user to get full control.
I cannot really add any comment to this. I am a researcher, not a manager :) In my previous email, I have merely pointed to the existence of our effort. I am not promoting it nor advocating our development model :)
From what I understand, Hurd doesn't implement a lot of Linux-kernel-specific APIs.
Yes. This is exactly the reason why I have made the distinction between "syscall-level compatibility" and "glibc-level compatibility".
For my purposes, I would much rather have an OS that runs Linux programs natively than have to port every single application I want to run.
Unfortunately, a bitter truth is that there will never be a better Linux than Linux.
From my experience, designing an inherently better (i.e. more robust, safer, more secure, etc.) OS with a microkernel design while at the same time providing almost complete compatibility with a legacy OS of a completely different design (e.g. Linux, Unix) inevitably corrupts both goals.
What do I mean by this? Having some kind of compatibility layer for running unmodified legacy programs is certainly a nice feature, but it should be a last resort option, not a first-class option for a microkernel-based system. We simply need to draw the line somewhere and cut ourselves off from the legacy ideas that are no longer useful [1]. There are viable ways (e.g. virtualization) for running 100% genuine Linux on top of a microkernel-based OS without compromising the microkernel design.
[1] https://blog.systems.ethz.ch/blog/2019/fork.html
Best regards
Martin Decky
On 1/29/20, Martin Decky martin.decky@huawei.com wrote:
Neither. The OS is NOT a Unix clone, but the Linux compatibility tries to be as tightly integrated as possible. Unfortunately, I cannot go into details. Windows Subsystem for Linux version 1 (not version 2) had a similar goal, but we target even tighter integration. Of course, it is a delicate balancing act, as I try to explain below.
Even if it's more tightly integrated, it's still a penalty box IMO if it's a second-class citizen within an incompatible native environment. Few people are going to port anything to such an environment no matter how good it is, because any new OS is going to have limited market share, leaving pretty much everyone to just write Linux programs, much as with OS/2 and Windows back in the 90s.
Unfortunately, a bitter truth is that there will never be a better Linux than Linux.
From my experience, designing an inherently better (i.e. more robust, safer, more secure, etc.) OS with a microkernel design while at the same time providing almost complete compatibility with a legacy OS of a completely different design (e.g. Linux, Unix) inevitably corrupts both goals.
What about QNX? It's one of the most practical and successful microkernel OSes in the world and it's natively Unix-like. It's not perfect, but none of those imperfections are really inherent to the Unix-like architecture. Pretty much every other OS developer seems to ignore it though. I really don't get why that is.
UX/RT will more or less take QNX's general architecture and enhance/fix a lot of things. Assuming it is successful, it will be only the second working QNX-like OS besides QNX itself of which I am aware (VSTa was the first, but it is now abandoned).
What do I mean by this? Having some kind of compatibility layer for running unmodified legacy programs is certainly a nice feature, but it should be a last resort option, not a first-class option for a microkernel-based system. We simply need to draw the line somewhere and cut ourselves off from the legacy ideas that are no longer useful [1]. There are viable ways (e.g. virtualization) for running 100% genuine Linux on top of a microkernel-based OS without compromising the microkernel design.
UX/RT won't just be a legacy-misfeature-for-legacy-misfeature compatible reimplementation of conventional Unix on a microkernel. There will be a lot of longtime Unix features that it will either throw out completely (e.g. setuid executables, binding of device nodes by major/minor numbers, utmp) or reimplement on top of new APIs (e.g. fork, which will be built on top of an API that allows creating a "blank" process and manipulating its state, and BSD sockets, which will be reimplemented on top of a filesystem-based API sort of like that of Plan 9). Even though it will be superficially familiar-looking in a lot of ways, it will still be quite different from conventional Unix.
Dear Andrew,
Even if it's more tightly integrated, it's still a penalty box IMO if it's a second-class citizen within an incompatible native environment. Few people are going to port anything to such an environment no matter how good it is, because any new OS is going to have limited market share, leaving pretty much everyone to just write Linux programs, much as with OS/2 and Windows back in the 90s.
I am not good at predicting the future. I can only try to influence it. But still I think you are overgeneralizing a bit here. Not everyone is writing Linux programs. Many people write Windows programs (whether we like it or not). Many people write programs for managed environments such as the JVM and .NET (among others) or use high-level languages such as Python, R and Haskell (among others) and don't really care much about the underlying OS. Many people actually write programs for niche environments (e.g. RTOSes).
Saying that Linux (or Unix, for that matter) is the entire world and that anything that does not try hard to mimic Linux (or Unix) is doomed to fail is just black-and-white thinking to me. The world is actually quite colorful :) and it is surely not static.
What about QNX? It's one of the most practical and successful microkernel OSes in the world and it's natively Unix-like. It's not perfect, but none of those imperfections are really inherent to the Unix-like architecture. Pretty much every other OS developer seems to ignore it though. I really don't get why that is.
I can assure you that I am definitely NOT ignoring QNX. I know the code base of the open-sourced release of QNX quite well and I could spend hours discussing what I like and what I dislike about it :) Plus, I have to deal with the legacy of QNX even more than I would like on a daily basis (wink-wink :)).
But, frankly, if QNX is indeed successful primarily because of being Unix-like, then it only demonstrates that being Unix-like is not a sufficient condition for world domination. I am not arguing that being Unix-like is not a necessary condition for _some_people_ (like you); I am just saying that it is of little importance or even a hindrance for _some_other_people_ (like me).
UX/RT will more or less take QNX's general architecture and enhance/fix a lot of things. Assuming it is successful, it will be only the second working QNX-like OS besides QNX itself of which I am aware (VSTa was the first, but it is now abandoned).
I sincerely wish you good luck regarding that! I honestly welcome every effort related to microkernels. And after all, the observable reality is the only true judge of our ideas :)
UX/RT won't just be a legacy-misfeature-for-legacy-misfeature compatible reimplementation of conventional Unix on a microkernel. There will be a lot of longtime Unix features that it will either throw out completely (e.g. setuid executables, binding of device nodes by major/minor numbers, utmp) or reimplement on top of new APIs (e.g. fork, which will be built on top of an API that allows creating a "blank" process and manipulating its state, and BSD sockets, which will be reimplemented on top of a filesystem-based API sort of like that of Plan 9). Even though it will be superficially familiar-looking in a lot of ways, it will still be quite different from conventional Unix.
OK. But now I don't really understand what the main conceptual difference is between your approach and the approach of some of the other microkernel-based systems that also provide some (sometimes native, sometimes optional) Unix compatibility (GNU Hurd, MINIX 3, Redox, Huawei's OS, maybe to a lesser degree Genode, HelenOS, etc.).
We all follow the same reasoning: Take inspiration in Unix where it makes sense, ignore the parts that are obsolete (or purely obscure from today's perspective), but provide some sort of "polyfills" for the use cases where running Linux programs (with or without recompilation) is really strictly needed by someone.
Of course, we will probably never agree where to actually draw the dividing lines and what specific implementation means are the best (what should be "native" and what should be a "polyfill"). And that's perfectly fine because we all have different motivations and tastes.
But I fail to see why you label the approaches of others as "penalty boxes" with respect to full genuine Linux compatibility while not labeling your own approach as such. If you plan to throw out setuid executables (your own words), then you can hardly achieve full Linux compatibility.
Best regards
Martin Decky
On 1/29/20, Martin Decky martin.decky@huawei.com wrote:
I am not good at predicting the future. I can only try to influence it. But still I think you are overgeneralizing a bit here. Not everyone is writing Linux programs. Many people write Windows programs (whether we like it or not). Many people write programs for managed environments such as the JVM and .NET (among others) or use high-level languages such as Python, R and Haskell (among others) and don't really care much about the underlying OS. Many people actually write programs for niche environments (e.g. RTOSes).
Saying that Linux (or Unix, for that matter) is the entire world and that anything that does not try hard to mimic Linux (or Unix) is doomed to fail is just black-and-white thinking to me. The world is actually quite colorful :) and it is surely not static.
A non-Unix-like OS isn't necessarily doomed to fail, but it's still not going to have as broad appeal as a Unix-like one.
But I fail to see why you label the approaches of others as "penalty boxes" with respect to full genuine Linux compatibility while not labeling your own approach as such. If you plan to throw out setuid executables (your own words), then you can hardly achieve full Linux compatibility.
To me a "penalty box" environment is one that segregates legacy applications into a container that is visibly separate from the rest of the system and has limited support for native features, as opposed to one that tries to be more of a wrapper/filter that integrates into the native environment.
And I'm not actually planning to support 100% compatibility with Linux. UX/RT will be compatible with the vast majority of Linux applications and will allow running them in basically the same environment as native programs (with overlays for Linux libraries, procfs, and sysfs). Anything that expects to manage logins won't work except with fakeroot (in which case it will only be able to manage logins within the fakeroot environment itself).
Even though setuid executables won't be supported natively (fakeroot will implement setuid within its own environment to the extent allowed by permissions) there will be a facility to mark particular binaries as running with elevated privileges, which will allow defining rules to determine when privilege escalation actually takes place and what kind of privileges the process gets (sort of like an implicit sudo but more flexible).
On Tuesday 28. January 2020 17.52.21 Andrew Warkentin wrote:
On 1/28/20, Martin Decky martin.decky@huawei.com wrote:
FYI: The microkernel-based OS Huawei is working on internally has Linux compatibility (both from the syscall API and from the drivers point of view) as one of its goals.
Is it natively Unix-like, or is the Linux compatibility layer some kind of containerized "penalty box" that doesn't really integrate into the system?
I would personally concentrate on the conceptual penalties rather than any performance penalties, which is often the emphasis in these kinds of discussions. For example, it bothers me somewhat that people are enthusiastic about things like the NetBSD rump kernel which doesn't come across as a particularly elegant or coherent solution to me. One ends up with some alien component that must either be completely relied upon to just work or which demands its own kind of expertise.
This is not to say that NetBSD, to continue with that example, doesn't offer anything. In fact, my C library experiments with L4Re utilise Newlib which is mostly a derivative of NetBSD's C library. Meanwhile, Minix 3 could be described as a NetBSD variant given the volume of NetBSD code involved (despite the many #ifdefs to activate Minix-specific sections). Maybe that is what one really ends up with when adopting whole subsystems from an existing project.
Also, since you're not developing it publicly, that may limit its ability to compete with Linux, which of course has a completely open development model without copyright assignment. In addition, from one article I read, it sounded like even though it is going to be open source eventually, it was designed with the assumption that it would be running on tivoized devices with locked bootloaders, with no way for the user to get full control.
GNU Hurd obviously also tries to be GNU/Linux compatible (not on the syscall level, but on the glibc level). Many other microkernel-based systems (e.g. Genode, just to name one) maintain their own adaptation layers for hosting Linux device drivers.
From what I understand, Hurd doesn't implement a lot of Linux-kernel-specific APIs.
Nor do the BSDs, but it is the reliance on Linux-specific features that ends up hurting Free Software. We only need look at the Free Software desktops and their reliance on software which demands an increasingly monolithic and "opinionated" stack of Linux-specific software. Things like Debian GNU/Hurd and kFreeBSD, whatever their merits, suffer from this creeping monoculture.
[...]
For my purposes, I would much rather have an OS that runs Linux programs natively than have to port every single application I want to run.
It depends on what your ambitions are for interoperability. From what I recall, the BSDs and maybe various proprietary Unix products sought to support ABI compatibility with Linux, meaning that you could run Linux executables - presumably Intel x86 flavoured - on those operating systems. (I think Spring also sought to support Solaris binaries in such a fashion, by the way.)
But I imagine that you're looking for source code compatibility. However, I don't think that portability should be too much of a concern for well-engineered software projects: we all lived with multiple Unix flavours and autotools for years and things went pretty well. And some projects deal with even more esoteric things like Mac OS X, Windows NT (and successors), and sometimes even their predecessors.
[Imported but perhaps not "containerized" drivers]
Yes, it's true that it's maybe not the most elegant way to run drivers, but I think it's a worthwhile trade-off for broad hardware support. As far as UX/RT is concerned, it's not really that big a sacrifice because it will have relatively little vertical modularization anyway (e.g. instead of separate disk filesystem, partition driver, and disk device driver processes, all three will be incorporated into a single "disk server" process, much as on QNX; this should improve performance over most other microkernel OSes without really sacrificing much in the way of security or error recovery).
My recent experiences with Linux drivers suggest that understanding what they do is arguably much better than relying on them to work, even in Linux! But particularly so if they end up in another environment altogether. The different driver frameworks in Linux are not well-documented, unless there's some O'Reilly book that everyone is supposed to buy.
My own experiments have separated components out into layers. This is partly for the conceptual convenience, but I also want flexibility in how components can be combined. Unfortunately, I don't have any real familiarity with QNX: I know it has a long and interesting history as a Unix-like system, but when looking at historically interesting systems, I tended to choose the ones that seemed more pertinent to the topics I was investigating at the time.
[...]
I'd be willing to cooperate on such libraries as long as they don't place unnecessary burdens on client code.
For some kinds of drivers, the logic should be pretty indifferent to what kind of framework they are used in, and fairly conventional functions implementing the principal operations should cover most of the work. (I say this with some confidence looking at the drivers I wrote for L4Re, even though most of them are rather simple.) Generally, frameworks are the things that tend to inhibit re-use because they start making demands on how everything is to be done in the system.
I thought a bit more about libraries after writing my last message, and I do think that there would be benefits in describing component interfaces, if only to have productive discussions about how functionality might be arranged in such systems. I would say that such elaboration of interfaces has been rather de-emphasised in things like L4Re, with libraries being used to hide away such details, but just as various modelling diagrams can help to understand software, so we might expect interface descriptions to help us reason a bit better about the systems we want to develop.
Paul
On 1/29/20, Paul Boddie paul@boddie.org.uk wrote:
Nor do the BSDs, but it is the reliance on Linux-specific features that ends up hurting Free Software. We only need look at the Free Software desktops and their reliance on software which demands an increasingly monolithic and "opinionated" stack of Linux-specific software. Things like Debian GNU/Hurd and kFreeBSD, whatever their merits, suffer from this creeping monoculture.
Yes, that's a problem I've come across more and more. That's exactly the reason why Linux compatibility will be a priority for UX/RT. I'm hoping I can make Linux world domination work for me rather than against me.
It depends on what your ambitions are for interoperability. From what I recall, the BSDs and maybe various proprietary Unix products sought to support ABI compatibility with Linux, meaning that you could run Linux executables - presumably Intel x86 flavoured - on those operating systems. (I think Spring also sought to support Solaris binaries in such a fashion, by the way.)
But I imagine that you're looking for source code compatibility. However, I don't think that portability should be too much of a concern for well-engineered software projects: we all lived with multiple Unix flavours and autotools for years and things went pretty well. And some projects deal with even more esoteric things like Mac OS X, Windows NT (and successors), and sometimes even their predecessors.
Yes, well-designed applications should usually be portable, but as you said it is becoming increasingly common for programs to depend on Linuxisms.
I thought a bit more about libraries after writing my last message, and I do think that there would be benefits in describing component interfaces, if only to have productive discussions about how functionality might be arranged in such systems. I would say that such elaboration of interfaces has been rather de-emphasised in things like L4Re, with libraries being used to hide away such details, but just as various modelling diagrams can help to understand software, so we might expect interface descriptions to help us reason a bit better about the systems we want to develop.
If you're talking about RPC interfaces, components that require them are something I would consider an unnecessary burden in many cases. UX/RT will limit its use of traditional dynamic RPC to cases where it is actually the best way to implement something (like a lot of desktop/GUI-related things; the process server will use a limited form of RPC that is purely static).
On Tuesday 28. January 2020 17.52.21 Andrew Warkentin wrote:
This is the closest thing to a summary that I have at the moment:
https://gitlab.com/uxrt/uxrt-toplevel/blob/master/architecture_notes
I felt that this deserved a separate message. :-)
Firstly, you've got your work cut out with all those goals and criteria, but I think that it is always useful to document such thoughts and ideas, although I find that it can also be rather overwhelming, too. I haven't read through everything and have only taken a quick look because there is a lot to read and digest on that page.
Given previous discussions about compatibility, I find it interesting that you have described specific details of certain features such as the naming of disks and partitions in the filesystem. I guess that you and I have different approaches: I would probably defer such details until later, maybe even leaving them open for different "personalities" or configurations of the system.
Indeed, I would say that a fair amount of the document could conceivably describe a kind of system personality that could be supported by other systems. To take the example of a password database exposed at /etc/passwd in the filesystem, all that would be needed to provide this in something like L4Re is a server pretending to be a file. (Of course, the provided filesystem mechanisms in L4Re arguably don't support the necessary modularity, which is how I ended up looking into the matter.)
One thing that I was looking for, and so it immediately jumped out, was the choice of C library. It seems very fashionable for people to choose musl-libc (or however it is meant to be written), and there is certainly some persuasive material suggesting it to be a "better" choice than other C library implementations, but when I looked at it, I found there to be rather a lot of system calls sprinkled around in places where they seem like optimisations, meaning that the assumption is that a syscall would be "obvious" or necessary at such a point in the code, whereas one might have expected a plain function call to something that may or may not incorporate a syscall.
Meanwhile, it also seemed that the library rather assumed a Unix-style collection of distinct syscalls, arguably making it less than ideal for adaptation to something like L4 with a minimal set of more generic syscall operations. Now I accept that your needs might be different from mine, but I wouldn't mind knowing the rationale for your choice (and for anyone else's choice, musl-libc or otherwise, for that matter). In my own experiments, looking for the easiest option (as usual, although nothing was actually easy), there were a couple of other libraries that looked more malleable or more readily usable (dietlibc was easiest to build, Newlib was easier to imagine modifying).
I will say that quite a few of your architectural goals seem possible with something like Fiasco.OC as the microkernel, at least going from my limited understanding of it and L4Re through experimentation. I would go as far as to say that we probably have broadly similar goals: I think that a filesystem-oriented approach is persuasive and is broadly accepted, although its benefits are often badly communicated (the now-familiar Hurd promotion of "translators", for instance) or seem arcane (various stuff related to Plan 9). When people have sought to question the approach by offering alternatives, like the concept of namespaces in Spring, I would argue that they have largely replicated the concept of a filesystem whilst failing to recognise what the benefits of a more generalised filesystem might be.
Anyway, I could probably continue like this for many more pages, but I hope that my words are some form of encouragement, and I look forward to more discussion if that interests anyone.
Paul
On 1/29/20, Paul Boddie paul@boddie.org.uk wrote:
Given previous discussions about compatibility, I find it interesting that you have described specific details of certain features such as the naming of disks and partitions in the filesystem. I guess that you and I have different approaches: I would probably defer such details until later, maybe even leaving them open for different "personalities" or configurations of the system.
Indeed, I would say that a fair amount of the document could conceivably describe a kind of system personality that could be supported by other systems. To take the example of a password database exposed at /etc/passwd in the filesystem, all that would be needed to provide this in something like L4Re is a server pretending to be a file. (Of course, the provided filesystem mechanisms in L4Re arguably don't support the necessary modularity, which is how I ended up looking into the matter.)
I've never really been a fan of multi-personality systems, and I'd say they are less relevant now than ever because of low diversity and ubiquitous hardware virtualization. There are only really two environments for which compatibility matters in most cases - Linux and Windows. Linux compatibility can be implemented in a natively Unix-like environment, and so can Windows compatibility (with Wine). If you really need to run an entire alternate OS environment it can just be virtualized.
Having things like personality-neutral services just complicates the design of the system. It's far easier if you can just write servers to the same API as applications (under UX/RT, all servers except for the root server will be completely normal processes). Multiple personality support would almost certainly make the minimalist file-oriented architecture I have planned a lot more difficult to implement.
One thing that I was looking for, and so it immediately jumped out, was the choice of C library. It seems very fashionable for people to choose musl-libc (or however it is meant to be written), and there is certainly some persuasive material suggesting it to be a "better" choice than other C library implementations, but when I looked at it, I found there to be rather a lot of system calls sprinkled around in places where they seem like optimisations, meaning that the assumption is that a syscall would be "obvious" or necessary at such a point in the code, whereas one might have expected a plain function call to something that may or may not incorporate a syscall.
Meanwhile, it also seemed that the library rather assumed a Unix-style collection of distinct syscalls, arguably making it less than ideal for adaptation to something like L4 with a minimal set of more generic syscall operations. Now I accept that your needs might be different from mine, but I wouldn't mind knowing the rationale for your choice (and for anyone else's choice, musl-libc or otherwise, for that matter). In my own experiments, looking for the easiest option (as usual, although nothing was actually easy), there were a couple of other libraries that looked more malleable or more readily usable (dietlibc was easiest to build, Newlib was easier to imagine modifying).
Under UX/RT the raw seL4 API won't be exposed at all because it's not stable. Only the Unix-like transport layer and VFS/process server APIs will be visible to user processes, and these will be in a "libroot" library separate from libc. For functions usually implemented as system calls on traditional Unix but not present in libroot, there will be another library separate from both libc and libroot. Programs written in non-C languages won't even have to link with libc at all.
I will say that quite a few of your architectural goals seem possible with something like Fiasco.OC as the microkernel, at least going from my limited understanding of it and L4Re through experimentation. I would go so far as to say that we probably have broadly similar goals: I think that a filesystem-oriented approach is persuasive and is broadly accepted, although its benefits are often badly communicated (the now-familiar Hurd promotion of "translators", for instance) or seem arcane (various stuff related to Plan 9). When people have sought to question the approach by offering alternatives, like the concept of namespaces in Spring, I would argue that they have largely replicated the concept of a filesystem whilst failing to recognise what the benefits of a more generalised filesystem might be.
I had thought of using Fiasco.OC at one point, but I settled on seL4 because it's much more lightweight and it's formally verified. There's also a runtime for Rust root servers on seL4 but not for Fiasco (back when I was considering Fiasco, I wasn't planning to use Rust). Another big problem with Fiasco is that it apparently requires a process to map a page into its own address space to be able to grant access to it, whereas on seL4 a process can map pages into the address spaces of other processes without having them mapped into its own.
On Wednesday 29. January 2020 15.32.35 Andrew Warkentin wrote:
I've never really been a fan of multi-personality systems, and I'd say they are less relevant now than ever because of low diversity and ubiquitous hardware virtualization. There are only really two environments for which compatibility matters in most cases - Linux and Windows. Linux compatibility can be implemented in a natively Unix-like environment, and so can Windows compatibility (with Wine). If you really need to run an entire alternate OS environment it can just be virtualized.
Maybe my idea of personalities is somewhat less extensive than yours. For example, providing a filesystem with the necessary files in different places, with libraries looking for those files in the appropriate places, is to me providing a different personality. Consider, for instance, if Solaris had provided a BSD personality rather than putting BSD-related files in one set of places, SysV-related files in another, and so on.
Such lightweight personalities might not be more sophisticated than something supported by chroot or a limited "jail" solution in a traditional Unix-like system, although making such things work within those kinds of solutions can be easier said than done. Systems like Plan 9 might have an easier time of it, though.
Having things like personality-neutral services just complicates the design of the system. It's far easier if you can just write servers to the same API as applications (under UX/RT, all servers except for the root server will be completely normal processes). Multiple personality support would almost certainly make the minimalist file-oriented architecture I have planned a lot more difficult to implement.
In an L4Re-based system, servers do use the same API as applications and they are just normal processes (but maybe I misunderstand your point). But my argument is that personalities would be built on generic services, not that anyone needs to specifically engineer services for certain personalities, or at least not for all functionality. There may be some special behavioural requirements - there was a FreeVMS project that may have had such requirements and struggled to implement them - but there should be a lot of common facilities that most personalities share.
I can think of awkward behavioural traits that might interfere with a generic services approach. For example, having to support Windows-like file locking semantics where an open file can obstruct usage of that file by other processes. This would raise questions about how processes expecting Unix-like semantics could co-exist with such an arrangement.
As a more mundane example of a personality, consider what L4Re provides already: it is an elementary virtual filesystem offering files with a limited amount of metadata to each task. That does not preclude the development of a Unix personality, nor does it prevent anyone from developing these facilities in a way that completely ignores things like processes and users and other Unix concepts, offering some kind of single-user personality or environment for certain tasks.
So, there could be C library support like that already in L4Re which doesn't really support Unix concepts, whereas another C library could contain the necessary support for interacting with Unix-enabling infrastructure. Such things would presumably be feasible to explore with Linux to an extent, but it probably isn't a particularly interesting avenue to pursue.
To go further, I can imagine a system where notions of users and privileges emerge purely from the configuration of the system. At the same time, there may be notions of users that emerge from filesystem metadata. It is entirely plausible that these notions may co-exist or be orthogonal in some way. But this is perhaps another topic.
Paul
l4-hackers@os.inf.tu-dresden.de