L3 is a microkernel-based operating system developed by Jochen Liedtke and others at GMD's SET institute. Its key features are persistence, minimality and speed. L3 features synchronous message-based IPC, a simple-to-use external pager mechanism and a domain-based security design. [1] [2]
L3 knows about only a small number of kernel objects: tasks, threads and messages. (A task denotes a protection domain provided by the (μ-)kernel, and a thread is an activity within a task.)
IPC Architecture
A message is a collection of data that is passed to the kernel in one system call. The kernel is responsible for delivering this message to the intended recipient.
In L3, messages are sent by one thread to another thread without intermediate objects like ports. IPC is synchronous, i.e. the sender thread blocks until the receiver thread has received the message. These design decisions form a good basis for IPC performance optimizations. [3]
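To make the synchronous model concrete, here is a minimal sketch of a sender and a receiver thread. The call names (l3_send, l3_receive), the thread_id type and the message layout are hypothetical placeholders, not the actual L3 system call bindings.

    /* Hypothetical sketch of synchronous L3 IPC: l3_send() blocks until the
     * destination thread has received the message; there is no intermediate
     * object (such as a port) and no kernel-side message queue. */

    typedef unsigned long thread_id;

    struct message { int data; };   /* placeholder for the message components */

    extern void l3_send(thread_id dest, const struct message *msg);  /* blocks */
    extern void l3_receive(thread_id *from, struct message *msg);    /* blocks */

    void sender_thread(thread_id receiver)
    {
        struct message msg = { 42 };
        l3_send(receiver, &msg);   /* returns only after the receiver got it */
    }

    void receiver_thread(void)
    {
        struct message msg;
        thread_id from;
        l3_receive(&from, &msg);   /* blocks until some thread sends to us */
    }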
A message can contain components of four different types:
A message dope is a structure that describes how many components
of each of these types are to be processed. The buffer dope
describes the layout of the provided message buffer and therefore how
many items can be received, the send dope describes how many
items are to be sent. (The two dopes differ when you send less data
than you are prepared to receive.)
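As a purely illustrative C view of the two dopes (the actual L3 declarations and the names of the four component types are not reproduced here), one may picture them roughly like this:

    /* Hypothetical C view of L3 message dopes -- names and layout are
     * illustrative only, not the actual kernel declarations. */

    #define L3_COMPONENT_TYPES 4   /* the four component types mentioned above */

    struct dope {
        unsigned int count[L3_COMPONENT_TYPES];  /* items per component type */
    };

    struct message_buffer {
        struct dope buffer_dope;   /* how many items the buffer can receive */
        struct dope send_dope;     /* how many items are actually sent */
        /* ... the message components themselves follow ... */
    };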
Design of the Virtual Memory Subsystem
During the preparation of this document, L3's virtual memory (VM) subsystem was comprehensively documented for the first time. This section explains only the most important concepts of this subsystem. For a more detailed discussion, please refer to [5] and [6].
The semantics associated with a valid region of virtual memory are provided by memory managers which implement abstract memory objects called data spaces (discussed below). When a new memory region is established in an address space, a thread id of a manager providing those semantics needs to be specified.
From the kernel's perspective, data spaces are uniquely identified by a <user_task, memory_manager, data_space_number> triple, where the data space number is an identifier which is never interpreted by the kernel. It is the manager's job to decide whether the same data space number in two different user tasks refers to the same memory object or not (in the standard memory manager, it does not).
The kernel should be viewed as using main memory as a (directly accessible) cache for the contents of the various data spaces.
Every thread can be a memory manager for every other thread. There is no special initialization necessary to become a memory manager, and the kernel provides no special access protection for them (as with all L3 IPC). Once the id of some thread has been specified in a map system call, the kernel starts treating that thread as the memory manager for the region (and sends page fault messages to it, which the thread can, of course, ignore or answer with a refusal).
Memory managers willing to back a data space are involved in a simple IPC dialogue with the kernel [5]. Basically, the kernel reports page faults of client tasks, and the memory manager responds with a flexpage mapping to resolve the page fault and allow the kernel to establish a temporary page mapping.
Memory object manipulation in L3.
Figure [here] shows the basic memory object manipulation dialogue between the kernel and a memory manager: A client task references a portion of a memory range for which a temporary mapping has not yet been established. The kernel will send a page fault message to the respective memory manager and set up a receive operation on behalf of the client. The memory manager analyzes the request and responds with a flexpage directly to the client.
The kernel is free to forget a flexpage mapping at any time (perhaps when evicting the page from main memory), which may cause clients to send page fault messages requesting previously supplied pages again.
The memory manager may also forcibly remove a flexpage mapping
established earlier (with a system call), for
instance, when the manager is low on memory and wants to recycle
memory regions previously committed as flexpages.
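As a rough illustration, a memory manager's main loop might look like the following sketch. All identifiers (l3_ipc_receive, l3_ipc_reply_flexpage, struct pagefault_msg, and so on) are hypothetical placeholders for the actual L3 IPC bindings and message formats.

    /* Sketch of an L3-style memory manager loop: receive page fault messages
     * from the kernel and reply with flexpage mappings.  All identifiers are
     * hypothetical placeholders, not the real L3 bindings. */

    #include <stddef.h>

    typedef unsigned long thread_id;
    enum access { RO, RW };
    #define PAGE_SIZE 4096

    struct pagefault_msg {
        thread_id client;       /* faulting client thread */
        unsigned  data_space;   /* data space number (uninterpreted by the kernel) */
        size_t    offset;       /* faulting offset within the data space */
        int       write_access; /* nonzero for a write fault */
    };

    extern void  l3_ipc_receive(struct pagefault_msg *pf);
    extern void  l3_ipc_reply_flexpage(thread_id client, void *page,
                                       size_t size, enum access acc);
    extern void  l3_ipc_reply_error(thread_id client);
    extern void *lookup_or_load_page(unsigned data_space, size_t offset);

    void memory_manager_loop(void)
    {
        struct pagefault_msg pf;

        for (;;) {
            /* the kernel converts a client page fault into an IPC message */
            l3_ipc_receive(&pf);

            /* locate (or fetch from backing store) the page backing this offset */
            void *page = lookup_or_load_page(pf.data_space, pf.offset);

            if (page == NULL) {
                /* refuse the request; the fault cannot be resolved */
                l3_ipc_reply_error(pf.client);
                continue;
            }

            /* reply with a flexpage directly to the client, allowing the
             * kernel to establish a temporary page mapping */
            l3_ipc_reply_flexpage(pf.client, page, PAGE_SIZE,
                                  pf.write_access ? RW : RO);
        }
    }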
Kernel Resource Management
Except for the CPU and the kernel clock, the L3 kernel doesn't manage system resources at all; that is done by kernel-external tasks (discussed below).
The clock is managed by the kernel for two reasons: first, the system clock can well be thought of as part of the CPU, and second (and more importantly), the kernel requires access to the clock to implement IPC timeouts. [7]
Kernel-external Standard Facilities
L3 comes with a number of kernel-external standard facilities which handle many of the resources conventionally managed by an operating system kernel.
Device drivers are controlled in a uniform fashion using the Generic Driver Protocol.
Please note that L3 currently doesn't come with any C libraries to access system services. That's why our group has developed several libraries providing C bindings to many system services [5]; these libraries are now in regular use:
Mach is a microkernel originally developed at CMU. It features an object-oriented design approach, sophisticated memory management and message-based IPC.
Mach is meant as a foundation for operating systems built on top of it and provides several commonly required kernel abstractions, its key elements being tasks and threads (similar to L3's), ports, messages and memory objects.
IPC
Messages are sent via unidirectional communication channels, so-called ports. The ability to send messages to a port and to receive messages from it is stored in port rights. Port rights are kernel-protected capabilities: the kernel enforces these capabilities and controls the transfer of port rights between tasks. Because of these access-control properties, ports are also used to name other kernel resources: tasks, threads, devices and memory objects, among others. [8]
Unlike L3, IPC in Mach is asynchronous. When the send operation cannot deliver the message immediately, it stores the message in a message queue and returns. The receive operation then dequeues this message.
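The asynchronous style shows up directly in the mach_msg interface. The following minimal sketch allocates a port, queues a header-only message on it and dequeues it later; message body data (and the type descriptors Mach uses for it), error handling and version-specific header fields are omitted, and the message id value is arbitrary.

    #include <mach/mach.h>

    /* Minimal sketch of asynchronous Mach IPC: the first mach_msg call
     * returns as soon as the message has been queued at the port; the
     * second call dequeues it later.  Error handling omitted. */

    void ipc_example(void)
    {
        mach_port_t       port;
        mach_msg_header_t msg;
        struct {
            mach_msg_header_t header;
            char              slack[64];  /* room for any appended trailer */
        } rcv;

        /* allocate a port; we hold its receive right in our name space */
        mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);

        /* build and send a header-only message */
        msg.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
        msg.msgh_size        = sizeof(msg);
        msg.msgh_remote_port = port;            /* destination port */
        msg.msgh_local_port  = MACH_PORT_NULL;  /* no reply port */
        msg.msgh_id          = 100;             /* arbitrary message id */
        mach_msg(&msg, MACH_SEND_MSG, sizeof(msg), 0,
                 MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

        /* later (possibly much later): dequeue the message from the port */
        mach_msg(&rcv.header, MACH_RCV_MSG, 0, sizeof(rcv),
                 port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
    }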
The state associated with a port is:
Figure [here] shows a port and associated objects. These are described in detail below.
Available message types are:
There are three kinds of port rights:
Port rights are system-wide identifiers for ports. They are stored in kernel-protected port name spaces and can only be manipulated by tasks via port names.
Mach's virtual memory design is one of the most remarkable aspects of the system. It is layered into a simple hardware-dependent portion and a hardware-independent portion providing a complicated set of high-level abstractions. It claims to feature high performance, exploiting optimizations like lazy copying and shared memory. [8]
The abstractions provided are:
Memory objects are named by abstract memory object representative ports (the port specified to the kernel when mapping a memory object into a client task's virtual address space) and abstract memory object ports (the port the memory manager presents to the kernel when initializing a memory object; see below).
Memory object manipulation in Mach.
A memory manager is a task holding a send right to a memory cache object port. It is involved in a dialogue with the kernel to maintain the main memory cache of the abstract memory object backed by it. In this dialogue, the kernel sends requests to the memory manager's abstract memory object port, and the memory manager supplies requested data using IPC to the kernel's respective memory cache control port (see figure [here]).
Memory is passed back and forth between the kernel and the memory manager as out-of-line data in usual Mach messages. The kernel may return data it was supplied earlier at any time; when doing so, the returned data becomes backed by the default memory manager (a manager built into Mach, the ``pager of last resort'') until the manager explicitly deallocates it.
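Conceptually, the manager side of this dialogue can be pictured as the following loop. The identifiers used here are placeholders, not the real Mach pager interface signatures; in the actual interface, the kernel's request arrives as a message such as memory_object_data_request on the abstract memory object port, and the manager supplies the data on the memory cache control port.

    /* Conceptual sketch of a Mach external memory manager's request loop.
     * receive_kernel_request() and supply_data() are placeholders for the
     * real pager interface calls. */

    #include <stddef.h>

    typedef unsigned long port_t;   /* placeholder for a Mach port name */

    struct data_request {
        port_t control_port;   /* memory cache control port to reply to */
        size_t offset;         /* offset within the abstract memory object */
        size_t length;         /* amount of data the kernel asks for */
    };

    extern void  receive_kernel_request(struct data_request *req);
    extern void *read_from_backing_store(size_t offset, size_t length);
    extern void  supply_data(port_t control, size_t offset,
                             void *data, size_t length);

    void pager_loop(void)
    {
        struct data_request req;

        for (;;) {
            /* the kernel requests data to resolve a client page fault */
            receive_kernel_request(&req);

            /* fetch the requested range from backing store ... */
            void *data = read_from_backing_store(req.offset, req.length);

            /* ... and pass it to the kernel as out-of-line message data */
            supply_data(req.control_port, req.offset, data, req.length);
        }
    }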
Memory object setup dialogue in Mach.
The protocol to initialize an abstract memory object has a special meaning: It is used to authenticate the kernel and the memory manager to each other. For details about the employed authentication scheme, please refer to [8].(1)
During this dialogue (figure [here]), the kernel
needs to establish an association between the memory object
representative port right presented in the vm_map system call
and the memory object's abstract memory object port. That's why the
kernel communicates a send right for its memory cache control port to
the owner of the specified representative port and believes any task
which can present the rights of both the representative port and the
cache control port to be the memory manager for said representative
port. This dialogue also has the side effect of exchanging the cache
control port and abstract memory object port rights between the kernel
and the memory manager so that the memory object manipulation dialogue
discussed above can begin.
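For illustration, mapping a memory object into a task's address space via its representative port looks roughly as follows. The parameter list follows the classic Mach vm_map interface; exact types and parameters may differ between Mach versions, so treat this as a sketch rather than a definitive call sequence.

    #include <mach/mach.h>

    /* Sketch: map a memory object, named by its representative port, into our
     * own address space.  This call triggers the setup dialogue described
     * above between the kernel and the object's memory manager. */

    kern_return_t map_object(mach_port_t representative_port, vm_size_t size)
    {
        vm_address_t addr = 0;

        return vm_map(mach_task_self(),    /* target task: ourselves */
                      &addr,               /* kernel chooses the address */
                      size,                /* length of the mapping */
                      0,                   /* alignment mask */
                      TRUE,                /* anywhere: let the kernel pick */
                      representative_port, /* memory object representative port */
                      0,                   /* offset within the memory object */
                      FALSE,               /* share, do not copy */
                      VM_PROT_READ | VM_PROT_WRITE,  /* current protection */
                      VM_PROT_ALL,                   /* maximum protection */
                      VM_INHERIT_SHARE);             /* inheritance */
    }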
Kernel Resource Management
The Mach kernel virtualizes many physical resources conventionally
managed by operating systems: clocks, devices, hosts, nodes,
processors and processor sets (see [8] for an explanation of these resource classes). The
kernel also provides for resource accounting using ledgers, a
special kernel resource providing a mechanism to limit consumption of
another resource.(2)
Kernel-external Standard Facilities
Mach comes with extensive C libraries and an RPC interface generator, which make interfacing with the microkernel easy.
This section will compare the two microkernels Mach and L3. We will identify features available in both microkernels and features available in one but missing in the other.
Later in this section, we will take a closer look at the virtual memory subsystems of the kernels.
Features Available in Both μ-kernels
If one system were to be emulated on the other, a fair (albeit varying
from feature to feature) amount of emulation glue would be required.
Features Only Available in Mach
The following fundamental Mach features don't have an equivalent on L3:
External Pagers in Mach and L3. The figure shows the abstractions (objects) involved in the external pager mechanisms, and the various types of references the active elements (tasks and the kernel) hold, as explained in sections [here] and [here].
Both microkernels possess virtual memory subsystems which fit well into their respective kernel design philosophies: L3 strives for minimality, simplicity and speed, while Mach goes for object orientation, elegance and kernel-enforced reference protection.
Figure [here] compares the elements involved in Mach's and L3's external pager mechanisms. We see that these elements have similarities to their respective counterparts in the other kernel, but their interconnections (and interactions) differ a lot.
There are further, more subtle differences:
In L3, it works differently: Memory managers merely pass their data by reference using flexpage mappings; the memory remains under control of the memory manager. Both the kernel and the memory manager can remove the mapping at any time. [4]
L3 provides no equivalent interface: Region management is carried out entirely within memory managers (hence only available on single data spaces, not on virtual memory regions having any number of data spaces mapped), so client tasks need to communicate with their memory manager to manipulate memory regions.
There are two approaches to implementing Unix operating systems on microkernels: multi-server and single-server.
This work (and this section) focuses on the single-server approach; for a general discussion of multi-server vs. single-server vs. monolithic Unix design, please refer to [6] and [12].
Implementation of Lites on Mach
Lites is a free Unix implementation consisting of a single server and an emulation library running under Mach. It was developed by Johannes Helander at HUT and borrows code from CMU's Unix single-server implementation and UCB's 4.4BSD Lite. [12]
In this section we will look at certain aspects of the Lites implementation, namely the aspects relevant to porting the Lites server to another microkernel.
Types of IPC Used
Ports are not just used as communication channels; they're exploited in various other ways, too:
Besides ports and Mach's IPC facilities, Lites makes use of various other Mach facilities:
The L3 kernel is implemented using a 32-bit DOS assembler by Pharlap. This assembler runs in the DOS environment, and it cannot be passed on free of charge because of licensing restrictions. The memory management subsystem is written in CDL, but we did not consider using this little-known language. (Our group has even started a project to rewrite the memory management in C.)
ELAN, an ALGOL-like language, is used as the command language, for scripts and as the implementation language for almost all non-kernel programs. The ELAN compiler translates very efficiently into i386 assembler. [13]
A port of gcc (the GNU C Compiler) version 1.37 is available. Compared to the current gcc 2.x it lacks some features: its inline assembler facilities are limited, and it has no C++ support.
We didn't find a make-like program.