On 9/17/18, Paul Boddie <paul@boddie.org.uk> wrote:
> From what I've seen in L4Re in various places, there is a certain amount of
> device driver code registering itself for incorporation by application
> programs. I feel that this is a bit awkward - code getting run at program
> start-up to register itself in fixed-length arrays enumerating the available
> drivers (many of which may be irrelevant to a given platform) - and it
> surely blends different responsibilities into the same process, potentially
> leaving applications with access to I/O memory regions.
> What my own rather elementary work does is to separate drivers out into
> separate servers. Since many of them do not need to actively communicate
> with other components - they either initialise peripherals and do
> practically nothing or they share designated memory with other tasks - the
> only cost of employing multiple servers is the accumulation of duplicate or
> superfluous code contributed by each one of them in their standalone form,
> which is why I have been trying to use shared libraries as much as possible.
Yes, I'd say putting drivers into servers rather than libraries is the best idea, and putting shared state into libraries is a terrible idea, since it effectively bypasses memory protection if shared memory is used. Despite this, libraries with shared state (is there a name for this anti-pattern? I call it "cooperative state") seem to be common (on Linux, ALSA and Qt Embedded are notable examples), even though there is no reason why the shared state can't be placed in a server. I'm not at all sure why it's so widespread.
The only exception to this would be on a kernel that allows protecting libraries from the rest of a process (which could be rather tricky to implement and might have the same overhead as just running the drivers as servers in the first place).
> The virtual filesystem libraries also seem to promote a "client-side"
> approach to providing such functionality. Looking at different filesystem
> architectures is another of my current activities.
UX/RT will use a split architecture for its VFS, where read(), write(), and seek() will call kernel IPC APIs to communicate with the server directly, and all other filesystem-related functions will call the VFS component in the process server. This seems to be the easiest way to implement a VFS architecture that maps each file descriptor onto a capability (actually, a group of related capabilities) while still having reasonable performance.