IPC Timeouts

Ronald Aigner ra3 at os.inf.tu-dresden.de
Thu Feb 24 09:15:30 CET 2005

Good Morning ;)
After reading tonight's discussion I believe that finite timeouts in IPC
do not have to be supported, not even as transfer timeouts (IPC
page-fault timeouts). Finite timeouts in IPC are impractical.

Concerning the use of IPC timeouts to sleep/wait for a finite amount of
time: I claim that separating the support for finite time events (see
below) from IPC will *not* make a microkernel more complex but rather
simpler. The reason for this claim is that it is easier to verify the
behaviour of IPC with either zero or infinite timeouts than with
additional finite timeouts. (I agree that this is painting in black and
white and leaving out all the gray.)

Finite time events are in some way already implemented in Fiasco: Udo
Steinberg provided mechanisms in the kernel to set up time-slices for
real-time execution. When a time-slice expired or a thread reached the
deadline of its period, an "event" (read: IPC) was sent to the thread's
preempter, which can react to this event. The real-time thread could
avoid the delivery of the event by synchronizing with the kernel: it
could wait for the start of the next period (or the start of the next
time-slice--albeit this is a rather simplistic description). This could
also be combined with the reception of an IPC, thus combining "receive
IPC with (absolute) timeout". I would argue that this mechanism can be
used (with modifications) to implement time events in a microkernel.

I disagree with the opinion that the complexity of a microkernel should
be measured by the number of its system calls. I find it rather complex
to multiplex a dozen flavours of IPC through one system call.

Marcus Brinkmann said:
> The reason why I hope we will get away with it is that we are pursuing
> a model where every task is self-paged.  Tasks get a contingent of
> physical memory, which is guaranteed over a long time (exact details
> how to negotiate that number are not determined yet).  There are many
> complications, but in the end this allows the client to make paging
> decisions itself, and thus it can wire down a region of memory
> effectively.
> Sounds good, but there is a problem, and I think this just illustrates
> the point you were making: The physical memory server needs to be able
> to revoke arbitrary mappings temporarily, for example to make space
> for DMA buffer regions or to reorganize memory for super-page
> allocation.  This means that the operation of the physical memory
> server is not transparent to the clients, and thus even to the server.
> (Only the kernel could make this operation transparent by an atomic
> copy-and-remap operation).
I assume that the available physical memory for a task (its working
set?) will entirely be used for IPC.  Therefore you might differentiate
between pinned and pageable physical memory.

Ronald Aigner
ra3 at os.inf.tu-dresden.de
