Increase IPC-Stream size

Adam Lackorzynski adam at
Sun May 17 23:01:29 CEST 2015

On Tue May 12, 2015 at 13:30:21 +0200, ba_f wrote:
> Am 2015-05-10 23:07, schrieb Adam Lackorzynski:
> >>I need to transmit a 4kB buffer and 4kB as response.
> >>What is L4's most elegant way to do so?
> >
> >Elegant depends probably :) One way without much bells and whistles is
> >creating a dataspace in the config script and attaching it on each side:
> > local ds = L4.Env.mem_alloc:create(L4.Proto.Dataspace, 4096);
> >
> > L4.default_loader:start({ caps = { shmds = ds:m("rw"); }}, "rom/one");
> > L4.default_loader:start({ caps = { shmds = ds:m("rw"); }}, "rom/two");
> >
> >Then attach it via L4Re::Rm::attach(). I guess you also want some kind
> >of notification. You could use standard IPC for that, or, better, use
> >Irqs, i.e. two, one for each side. libshmc wraps this, esp. setting
> >this up incl. the notification via Irqs.
> Alright, thanks.
> But I still have a question about Flexpages.
> So my problem with the shared DS is that I have one server but multiple
> clients.
> I could use one DS for each client, but this might waste too much memory
> on an embedded system.
> (I admit, this is a bit of a theoretical problem, since today's embedded
> systems are so "big" and strong.)
> I also could use only one DS on the server side, which every client uses
> separately.
> But this solution is not an option if sensitive data is transmitted
> between client and server.

You see the options yourself, good. So if the clients shall not see
other clients' data, you need to use a separate shared memory for each
client.
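
One way to set that up in the ned script (an untested sketch extending the
snippet above; the cap names shm1/shm2/shmds are arbitrary): give each
client its own dataspace and hand the server all of them:

  local ds1 = L4.Env.mem_alloc:create(L4.Proto.Dataspace, 4096);
  local ds2 = L4.Env.mem_alloc:create(L4.Proto.Dataspace, 4096);

  L4.default_loader:start({ caps = { shm1 = ds1:m("rw");
                                     shm2 = ds2:m("rw"); }}, "rom/server");
  L4.default_loader:start({ caps = { shmds = ds1:m("rw"); }}, "rom/client1");
  L4.default_loader:start({ caps = { shmds = ds2:m("rw"); }}, "rom/client2");

That way each client can only ever map its own buffer, while the server
sees both.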
> So is Flexpage the way to go? Is this even the reason why flexpages
> exist, besides shared mem?

Flexpages are a base mechanism, e.g. to map memory. For your use case
dataspaces are just fine. Your smallest shared memory is 4KB, whether
it's a dataspace or done by hand (mapping pages yourself). Dataspaces
implement the low-level work (with flexpages) so that we do not need to
do it again and again.
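
To make that concrete, attaching the dataspace on either side could look
roughly like this (untested sketch, error handling mostly omitted; "shmds"
is the cap name from the config snippet quoted above):

  #include <l4/re/env>
  #include <l4/re/rm>
  #include <l4/re/dataspace>

  int main()
  {
    // Get the 'shmds' capability handed in via the ned script
    L4::Cap<L4Re::Dataspace> ds =
      L4Re::Env::env()->get_cap<L4Re::Dataspace>("shmds");
    if (!ds.is_valid())
      return 1;

    // Let the region manager pick a free region and map the dataspace there
    l4_addr_t addr = 0;
    long err = L4Re::Env::env()->rm()->attach(&addr, 4096,
                                              L4Re::Rm::Search_addr, ds);
    if (err < 0)
      return 1;

    // addr now points at the shared 4KB buffer
    char *buf = reinterpret_cast<char *>(addr);
    buf[0] = 'x';
    return 0;
  }

For the notification you would then bind an Irq on each side, as mentioned
above, or simply let libshmc do both steps for you.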

Adam                 adam at

More information about the l4-hackers mailing list