User space filesystems for L4Linux

Uwe Geuder 5vwrpnxfb7 at
Fri Jul 11 18:33:57 CEST 2014


On Fri, 11 Jul 2014 00:16:57 +0300, I wrote:

> The next step is compression. Both ramdisks and cpio archives can be
> compressed, again Linux can dynamically recognize that and uncompress
> before usage. (I have only tested cpio.gz so far, but I understand
> also disk images work and both faster lzo and more aggressive bzip2
> and xz are supported.)

Just for completeness I might mention that I now have also tested
cpio.lzo and cpio.bz2.

Both worked as expected. Lzo results in a bigger image than gz, so
loading is slower, but decompressing is faster than with gz. The net
gain was negative in my case; gzip won the race.

For bzip2 it was the opposite: a smaller image that can be loaded
faster, but takes longer to decompress. The net gain was strongly
negative, so the gzip solution is still unbeaten.

After doing the comparison I found some tables comparing compression
methods and levels, which taught me that the compression level has a
major effect. I had run everything with -9, wrongly assuming that the
main penalty of using a high level is paid during compression (on the
host, where I don't care that much). From looking at those tables it
appears that lzma -e -9 might achieve the best boot-time results. Or
maybe -7, because otherwise I might run out of memory during
decompression. Let's see whether I can find time to verify this
estimate.
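A minimal sketch of the level experiment I have in mind, using xz on a
placeholder input (the file name and sizes are illustrative only).
Higher presets shrink the image, so there is less to load, but they
also raise the decompressor's memory requirement, which is why -7
might be the safer choice on a memory-constrained target:

```shell
#!/bin/sh
set -e

# Placeholder input standing in for the uncompressed cpio archive.
seq 1 50000 > image.cpio

# Compress the same input at two presets; -e is the "extreme" modifier.
for level in 7 9; do
    xz -c -${level}e image.cpio > image.cpio.xz.$level
done

# Compare the resulting sizes; memory needed at decompression time
# grows with the preset's dictionary size.
wc -c image.cpio.xz.*
```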

Most interesting from the L4 perspective, though, is the fact that xz
decompression in L4Linux failed with an L4 error. I think I will post
the details to this list soon under a more suitable subject.

No, I have not used the options from scripts/ because I
found it only after my testing. Of course, different options might
produce non-optimal results, but I don't see how they could cause
that crash.


Uwe Geuder
Nomovok Ltd.
Tampere, Finland
uwe.geuder at

More information about the l4-hackers mailing list