User space filesystems for L4Linux

Martin Schröder martin.schroeder at openlimit.com
Mon Jul 14 12:46:25 CEST 2014


Hi,

On 11.07.2014 18:33, Uwe Geuder wrote:
> After doing the comparison I found
> http://pokecraft.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO ,
> which taught me that the compression level has major effect. I had
> run everything with -9 wrongly assuming that the main penalty of using
> a high level is during compression (on the host, where I don't care
> that much). From looking at those tables it appears that lzma -e -9
> might achieve best boot time results. Or maybe -7 because otherwise I
> might run out of memory during decompression. Let's see whether I can
> find time to verify this estimation.

Another drawback of high compression levels on small embedded systems is 
the memory required for decompression. The xz man page lists 65 MiB for 
-9, which may well be too much for your platform.
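The per-file decompressor memory requirement can be checked directly: `xz -lvv` reports a "Memory needed" field for an existing .xz file. A small sketch (file names are just examples, not from the thread):

```shell
# Create a sample file and compress it at two presets.
head -c 65536 /dev/urandom > sample.bin
xz -1 -c sample.bin > sample.1.xz
xz -9 -c sample.bin > sample.9.xz

# "Memory needed" in the detailed listing shows how much RAM
# decompression will take; preset -9 needs far more than -1.
xz -lvv sample.1.xz
xz -lvv sample.9.xz
```

This makes it easy to see, before booting, whether a given image fits the target's memory budget.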

> Most interesting from L4 perspective though is the fact that xz
> decompression in L4Linux failed with an L4 error. I think I will post
> the details to this list soon under a more suitable subject.

To me this looks like you are running out of memory.

> No, I have not used the options from scripts/xz_wrap.sh because I
> found it only after my testing. Of course different options might
> result in non-optimal results, but I don't see how they could cause
> that crash.

You could rebuild the cpio with a less aggressive compression level and 
limit the memory usage for decompression with the 
--memlimit-decompress=LIMIT option to see whether it still crashes.
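A sketch of that experiment, assuming the image is called initramfs.cpio.xz; the file name and the 32 MiB budget are illustrative, not values from the thread. Note that the kernel's built-in xz decompressor only supports the CRC32 integrity check, which is why scripts/xz_wrap.sh passes --check=crc32:

```shell
# Decompress the existing image, keeping the original (.xz) around.
xz -dk initramfs.cpio.xz

# Recompress at a lower preset with the CRC32 check the kernel expects.
xz -6 --check=crc32 -c initramfs.cpio > initramfs-6.cpio.xz

# Dry-run decompression under a memory cap; a non-zero exit here means
# the image would not fit in that budget.
xz --test --memlimit-decompress=32MiB initramfs-6.cpio.xz
```

If the decompression test passes under the cap but the boot still fails, the problem is likely not plain decompressor memory usage.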


Martin.
