Hi!
When looking at the examples and forum archives I get the impression that L4Linux users tend to work with ramdisks. However, I understand that in the Linux world at large ramdisks are considered mostly obsolete; initramfs is the preferred solution:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/Do...
I replaced my disk image with a cpio archive suitable for initramfs and everything still works fine. Linux recognizes on its own whether it is a cpio archive or not. I understand my filesystem can now grow dynamically until Linux runs out of memory, so I dropped the ramdisk_size parameter from the kernel command line. Are there any drawbacks or pitfalls I am missing?
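For reference, I created the archive roughly like this (a minimal sketch; the directory and file names are made up for illustration):

  # pack the root filesystem tree into a cpio archive; newc is the
  # format the kernel's initramfs loader expects
  cd rootfs
  find . | cpio -o -H newc > ../initramfs.cpio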
The next step is compression. Both ramdisks and cpio archives can be compressed; again, Linux recognizes that on its own and decompresses the image before use. (I have only tested cpio.gz so far, but I understand disk images work as well, and both the faster LZO and the more aggressive bzip2 and xz are supported.)
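Producing the compressed variant is then just a matter of running the corresponding tool over the archive, e.g. (file name again made up):

  # the kernel detects the gzip magic at boot and decompresses on its own
  gzip -9 initramfs.cpio    # yields initramfs.cpio.gz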
However, when I tried to use my cpio.gz archive, it had no effect: someone had magically uncompressed my archive before making the U-Boot image. I tracked it down to the script src/l4/pkg/bootstrap/server/src/build.pl:
system("$prog_gzip -dc $file > $modname.ugz 2> /dev/null"); $file = "$modname.ugz" if !$?;
It unpacks everything that is gzip compressed (gzip -dc fails on non-gzip input, so $file is only replaced when the decompression succeeded). Maybe that is necessary for object or executable files; I see some assembly manipulation a couple of lines further down, which I have not analyzed. However, for disk images or initramfs archives I cannot see the point.
So I made the following hack:
diff --git a/src/l4/pkg/bootstrap/server/src/build.pl b/src/l4/pkg/bootstrap/server/src/build.pl
index 22382d6..a31743e 100755
--- a/src/l4/pkg/bootstrap/server/src/build.pl
+++ b/src/l4/pkg/bootstrap/server/src/build.pl
@@ -75,8 +75,13 @@ sub build_obj($$$)
   printf STDERR "Merging image %s to %s [%dkB]\n",
          $file, $modname, ((-s $file) + 1023) / 1024;
   # make sure that the file isn't already compressed
-  system("$prog_gzip -dc $file > $modname.ugz 2> /dev/null");
-  $file = "$modname.ugz" if !$?;
+  if ($file =~ m/ramdisk-arm.rd$/) {
+    system("false");
+  } else {
+    system("$prog_gzip -dc $file > $modname.ugz 2> /dev/null");
+  }
+  $file = "$modname.ugz" if !$? ;
+  print "using ", $file ;
   system("$prog_objcopy -S $file $modname.obj 2> /dev/null")
     if $strip && !$no_strip;
   system("$prog_cp $file $modname.obj")
This worked as expected: my disk image remained compressed and Linux could uncompress it at boot time. Of course it is a hack, because the file name should not be hard-coded there. Same question as before: am I missing anything here?
Note also that build.pl would fail to do this uncompression if a module were compressed with lzo, bzip2, or xz; the magic only covers gzip.
Yes, I have found the variable BOOTSTRAP_UIMAGE_COMPRESSION. It worked fine during image creation, but during boot U-Boot failed with:
  Verifying Checksum ... OK
  Uncompressing Kernel Image ... Error: inflate() returned -5
  GUNZIP: uncompress, out-of-mem or overwrite error - must RESET board to recover
  resetting ...
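For reference, I had enabled the compression roughly like this. The exact invocation is written down from memory, so treat it as an assumption rather than gospel:

  # the 'uimage' target and the value 'gzip' are my recollection,
  # not verified against the Makefiles
  make uimage BOOTSTRAP_UIMAGE_COMPRESSION=gzip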
I assume this could be fixed by modifying the memory map. However, because my kernel is very small compared to the user space image, and Linux has this nice dynamic support for various compression schemes, I think I prefer an uncompressed U-Boot image containing a compressed user space image.
Regards,
Uwe Geuder
Nomovok Ltd.
Tampere, Finland
uwe.gxuder@nomovok.com (bot test: humans correct 1 obvious spelling error)
Hi!
On Fri, 11 Jul 2014 00:16:57 +0300, I wrote:
> The next step is compression. Both ramdisks and cpio archives can be compressed; again, Linux recognizes that on its own and decompresses the image before use. (I have only tested cpio.gz so far, but I understand disk images work as well, and both the faster LZO and the more aggressive bzip2 and xz are supported.)
Just for completeness I might mention that I have now also tested cpio.lzo and cpio.bz2.
Both worked as expected. LZO produces a bigger image than gzip, so loading is slower, but decompression is faster than with gzip. The net effect was negative in my case, gzip winning the race.
For bzip2 it was the opposite: a smaller image that loads faster, but takes longer to decompress. The net effect was strongly negative, so the gzip solution remains unbeaten.
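The variants were produced with the standard tools along these lines (file names made up for illustration):

  lzop -9  initramfs.cpio   # -> initramfs.cpio.lzo
  bzip2 -9 initramfs.cpio   # -> initramfs.cpio.bz2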
After doing the comparison I found http://pokecraft.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZM... , which taught me that the compression level has a major effect. I had run everything with -9, wrongly assuming that the main penalty of a high level is paid during compression (on the host, where I don't care that much). From looking at those tables it appears that lzma -e -9 might achieve the best boot time; or maybe -7, because otherwise I might run out of memory during decompression. Let's see whether I can find time to verify this estimate.
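If I find the time, the next experiment would look roughly like this; the presets and memory figures are quoted from the xz man page as I remember them, so double-check before relying on them:

  # -7 limits the dictionary to 16 MiB (~17 MiB needed to decompress),
  # versus a 64 MiB dictionary (~65 MiB) for -9
  xz -7e initramfs.cpio     # yields initramfs.cpio.xz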
Most interesting from the L4 perspective, though, is the fact that xz decompression in L4Linux failed with an L4 error. I think I will post the details to this list soon under a more suitable subject.
No, I have not used the options from scripts/xz_wrap.sh because I found it only after my testing. Of course different options might give suboptimal results, but I don't see how they could cause that crash.
Regards,
Uwe Geuder
Nomovok Ltd.
Tampere, Finland
uwe.gxuder@nomovok.com (bot test: humans correct 1 obvious spelling error)
Hi,
On 11.07.2014 18:33, Uwe Geuder wrote:
> After doing the comparison I found http://pokecraft.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZM... , which taught me that the compression level has a major effect. I had run everything with -9, wrongly assuming that the main penalty of a high level is paid during compression (on the host, where I don't care that much). From looking at those tables it appears that lzma -e -9 might achieve the best boot time; or maybe -7, because otherwise I might run out of memory during decompression. Let's see whether I can find time to verify this estimate.
The drawback of high compression levels on small embedded systems is also the memory requirement: the man page names 65 MiB for -9, which is possibly too much for your platform.
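If I remember the tool's output correctly, you can check what a given archive will need with something like:

  # the verbose listing includes a "Memory needed" figure for decompression
  xz --list --verbose initramfs.cpio.xz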
> Most interesting from the L4 perspective, though, is the fact that xz decompression in L4Linux failed with an L4 error. I think I will post the details to this list soon under a more suitable subject.
To me this looks like you are running out of memory.
> No, I have not used the options from scripts/xz_wrap.sh because I found it only after my testing. Of course different options might give suboptimal results, but I don't see how they could cause that crash.
You could rebuild the cpio with a less aggressive compression level and limit the memory usage for decompression using the --memlimit-decompress=limit parameter, to see whether it still crashes.
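Roughly like this (file name made up, limit chosen arbitrarily):

  xz -7 initramfs.cpio                                      # less aggressive preset
  xz --memlimit-decompress=32MiB --test initramfs.cpio.xz   # fails if more memory would be needed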
Martin.