On Sat Mar 11, 2006 at 16:25:46 +0900, Sungkwan Heo wrote:
I measured network I/O performance of native Linux-2.4 and Fiasco-1.2+L4Linux-2.4 using the netperf benchmark.
The TCP stream test results are as follows:
                                  Native Linux    L4Linux
  Recv socket size (bytes)        87380           87380
  Send socket size (bytes)        16384           16384
  Send message size (bytes)       16384           16384
  Elapsed time (s)                10.002          10.002
  Throughput (MB/s)               11.22           11.206
  Send-side CPU utilization (%)   11.359          4.35
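For reference, numbers like these typically come from a netperf run along the following lines. This is a hedged sketch, not the poster's exact command; 'server' is a placeholder for the host running netserver.

```shell
# Placeholder invocation; 'server' stands in for the remote netserver host.
# -t TCP_STREAM selects the TCP streaming test,
# -l 10 runs it for 10 seconds (matching the 10.002 s elapsed time above),
# -c asks netperf to report local (send-side) CPU utilization
#    next to the throughput figure.
netperf -H server -t TCP_STREAM -l 10 -c
```

Note that the -c utilization figure is computed by the local kernel's accounting, which is exactly what is in question below.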
These figures do not really show much, except that both configurations are able to saturate your fast ethernet connection.
The last two lines show the network I/O throughput and the CPU utilization spent on that I/O. One odd thing is the L4Linux CPU utilization: it reports far fewer CPU cycles than native Linux does.
In my humble opinion this is to be expected, because Fiasco schedules the L4Linux application (netperf) as an L4 task, so the L4Linux kernel does not have accurate CPU-usage information for the application.
Is it correct?
The accounting in 2.4 is not as accurate as it should be, due to its design; 2.6 is better in this regard. Additionally, you are looking at L4Linux's utilization rather than the whole system's, and I think the latter is what matters here. l4con will show you the overall CPU utilization when you start it with '--cpuload' and Fiasco with '-loadcnt'.
Adam