On Monday 08 August 2005 21:57, Martin Pohlack wrote:
[...]
Yes, it goes away. But I thought that, if you have a real-time application running, it will not be influenced by some "time-sharing" application, which is what the logserver is supposed to be.
Just a short answer from me: this is of course only true if you don't use those services! How should the system protect you if you call a non-real-time service yourself?
Btw., using the logserver or any other printf-style output inside a real-time loop is a *very* bad idea. These services usually use very slow output devices (serial line, text-mode display).
As a general rule: you must decouple the output service from your rt-app. Of course, this cannot be achieved with synchronous IPC to a non-rt service ...
Ack.
One would typically use a memory buffer and either collect the data and output it *after* the experiment, or use an asynchronous shared-memory protocol to transfer the information to another IO service. (Note: do not use LOG here, as it also slows down the rt-part of the system by causing serious slowdown in hardware IO; Jork once measured a lot more than one µs per character!)
The logserver is normally built with "serial support", which means that it performs output to the serial console itself. Previous implementations of the logserver used the kernel debugging interface, which induced the bad behaviour Martin is talking about (because interrupts were disabled during character output).
The logserver with serial support does not influence real-time applications.
Frank