What happens on timeslice overrun?
Udo A. Steinberg
us15 at os.inf.tu-dresden.de
Tue Nov 22 20:14:34 CET 2005
On Wed, 21 Sep 2005 18:29:03 +0200 Marcus Brinkmann (MB) wrote:
MB> I hate to distract from the real issue, but I should note that
MB> volatile does not do what you describe here. If you need to make sure
MB> that you see the write of another thread, you must use a memory
MB> barrier or another proper synchronization primitive. "volatile" is
MB> not the answer.
I argue that you do NOT need any memory barrier in this case. Cache
coherency ensures that as soon as CPU1 writes the cache line containing
deadline_miss = 1, that line is invalidated in CPU0's cache; the next
read of deadline_miss on CPU0 then fetches the line from CPU1, and the
line ends up shared on both CPUs.
You need a memory barrier to enforce program order between two loads, two
stores or a load and a store (on a particular CPU) on architectures that
can reorder such memory operations.
The example code does not rely on the order of reads or writes on a
particular CPU. It only does:
deadline_miss = 0;

CPU0:                      CPU1:
-----                      -----
if (deadline_miss)         deadline_miss = 1;
    do_something_about_it
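For concreteness, here is the same pattern written out as a self-contained
C sketch. It uses C11 atomics (which postdate this thread) purely to spell
out what is meant; the function names are placeholders, not the code under
discussion:

#include <stdatomic.h>

static atomic_int deadline_miss = 0;   /* shared flag, initially 0 */

extern void do_something_about_it(void);

/* CPU1: detects the overrun and raises the flag */
void report_overrun(void)
{
    /* Relaxed store: no ordering against other memory operations is
       required here; cache coherency alone makes the new value visible
       to CPU0 on its next read of the line. */
    atomic_store_explicit(&deadline_miss, 1, memory_order_relaxed);
}

/* CPU0: polls the flag */
void check_overrun(void)
{
    if (atomic_load_explicit(&deadline_miss, memory_order_relaxed))
        do_something_about_it();
}

The point is that the flag is a single word and nothing is inferred from
the order of two accesses on the same CPU, so no barrier is needed.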
MB> For an extensive analysis, please see for example sec. 5 in
MB> "C++ and the Perils of Double-Checked Locking":
MB>
MB> http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
This is an example where ordering must be enforced. What is described in
this paper is similar to:
foo = inited = 0;

CPU0:                      CPU1:
-----                      -----
foo = 1;                   while (!inited);
//wmb                      //rmb
inited = 1;                use foo;
Here you must prevent CPU0 from reordering the writes and CPU1 from
reordering the reads.
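Again only as a sketch, the same publish/consume pattern with the barriers
spelled out via C11 release/acquire operations (the thread predates C11;
the function names are illustrative):

#include <stdatomic.h>

static int foo;
static atomic_int inited = 0;

/* CPU0: initialize, then publish */
void producer(void)
{
    foo = 1;
    /* Release store plays the role of the wmb: the store to foo may not
       be reordered after the store to inited. */
    atomic_store_explicit(&inited, 1, memory_order_release);
}

/* CPU1: wait for publication, then consume */
int consumer(void)
{
    /* Acquire load plays the role of the rmb: the load of foo may not
       be reordered before the load of inited. */
    while (!atomic_load_explicit(&inited, memory_order_acquire))
        ;   /* spin */
    return foo;
}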
-Udo.