What happens on timeslice overrun?
Udo A. Steinberg
us15 at os.inf.tu-dresden.de
Tue Nov 22 20:14:34 CET 2005
On Wed, 21 Sep 2005 18:29:03 +0200 Marcus Brinkmann (MB) wrote:
MB> I hate to distract from the real issue, but I should note that
MB> volatile does not do what you describe here. If you need to make sure
MB> that you see the write of another thread, you must use a memory
MB> barrier or another proper synchronization primitive. "volatile" is
MB> not the answer.
I argue that you do NOT need any memory barrier in this case. Cache
coherency ensures that as soon as CPU1 writes the relevant cache line
with deadline_miss = 1, the line goes invalid on CPU0. The next read
of deadline_miss on CPU0 then fetches the line from CPU1, after which
the line is in the shared state on both CPUs.
You need a memory barrier to enforce program order between two loads, two
stores or a load and a store (on a particular CPU) on architectures that
can reorder such memory operations.
The example code does not rely on the order of reads or writes on a
particular CPU. It only does:

    CPU0:                   CPU1:
    deadline_miss = 0;
    if (deadline_miss)      deadline_miss = 1;
MB> For an extensive analysis, please see for example sec. 5 in
MB> "C++ and the Perils of Double-Checked Locking":
This is an example where ordering must be enforced. What is described in
this paper is similar to:
    CPU0:                   CPU1:
    foo = inited = 0;
    foo = 1;                while (!inited);
    inited = 1;             use foo;
Here you must prevent CPU0 from reordering the writes and CPU1 from
reordering the reads.