Hi Paul,
I'm always happy to hear from you.
Dataspaces and regions are two different things. A dataspace is a piece of memory, which you can divide up and map to different places. When attaching, you tell the region manager which dataspace to consult in case of a page fault, and the region manager will map a piece (a region) of that dataspace to the faulting address.
You can have multiple regions in the region manager referring to the same dataspace as backing memory. For each region, the region manager stores its start and size in your virtual address space, a dataspace capability, and an offset into that dataspace which corresponds to the start of the region.
Of course, you can have multiple entries in the region map referring to the same dataspace, and even to the same offset.
Now the question is: what happens when you detach a region from the region map? To detach, you specify either just an address, or an address and a size. With just an address, any address within a region will unmap that region, so only a single region entry in the region map is affected. With an address and a size, you can detach multiple regions at once. In the process, the regions' memory is unmapped from your local address space.
So if I understand your case correctly, you attach a dataspace to some region (a1, s1) -> (d1, o1) and later unmap this single region. This should unmap all mappings between a1 and a1+s1. However, if other regions are attached, e.g. (a2, s2) -> (d1, o2), those will remain, and as soon as you unmap the d1 capability, you have stale entries in your region map.
To ensure you detach all regions in an area, you can use detach(start, size, ds, task); however, I just noticed that there is no C version of this particular C++ interface.
To answer your question: If you invoke detach on a specific region, all mappings in this and only this region are unmapped. Other regions served by the same dataspace are not touched.
I'm still curious about your setup, because your experience seems to differ. Can you give me some info about your attach calls?
I hope this sheds some light, and that my assumptions about what you are doing are not too far-fetched.
Cheers, Philipp
On 2/24/21 1:05 AM, Paul Boddie wrote:
On Thursday, 11 February 2021 01:32:58 CET Paul Boddie wrote:
Anyway, I'm now back to debugging concurrency issues, and perhaps other resource issues, once again. But it was useful to review this particular issue and to try and venture beyond a basic proof of concept towards something more robust.
Sorry for repeating my usual bad habit of following up to myself with questions and observations, but maybe there are some insights to be shared. :-)
I have been exercising my code, discovering issues (of course), and have been seeing interesting behaviour with regard to attaching and detaching dataspaces to and from tasks. My code deliberately does this a lot, with each thread of a "client" task attaching to a distinct dataspace provided via a capability, accessing the dataspace, and then detaching the dataspace (and also releasing the capability).
What I now wonder about is what the region manager does when asked to detach a dataspace and what happens to any memory mappings established within that dataspace's region. Obviously, during the lifetime of the association, map requests will have caused flexpages to be transferred to the accessing task, and thus virtual memory mappings will be defined within the dataspace region.
My observations indicate that a dataspace may be associated with a particular base address, but if the dataspace is detached and then another is attached later, the same base address may be associated with the new dataspace. Intuitively, this should be expected (the region manager is merely reusing address space) and not be problematic. After all, the new dataspace is distinct from the old one, and the expectation should be that any traces of the old one should have been removed.
However, my observations also seem to indicate that having attached such a new dataspace, accesses may proceed without any page faults occurring (and thus without any map requests being made), with the accessed data being that made available by the previous dataspace. You can probably understand now why I am doing all this testing!
So, does the region manager request the invalidation of virtual memory mappings when dataspaces are detached, or is it the job of the task providing the dataspace to unmap the regions involved? Or is there a function that I have overlooked to make the region manager invalidate mappings?
(Currently, I just use l4re_rm_detach, but l4re_rm_detach_unmap seems to be equivalent when the current task is indicated. My dataspaces will unmap memory but only if it is not exported by another dataspace, since there are several in existence at any given time.)
Not detaching dataspaces prevents these data conflicts and suggests that the problem has something to do with dataspace management and not necessarily some implementation issue within the dataspace itself. However, not detaching dataspaces is obviously not a solution.
As always, any helpful suggestions or recommendations are very welcome!
Paul
l4-hackers mailing list
l4-hackers@os.inf.tu-dresden.de
http://os.inf.tu-dresden.de/mailman/listinfo/l4-hackers