Hi,
That happens during the IPC loop. I am pretty sure there is no migration during my IPC. However, the helping lock will cause migration, right? But I do not think this happens in my case. Do you think something can delay the dequeue operation, so that _pending_rq is still in the queue when the next request happens? After I add some delay in the IPC path (a printf somewhere), the number of IPIs is exactly 4 for a flex-page mapping IPC, and 2 for a simple data-only IPC.
Thanks a lot. Yuxin
On Thu, Jul 24, 2014 at 4:46 PM, Adam Lackorzynski <adam@os.inf.tu-dresden.de> wrote:
On Mon Jul 21, 2014 at 10:09:27 +0800, Yuxin Ren wrote:
I think I have found the reason, but I cannot understand the logic of that code. In the Context::enqueue_drq method, there is a piece of code to determine if an IPI is needed:

    if (!_pending_rq.queued())
      {
        if (!q.first())
          ipi = true;
        q.enqueue(&_pending_rq);
      }

In my understanding, the logic behind this is: first check whether _pending_rq is already queued; if it is, we are done; otherwise enqueue it. Before enqueuing, check whether the queue is empty. If it is empty, an IPI is needed; otherwise this request will be picked up by the IPI sent for an earlier request. The problem is that I cannot imagine in what case _pending_rq is already queued, at least in my program. My scenario is: there is only one client and one server, on different cores, and the client sends IPC in a tight loop. I think each time the server receives a request, it should dequeue _pending_rq, so it should not be possible for the client to see _pending_rq still queued when it wants to do IPC. But I observed that this does indeed happen sometimes.
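For what it's worth, the quoted check can be modeled in isolation. The sketch below uses hypothetical simplified stand-ins (Pending_rq, Queue, maybe_send_ipi) rather than the real Fiasco.OC types, purely to illustrate the invariant being described: an IPI is raised only by the request that finds the queue empty; later requests piggy-back on that IPI, and a request that is somehow still queued is a no-op.

```cpp
#include <cassert>
#include <deque>

// Hypothetical stand-in for the kernel's per-context request object.
// The real Context/Drq types carry much more state.
struct Pending_rq
{
  bool is_queued = false;
  bool queued() const { return is_queued; }
};

// Hypothetical stand-in for the per-CPU request queue.
struct Queue
{
  std::deque<Pending_rq *> q;
  Pending_rq *first() { return q.empty() ? nullptr : q.front(); }
  void enqueue(Pending_rq *r) { q.push_back(r); r->is_queued = true; }
};

// Mirrors the logic quoted from Context::enqueue_drq: send an IPI only
// when this request is the first to enter an empty queue; otherwise the
// IPI already sent for an earlier request will drain this one too.
bool maybe_send_ipi(Queue &q, Pending_rq &pending_rq)
{
  bool ipi = false;
  if (!pending_rq.queued())
    {
      if (!q.first())
        ipi = true;
      q.enqueue(&pending_rq);
    }
  return ipi;
}
```

Under this model, the surprising observation in the mail corresponds to the third case: if _pending_rq was never dequeued after the previous round, the next call enqueues nothing and sends no IPI at all.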
As I read the code, this part handles the case of a migration that might happen to the thread. Your description does not sound like threads would move between cores. Do you see this during your IPC loop or before?
Adam
Adam Lackorzynski
adam@os.inf.tu-dresden.de
http://os.inf.tu-dresden.de/~adam/
l4-hackers mailing list l4-hackers@os.inf.tu-dresden.de http://os.inf.tu-dresden.de/mailman/listinfo/l4-hackers