Re: [Minios-devel] [UNIKRAFT PATCH v2 04/10] plat/xen: Add support for communication with Xenstore daemon
Hi Yuri,
I removed the rest of the mail since this is the only open comment.
On 08/30/2018 12:42 PM, Yuri Volchkov wrote:
>>>> +/* Release a request identifier */
>>>> +static void xs_request_put(struct xs_request *xs_req)
>>>> +{
>>>> + __u32 reqid = xs_req->hdr.req_id;
>>>> +
>>>> + ukarch_spin_lock(&xs_req_pool.lock);
>>>> +
>>>> + UK_ASSERT(test_bit(reqid, xs_req_pool.entries_bm) == 1);
>>>> +
>>>> + clear_bit(reqid, xs_req_pool.entries_bm);
>>>> + xs_req_pool.num_live--;
>>>> +
>>>> + if (xs_req_pool.num_live == 0 ||
>>>> + xs_req_pool.num_live == REQID_MAP_SIZE - 1)
>>> I understand the second condition, but why do we need to wake up
>>> waiters if num_live == 0?
>>
>> The first condition is used for suspending. We should wait until all
>> requests have been answered, which is why we wait for num_live to
>> reach zero. Maybe it would be better to remove the first condition and
>> add it back once the suspend/resume functionality is ready.
> Probably removing it now is a good decision.
>
> However, how are waiters even possible if num_live == 0? Whenever at
> least one slot in the pool is free, all waiters wake up. If there is a
> free slot at the time xs_request_get is called, it will take one and
> will not wait.
For num_live == 0, the waiter is the thread trying to suspend the
xenstore thread (let's call it the suspend thread). The suspend thread
needs to wait until all requests have been answered. Function
'suspend_xenbus' at [1] should shed some light on what such waiting
would look like.
[1]
http://xenbits.xen.org/gitweb/?p=mini-os.git;a=blob;f=xenbus/xenbus.c;h=d72dc3a0f5d942bc2664c12588d707a791a34b7e;hb=HEAD#l376
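For reference, here is a minimal sketch of the two waits these wake-up
conditions serve, assuming the request pool exposes a wait queue
(xs_req_pool.waitq) and something like Unikraft's uk_waitq_wait_event();
the function names below are hypothetical and only illustrate the
conditions:

#include <uk/wait.h>

/* Sketch of the waiter Yuri describes: xs_request_get() only blocks
 * while the pool is full (num_live == REQID_MAP_SIZE); the
 * num_live == REQID_MAP_SIZE - 1 wake-up unblocks it as soon as a
 * slot is released.
 */
static void xs_request_wait_slot(void)
{
	uk_waitq_wait_event(&xs_req_pool.waitq,
			    xs_req_pool.num_live < REQID_MAP_SIZE);
}

/* Sketch of the suspend-side waiter: before suspending the xenstore
 * thread, wait until every outstanding request has been answered,
 * which is what the num_live == 0 wake-up was meant to serve.
 */
static void xs_suspend_wait_requests(void)
{
	uk_waitq_wait_event(&xs_req_pool.waitq,
			    xs_req_pool.num_live == 0);
}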
Costin