Re: [Xen-devel] [PATCH 01/12] xen/events: avoid race with raising an event in unmask_evtchn()
On 20/03/13 11:00, Stefano Stabellini wrote:
> On Tue, 19 Mar 2013, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@xxxxxxxxxx>
>>
>> In unmask_evtchn(), when the mask bit is cleared after testing for
>> pending and the event becomes pending between the test and clear, then
>> the upcall will not become pending and the event may be lost or
>> delayed.
>>
>> Avoid this by always clearing the mask bit before checking for
>> pending.
>>
>> This fixes a regression introduced in 3.7 by
>> b5e579232d635b79a3da052964cb357ccda8d9ea (xen/events: fix
>> unmask_evtchn for PV on HVM guests) which reordered the clear mask and
>> check pending operations.
>
> The race you are trying to fix is real, but the fix you are proposing
> breaks PV on HVM and ARM guests again.
>
> From the description of b5e579232d635b79a3da052964cb357ccda8d9ea, it's
> clear that the reason to call EVTCHNOP_unmask is to trigger an event
> notification injection again.
> But if you clear the evtchn_mask bit *before* calling EVTCHNOP_unmask,
> EVTCHNOP_unmask won't reinject the event.
> From evtchn_unmask:
>
>     if ( test_and_clear_bit(port, &shared_info(d, evtchn_mask)) &&
>          test_bit          (port, &shared_info(d, evtchn_pending)) &&
>          !test_and_set_bit (port / BITS_PER_EVTCHN_WORD(d),
>                             &vcpu_info(v, evtchn_pending_sel)) )
>     {
>         vcpu_mark_events_pending(v);
>     }
>
> The first condition for reinjection would fail.
I missed this. The only way I can think of fixing this is to set the
mask bit before calling the unmask hypercall.
The FIFO-based ABI doesn't have this problem as it always tries to
relink the event whatever the previous state of the mask bit was.
David
>> Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
>> Cc: stable@xxxxxxxxxxxxxxx
>> Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>> ---
>> drivers/xen/events.c | 10 +++++-----
>> 1 files changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
>> index d17aa41..4bdd0a5 100644
>> --- a/drivers/xen/events.c
>> +++ b/drivers/xen/events.c
>> @@ -403,11 +403,13 @@ static void unmask_evtchn(int port)
>>  
>>          if (unlikely((cpu != cpu_from_evtchn(port))))
>>                  do_hypercall = 1;
>> -        else
>> +        else {
>> +                sync_clear_bit(port, BM(&s->evtchn_mask[0]));
>>                  evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0]));
>>  
>> -        if (unlikely(evtchn_pending && xen_hvm_domain()))
>> -                do_hypercall = 1;
>> +                if (unlikely(evtchn_pending && xen_hvm_domain()))
>> +                        do_hypercall = 1;
>> +        }
>>  
>>          /* Slow path (hypercall) if this is a non-local port or if this is
>>          * an hvm domain and an event is pending (hvm domains don't have
>> @@ -418,8 +420,6 @@ static void unmask_evtchn(int port)
>>          } else {
>>                  struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
>>  
>> -                sync_clear_bit(port, BM(&s->evtchn_mask[0]));
>> -
>>                  /*
>>                  * The following is basically the equivalent of
>>                  * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
>> --
>> 1.7.2.5
>>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel