[Xen-devel] [PATCH WIP 5/6] xen: fix unmask_evtchn for HVM guests
When unmask_evtchn is called, if we already have an event pending, we
just set evtchn_pending_sel, waiting for the irq to be re-enabled.
That works because x86 PV guests overwrite the irq_enable pvop with
xen_irq_enable_direct, which also handles pending events. However, x86
HVM guests and ARM guests do not change, or do not have, the
irq_enable pvop, so this scheme doesn't work properly for them.

Considering that having a pending irq when unmask_evtchn is called is
not very common, and that it is better to keep the native_irq_enable
implementation for HVM and ARM guests, the best thing to do is simply
to use the EVTCHNOP_unmask hypercall (Xen re-injects pending events in
response).

Since this patch fixes a bug on x86 for current PV on HVM guests, I'll
resend it separately.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
CC: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
---
 drivers/xen/events.c |    7 +++++--
 1 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index eae0d0b..0132505 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -372,8 +372,11 @@ static void unmask_evtchn(int port)
 
 	BUG_ON(!irqs_disabled());
 
-	/* Slow path (hypercall) if this is a non-local port. */
-	if (unlikely(cpu != cpu_from_evtchn(port))) {
+	/* Slow path (hypercall) if this is a non-local port or if this is
+	 * an hvm domain and an event is pending (hvm domains don't have
+	 * their own implementation of irq_enable). */
+	if (unlikely((cpu != cpu_from_evtchn(port)) ||
+			(xen_hvm_domain() && sync_test_bit(port, &s->evtchn_pending[0])))) {
 		struct evtchn_unmask unmask = { .port = port };
 		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
 	} else {
-- 
1.7.2.5
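
For reference, the fast path above is only safe on x86 PV because the
guest's irq_enable pvop replays pending events when interrupts are
re-enabled. A minimal sketch of that PV path, modeled on
xen_irq_enable from arch/x86/xen/irq.c (the C equivalent of
xen_irq_enable_direct; exact details vary by kernel version), looks
roughly like this:

	/* Sketch of the x86 PV irq_enable pvop: clear the per-vcpu
	 * upcall mask, then re-check for events that became pending
	 * while "interrupts" were off and force the event channel
	 * callback to run.  HVM and ARM guests keep the native
	 * irq_enable instead, so nothing replays an event left
	 * pending by unmask_evtchn -- hence the hypercall above. */
	static void xen_irq_enable(void)
	{
		struct vcpu_info *vcpu = this_cpu_read(xen_vcpu);

		vcpu->evtchn_upcall_mask = 0;	/* allow event delivery */

		barrier();	/* unmask before checking for pending events */

		if (unlikely(vcpu->evtchn_upcall_pending))
			xen_force_evtchn_callback();
	}

Since HVM and ARM guests never run anything like this on irq enable,
taking the EVTCHNOP_unmask slow path when an event is pending is what
gets Xen to re-inject the event.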