
[Xen-devel] [PATCH v2 1/4] xen/events: Clear cpu_evtchn_mask before resuming



When a guest is resumed, the hypervisor may change event channel
assignments. If this happens and the guest uses 2-level events, it
is possible for the interrupt to be claimed by the wrong VCPU since
cpu_evtchn_mask bits may be stale. This can happen even though
evtchn_2l_bind_to_cpu() attempts to clear old bits: the irq_info that
is passed in is not necessarily the original one (from pre-migration
times) but instead is freshly allocated during resume, so any
information about which CPU the channel was bound to is lost.

Thus we should clear the mask during resume.
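
As a rough standalone illustration of the problem (plain user-space C,
not kernel code; the names only mirror events_2l.c), the clear-then-set
done at bind time only helps when info->cpu still records the
pre-migration binding:

    #include <stdio.h>

    #define NR_CPUS 4

    /* One word per CPU is enough for this toy model. */
    static unsigned long cpu_evtchn_mask[NR_CPUS];

    struct irq_info {
            int evtchn;
            int cpu;
    };

    /* Rough analogue of evtchn_2l_bind_to_cpu(): clear the bit for the
     * CPU recorded in irq_info, then set it for the new CPU. */
    static void bind_to_cpu(struct irq_info *info, int cpu)
    {
            cpu_evtchn_mask[info->cpu] &= ~(1UL << info->evtchn);
            cpu_evtchn_mask[cpu] |= 1UL << info->evtchn;
            info->cpu = cpu;
    }

    int main(void)
    {
            /* Before migration channel 5 was delivered to CPU 2. */
            cpu_evtchn_mask[2] |= 1UL << 5;

            /* After resume irq_info is freshly allocated, so info->cpu
             * is 0 and the stale bit on CPU 2 is never cleared. */
            struct irq_info info = { .evtchn = 5, .cpu = 0 };
            bind_to_cpu(&info, 1);

            printf("stale bit still set on cpu2: %lu\n",
                   (cpu_evtchn_mask[2] >> 5) & 1);
            return 0;
    }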

We also need to make sure that the bits for the xenstore and console
channels are set when these two subsystems are resumed. While
rebind_evtchn_irq() (which is invoked for both of them on resume)
calls irq_set_affinity(), the latter will in fact postpone setting
the affinity until the interrupt is handled. But because
cpu_evtchn_mask will have the bits for these two channels cleared, we
will never be able to take that interrupt in the first place.

With that in mind, we need to bind those two channels explicitly in
rebind_evtchn_irq(). We keep the irq_set_affinity() call so that we
still pass through the generic irq affinity code later, in case
something needs to be updated there as well.
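
A toy model of that ordering problem (again plain user-space C rather
than the actual kernel paths) shows why the deferred affinity update
cannot recover on its own and why the explicit bind breaks the cycle:

    #include <stdio.h>
    #include <stdbool.h>

    static bool cpu_mask_bit;     /* cpu_evtchn_mask bit for the channel */
    static bool affinity_pending; /* move deferred by generic irq code   */

    /* Models irq_set_affinity() without IRQ_MOVE_PCNTXT: the move is
     * only performed the next time the interrupt is handled. */
    static void set_affinity_deferred(void)
    {
            affinity_pending = true;
    }

    /* Models the 2-level event loop: a channel whose mask bit is clear
     * is never selected, so its interrupt never fires. */
    static bool can_take_interrupt(void)
    {
            return cpu_mask_bit;
    }

    int main(void)
    {
            cpu_mask_bit = false;    /* cleared by evtchn_2l_resume() */
            set_affinity_deferred(); /* would set the bit ... later   */

            if (affinity_pending && !can_take_interrupt())
                    printf("stuck: the deferred move needs an interrupt "
                           "that cannot arrive\n");

            /* Binding the channel explicitly (EVTCHNOP_bind_vcpu plus
             * bind_evtchn_to_cpu(), as in this patch) sets the mask
             * bit up front and breaks the cycle. */
            cpu_mask_bit = true;
            printf("after explicit bind, deliverable: %d\n",
                   can_take_interrupt());
            return 0;
    }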

(Also replace cpumask_of(0) with cpumask_of(info->cpu) in
rebind_evtchn_irq(): info->cpu will have been set to zero by the
preceding xen_irq_info_evtchn_setup(), so the behavior is unchanged.)

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Reported-by: Annie Li <annie.li@xxxxxxxxxx>
---

Changes in v2:
* Don't use IRQ_MOVE_PCNTXT; bind channels to VCPUs explicitly in
  rebind_evtchn_irq()

 drivers/xen/events/events_2l.c   |   10 ++++++++++
 drivers/xen/events/events_base.c |   13 +++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index 5db43fc..7dd4631 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -345,6 +345,15 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
        return IRQ_HANDLED;
 }
 
+static void evtchn_2l_resume(void)
+{
+       int i;
+
+       for_each_online_cpu(i)
+               memset(per_cpu(cpu_evtchn_mask, i), 0, sizeof(xen_ulong_t) *
+                               EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
        .max_channels      = evtchn_2l_max_channels,
        .nr_channels       = evtchn_2l_max_channels,
@@ -356,6 +365,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
        .mask              = evtchn_2l_mask,
        .unmask            = evtchn_2l_unmask,
        .handle_events     = evtchn_2l_handle_events,
+       .resume            = evtchn_2l_resume,
 };
 
 void __init xen_evtchn_2l_init(void)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 70fba97..26f372a 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1259,6 +1259,7 @@ EXPORT_SYMBOL_GPL(xen_hvm_evtchn_do_upcall);
 void rebind_evtchn_irq(int evtchn, int irq)
 {
        struct irq_info *info = info_for_irq(irq);
+       struct evtchn_bind_vcpu bind_vcpu;
 
        if (WARN_ON(!info))
                return;
@@ -1279,8 +1280,16 @@ void rebind_evtchn_irq(int evtchn, int irq)
 
        mutex_unlock(&irq_mapping_update_lock);
 
-       /* new event channels are always bound to cpu 0 */
-       irq_set_affinity(irq, cpumask_of(0));
+       bind_vcpu.port = evtchn;
+       bind_vcpu.vcpu = info->cpu;
+       if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) == 0)
+               bind_evtchn_to_cpu(evtchn, info->cpu);
+       else
+               pr_warn("Failed binding port %d to cpu %d\n",
+                       evtchn, info->cpu);
+
+       /* This will be deferred until interrupt is processed */
+       irq_set_affinity(irq, cpumask_of(info->cpu));
 
        /* Unmask the event channel. */
        enable_irq(irq);
-- 
1.7.1

