
[Xen-devel] [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking



From: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>
---
 arch/x86/xen/spinlock.c |   28 +++++++++++++++++++++++++---
 1 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2ed5d05..7b89439 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,26 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 
        start = spin_time_start();
 
-       /* Make sure interrupts are disabled to ensure that these
-          per-cpu values are not overwritten. */
+       /* Make sure an interrupt handler can't upset things in a
+          partially setup state. */
        local_irq_save(flags);
 
+       /*
+        * We don't really care if we're overwriting some other
+        * (lock,want) pair, as that would mean that we're currently
+        * in an interrupt context, and the outer context had
+        * interrupts enabled.  That has already kicked the VCPU out
+        * of xen_poll_irq(), so it will just return spuriously and
+        * retry with newly setup (lock,want).
+        *
+        * The ordering protocol on this is that the "lock" pointer
+        * may only be set non-NULL if the "want" ticket is correct.
+        * If we're updating "want", we must first clear "lock".
+        */
+       w->lock = NULL;
+       smp_wmb();
        w->want = want;
+       smp_wmb();
        w->lock = lock;
 
       /* This uses set_bit, which is atomic and therefore a barrier */
@@ -135,10 +150,15 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
                goto out;
        }
 
+       /* Allow interrupts while blocked */
+       local_irq_restore(flags);
+
        /* Block until irq becomes pending (or perhaps a spurious wakeup) */
        xen_poll_irq(irq);
        ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+       local_irq_save(flags);
+
        kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +180,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
        for_each_cpu(cpu, &waiting_cpus) {
                const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-               if (w->lock == lock && w->want == next) {
+               /* Make sure we read lock before want */
+               if (ACCESS_ONCE(w->lock) == lock &&
+                   ACCESS_ONCE(w->want) == next) {
                        ADD_STATS(released_slow_kicked, 1);
                        xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
                        break;
-- 
1.7.6
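For readers following the ordering protocol in the comment above, here is a minimal single-threaded userspace sketch of the (lock, want) publication handshake. The names lock_waiting, publish_wait and should_kick are illustrative stand-ins, not the kernel API, and __atomic_thread_fence(__ATOMIC_SEQ_CST) substitutes for the kernel's smp_wmb()/smp_rmb():

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical per-cpu waiting slot, modelled on struct xen_lock_waiting. */
struct lock_waiting {
        void *lock;     /* non-NULL only while "want" is valid */
        unsigned want;  /* ticket this CPU is waiting for */
};

/* Waiter side, mirroring xen_lock_spinning(): clear "lock" first so a
 * concurrent kicker can never pair a stale pointer with a new ticket,
 * then publish the pointer only after "want" is in place. */
static void publish_wait(struct lock_waiting *w, void *lock, unsigned want)
{
        w->lock = NULL;
        __atomic_thread_fence(__ATOMIC_SEQ_CST);        /* smp_wmb() */
        w->want = want;
        __atomic_thread_fence(__ATOMIC_SEQ_CST);        /* smp_wmb() */
        w->lock = lock;
}

/* Kicker side, mirroring xen_unlock_kick(): read "lock" before "want",
 * so a non-NULL pointer guarantees the ticket read afterwards is the
 * one it was published with. */
static int should_kick(const struct lock_waiting *w,
                       const void *lock, unsigned next)
{
        const void *l = w->lock;
        __atomic_thread_fence(__ATOMIC_SEQ_CST);
        return l == lock && w->want == next;
}
```

Under this protocol, a kicker that observes a stale or NULL "lock" simply skips the CPU; the worst case is a missed kick that the waiter resolves as a spurious return from xen_poll_irq().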

