
Re: [Xen-devel] Xen PVM: Strange lockups when running PostgreSQL load

On 18.10.2012 14:43, Stefan Bader wrote:
>> Obviously when this is an acquire not disabling interrupts, and
>> an interrupt comes in while in the poll hypercall (or about to go
>> there, or just having come back from one).
>> Jan
> Obviously. ;) Ok, so my thinking there was ok and it's one level deep max. At
> some point staring at things I start to question my sanity.
> A wild thought would be whether in that case the interrupted spinlock may miss
> a wakeup forever when the unlocker can only check for the toplevel. Hm, but
> that should be easy to rule out by just adding an error to spin_unlock_slow
> when it fails to find anything...
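
(For reference, the check I had in mind there would be roughly the
following -- an untested sketch based on my reading of
xen_spin_unlock_slow(); the "kicked" bookkeeping and the WARN_ONCE are
the only additions:)

static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
{
	int cpu;
	bool kicked = false;

	ADD_STATS(released_slow, 1);

	for_each_online_cpu(cpu) {
		/* only the toplevel lock of each cpu is visible here */
		if (per_cpu(lock_spinners, cpu) == xl) {
			ADD_STATS(released_slow_kicked, 1);
			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
			kicked = true;
		}
	}

	/* a spinner hidden one nesting level down would end up here */
	WARN_ONCE(!kicked, "xen: spin_unlock_slow found no spinner to kick\n");
}
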
Actually I begin to suspect that I just overlooked the most obvious thing.
Provoking question: are we sure we are on the same page about the purpose of
the spin_lock_flags variant of the pv lock ops interface?

I begin to suspect that it really is not there to give implementations a chance
to re-enable interrupts. What it should be used for instead, I am not sure. In
any case, all the other implementations more or less ignore the flags and map
themselves back to a flags-ignorant version of spinlock.
Also, I believe the only high-level function that ends up passing any flags
down is spin_lock_irqsave, and I am pretty sure that one expects interrupts to
stay disabled.
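
Paraphrasing from memory (so take the exact macro bodies with a grain of
salt), the generic side reads roughly like:

/*
 * Conceptually, spin_lock_irqsave() disables interrupts *before*
 * taking the lock, and nothing underneath is supposed to turn them
 * back on:
 */
#define spin_lock_irqsave(lock, flags)			\
	do {						\
		raw_local_irq_save(flags);		\
		arch_spin_lock_flags(lock, flags);	\
	} while (0)

/* ...and the generic (non-pv) flags variant simply ignores the flags: */
static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
						 unsigned long flags)
{
	arch_spin_lock(lock);
}

So a pv implementation that uses the flags as a license to re-enable
interrupts is the odd one out.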

So I tried the approach below, and it survives the previously breaking test
case much longer than anything I tried before.
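
To spell out the failure mode I suspect the old behaviour opened up
(hypothetical timeline, pieced together from the above, not verified
against a trace):

/*
 * CPU0 (vcpu in the slow path)        CPU1 (unlocker)
 * -----------------------------       ---------------
 * xen_spin_lock_slow(lockA)
 *   spinning_lock(): lock_spinners = A
 *   old code re-enables interrupts
 *   -> irq handler runs and takes
 *      lockB via the slow path:
 *      spinning_lock(): lock_spinners = B
 *      (A is now only in prev,
 *       one level down)
 *                                     xen_spin_unlock_slow(lockA)
 *                                       scans lock_spinners of all
 *                                       cpus, sees B but not A
 *                                       -> nobody gets kicked
 *      handler releases lockB, returns
 *   unspinning_lock(): lock_spinners = A
 *   xen_poll_irq() blocks waiting for
 *   a kick that was already missed
 */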


From f2ebb6626f3e3a00932bf1f4f75265f826c7fba9 Mon Sep 17 00:00:00 2001
From: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
Date: Thu, 18 Oct 2012 21:40:37 +0200
Subject: [PATCH 1/2] xen/pv-spinlock: Never enable interrupts in

I am not sure what exactly the spin_lock_flags variant of the
pv-spinlocks (or even of the arch spinlocks) is meant to be used for.
But it certainly should not be taken as an invitation to enable irqs.

The only high-level variant that seems to end up there is the
spin_lock_irqsave one, and that is always used in a context that
expects interrupts to remain disabled.
The generic paravirt-spinlock code just maps the flags variant
to the one without flags, so do the same here and get rid of
all the code that is no longer needed.

This seems to resolve a weird locking issue seen under high i/o
database load on a PV Xen guest with multiple (8+ in local
experiments) CPUs. Then again, thinking about it a second time, it
looks like one of those "how did that ever work?" cases.

Signed-off-by: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
---
 arch/x86/xen/spinlock.c |   23 +++++------------------
 1 file changed, 5 insertions(+), 18 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 83e866d..3330a1d 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -24,7 +24,6 @@ static struct xen_spinlock_stats
 	u32 taken_slow_nested;
 	u32 taken_slow_pickup;
 	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
 
 	u64 released;
 	u32 released_slow;
@@ -197,7 +196,7 @@ static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
 	__this_cpu_write(lock_spinners, prev);
 }
 
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
+static noinline int xen_spin_lock_slow(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
 	struct xen_spinlock *prev;
@@ -218,8 +217,6 @@ static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
 	ADD_STATS(taken_slow_nested, prev != NULL);
 
 	do {
-		unsigned long flags;
-
 		/* clear pending */
 		xen_clear_irq_pending(irq);
 
@@ -239,12 +236,6 @@ static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
 			goto out;
 		}
 
-		flags = arch_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
-
 		/*
 		 * Block until irq becomes pending.  If we're
 		 * interrupted at this point (after the trylock but
@@ -256,8 +247,6 @@ static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
 		 */
 		xen_poll_irq(irq);
 
-		raw_local_irq_restore(flags);
-
 		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
 
@@ -270,7 +259,7 @@ out:
 	return ret;
 }
 
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
+static inline void __xen_spin_lock(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
 	unsigned timeout;
@@ -302,19 +291,19 @@ static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
 		spin_time_accum_spinning(start_spin_fast);
 
 	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
+			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock))));
 
 	spin_time_accum_total(start_spin);
 }
 
 static void xen_spin_lock(struct arch_spinlock *lock)
 {
-	__xen_spin_lock(lock, false);
+	__xen_spin_lock(lock);
 }
 
 static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
 {
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
+	__xen_spin_lock(lock);
 }
 
 static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
@@ -424,8 +413,6 @@ static int __init xen_spinlock_debugfs(void)
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
 
 	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow);

Attachment: 0001-xen-pv-spinlock-Never-enable-interrupts-in-xen_spin_.patch
Description: Text Data

Attachment: signature.asc
Description: OpenPGP digital signature
