
Re: [Xen-devel] [PATCH V2] x86 spinlock: Fix memory corruption on completing completions

Ccing Davidlohr (sorry that I got confused with a similar address in the cc list).

On 02/09/2015 08:44 PM, Oleg Nesterov wrote:
On 02/09, Raghavendra K T wrote:

+static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock)
+{
+       arch_spinlock_t old, new;
+       __ticket_t diff;
+
+       old.tickets = READ_ONCE(lock->tickets);
+       diff = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - old.tickets.head;
+
+       /* try to clear slowpath flag when there are no contenders */
+       if ((old.tickets.tail & TICKET_SLOWPATH_FLAG) &&
+               (diff == TICKET_LOCK_INC)) {
+               new = old;
+               new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;
+               cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
+       }
+}

Can't we simplify it? We own .head, and we already know it. We only need
to clear TICKET_SLOWPATH_FLAG in .tail atomically?


        static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
                                                              __ticket_t head)
        {
                __ticket_t old_tail, new_tail;

                new_tail = head + TICKET_LOCK_INC;
                old_tail = new_tail | TICKET_SLOWPATH_FLAG;

                if (READ_ONCE(lock->tickets.tail) == old_tail)
                        cmpxchg(&lock->tickets.tail, old_tail, new_tail);
        }


        -       __ticket_check_and_clear_slowpath(lock);
        +       __ticket_check_and_clear_slowpath(lock, inc.tail);

Or did I miss something?

Thanks. Perfect, I'll update with this change. (Jeremy had hinted at something similar.)

And I think it would be better to avoid ifdef(CONFIG_PARAVIRT_SPINLOCKS),
we can just do:
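(Something along these lines, a sketch rather than the exact hunk, assuming the call
site is in arch_spin_lock() and that TICKET_SLOWPATH_FLAG is a compile-time constant 0
when CONFIG_PARAVIRT_SPINLOCKS is off, so the compiler drops the whole block:)

        /*
         * TICKET_SLOWPATH_FLAG is 0 without CONFIG_PARAVIRT_SPINLOCKS, so
         * this test is compile-time false and the call is elided entirely.
         */
        if (TICKET_SLOWPATH_FLAG)
                __ticket_check_and_clear_slowpath(lock, inc.tail);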


While at it, I think the current arch_spin_unlock() has a similar structure
and I wanted to clean it up. Considering we define
TICKET_SLOWPATH_FLAG as 0 or 1, I think the compiler would be smart enough
to generate appropriate code and we could avoid the #ifdef.
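
(For reference, a rough sketch of the definitions this relies on, along the lines
of what arch/x86/include/asm/spinlock_types.h had at the time:)

        #ifdef CONFIG_PARAVIRT_SPINLOCKS
        #define __TICKET_LOCK_INC       2
        #define TICKET_SLOWPATH_FLAG    ((__ticket_t)1)
        #else
        #define __TICKET_LOCK_INC       1
        #define TICKET_SLOWPATH_FLAG    ((__ticket_t)0)
        #endif

With TICKET_SLOWPATH_FLAG fixed at ((__ticket_t)0), an "if (tail & TICKET_SLOWPATH_FLAG)"
style test in the unlock path is compile-time false, so the compiler generates the same
code as the plain ticket-lock version without needing an #ifdef around it.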
