Re: [Xen-devel] [PATCH v10 09/19] qspinlock: Prepare for unfair lock support
- To: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
- From: Waiman Long <waiman.long@xxxxxx>
- Date: Fri, 09 May 2014 21:19:32 -0400
- Cc: linux-arch@xxxxxxxxxxxxxxx, Raghavendra K T <raghavendra.kt@xxxxxxxxxxxxxxxxxx>, Oleg Nesterov <oleg@xxxxxxxxxx>, Gleb Natapov <gleb@xxxxxxxxxx>, kvm@xxxxxxxxxxxxxxx, Scott J Norton <scott.norton@xxxxxx>, x86@xxxxxxxxxx, Paolo Bonzini <paolo.bonzini@xxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx, Ingo Molnar <mingo@xxxxxxxxxx>, Chegu Vinod <chegu_vinod@xxxxxx>, David Vrabel <david.vrabel@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
- Delivery-date: Sat, 10 May 2014 01:20:00 +0000
- List-id: Xen developer discussion <xen-devel.lists.xen.org>
On 05/08/2014 03:06 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
>> If unfair locking is supported, the lock acquisition loop at the end of
>> the queue_spin_lock_slowpath() function may need to detect that the
>> lock can be stolen. Code is added for stolen-lock detection.
>> A new qhead macro is also defined as a shorthand for mcs.locked.
>
> NAK, unfair should be a pure test-and-set lock.
I have performance data showing that a simple test-and-set lock does not
scale well. That is the primary reason for ditching the test-and-set lock
in favor of a more complicated scheme that scales better. It will also be
hard to make the unfair test-and-set lock code coexist nicely with the PV
spinlock code.
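
For illustration, a pure test-and-set lock looks roughly like the sketch
below (illustrative C11-style code, not code from this series). Every
waiter keeps issuing atomic exchanges on the same cache line, so under
contention the line bounces between CPUs, which is where the poor
scalability comes from:

#include <stdatomic.h>

struct ts_lock {
	atomic_int locked;		/* 0 = free, 1 = held */
};

static inline void ts_lock_acquire(struct ts_lock *lock)
{
	/* Spin until the previous value was 0, i.e. we took the lock. */
	while (atomic_exchange_explicit(&lock->locked, 1,
					memory_order_acquire))
		;			/* contended: retry the exchange */
}

static inline void ts_lock_release(struct ts_lock *lock)
{
	atomic_store_explicit(&lock->locked, 0, memory_order_release);
}

Even a test-and-test-and-set variant only reduces the spinning traffic;
each release still invalidates every waiter's copy of the cache line, so
the hand-off cost grows with the number of waiters.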
>>  /**
>>   * get_qlock - Set the lock bit and own the lock
>> - * @lock: Pointer to queue spinlock structure
>> + * @lock : Pointer to queue spinlock structure
>> + * Return: 1 if lock acquired, 0 otherwise
>>   *
>>   * This routine should only be called when the caller is the only one
>>   * entitled to acquire the lock.
>>   */
>> -static __always_inline void get_qlock(struct qspinlock *lock)
>> +static __always_inline int get_qlock(struct qspinlock *lock)
>>  {
>>  	struct __qspinlock *l = (void *)lock;
>>
>>  	barrier();
>>  	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>>  	barrier();
>> +	return 1;
>>  }
>
> and here you make a horribly named function more horrible;
> try_set_locked() is what it is now.
Will do.
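
For the unfair case, the renamed helper will presumably also have to
report failure, since a plain store is no longer enough once the lock can
be stolen. A rough sketch of what that could look like (hypothetical
code, not from this series, assuming a byte-wide cmpxchg() is available
on the architecture):

static __always_inline int try_set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	/*
	 * Atomically claim the lock byte (0 -> _Q_LOCKED_VAL), failing
	 * if an unfair waiter stole the lock in the meantime.
	 */
	return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
}

The cmpxchg() would provide the ordering that the barrier() pair gives in
the store-only version, and the return value lets the slowpath loop
detect a stolen lock and keep waiting instead of proceeding.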
-Longman
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel