Re: [Xen-devel] [RFC PATCHv1] Prototype ticket locks
>>> On 03.02.15 at 12:50, <david.vrabel@xxxxxxxxxx> wrote:
> Use ticket locks for spin locks instead of the current byte locks.
> Ticket locks are fair. This is an important property for hypervisor
> locks.
Isn't Linux moving to queue locks? Shouldn't we skip the intermediate
step then, not least because ticket locks will cause problems when Xen
itself gets run virtualized?
> This prototype has the following limitations:
> - Only 256 CPUs (8 bit tickets).
> - Broken lock profiling.
> - x86 only.
>
> Note that spin_lock_irq() and spin_lock_irqsave() now spin with irqs
> disabled (previously, they would spin with irqs enabled if possible).
> This is required to prevent deadlocks when the irq handler tries to
> take the same lock with a higher ticket.
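For reference, a minimal sketch of what such an 8-bit ticket lock looks
like (hypothetical names and layout, not the actual patch); the 8-bit
head/tail fields are where the 256-CPU limit above comes from:

    #include <stdint.h>

    typedef union {
        volatile uint16_t head_tail;   /* both tickets in one 16-bit word */
        struct {
            volatile uint8_t head;     /* ticket currently being served */
            volatile uint8_t tail;     /* next ticket to hand out */
        };
    } ticket_lock_t;

    static inline void ticket_lock(ticket_lock_t *lock)
    {
        uint8_t ticket = 1;

        /* Atomically take the next ticket: xadd leaves the old tail in %0. */
        asm volatile ( "lock; xaddb %0, %1"
                       : "+q" (ticket), "+m" (lock->tail)
                       : : "memory" );

        /* Spin until our number is served.  An interrupt handler trying to
         * take the same lock here would draw a later ticket and could never
         * be served before us - hence spinning with IRQs disabled. */
        while ( lock->head != ticket )
            asm volatile ( "pause" );
    }

    static inline void ticket_unlock(ticket_lock_t *lock)
    {
        /* Only the owner writes head, so a plain increment is enough; the
         * compiler barrier keeps the critical section ahead of it. */
        asm volatile ( "" : : : "memory" );
        lock->head++;
    }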
Spinning with IRQs disabled is another pretty undesirable property
(although it's not a strict implication of using ticket locks - one can
make them work with interrupts getting properly re-enabled while
spinning).
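One way such a scheme could look (just a sketch of the general idea, not
a claim about how Linux or anyone else does it): only draw a ticket once
the lock looks free, so the wait loop can run with interrupts enabled
without an interrupt handler queueing up behind a ticket we already
hold.  local_irq_disable()/local_irq_enable() are Xen's usual helpers;
ticket_lock_t is the layout sketched above.

    static inline void ticket_lock_irq(ticket_lock_t *lock)
    {
        local_irq_disable();

        for ( ; ; )
        {
            uint8_t tail = lock->tail;

            /* Only join the queue when the lock looks free, so we either
             * take it immediately or haven't drawn a ticket at all. */
            if ( lock->head == tail &&
                 __atomic_compare_exchange_n(&lock->tail, &tail,
                                             (uint8_t)(tail + 1),
                                             0 /* strong */,
                                             __ATOMIC_ACQUIRE,
                                             __ATOMIC_RELAXED) )
                return;              /* our ticket == head: lock is held */

            /* Contended: wait with interrupts enabled, then retry. */
            local_irq_enable();
            while ( lock->head != lock->tail )
                asm volatile ( "pause" );
            local_irq_disable();
        }
    }

The trade-off is that waiters in this slow path hold no ticket while
interrupts are enabled, so strict FIFO ordering is only established at
the moment of acquisition.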
It was for these reasons that so far I didn't go and steal Linux's
ticket lock implementation (and I think you should at least mention
the origin in the description).
> --- a/xen/include/asm-x86/spinlock.h
> +++ b/xen/include/asm-x86/spinlock.h
> @@ -6,29 +6,67 @@
> #include <asm/atomic.h>
>
> typedef struct {
> - volatile s16 lock;
> + volatile u16 lock;
> } raw_spinlock_t;
With this, you can't ...
> +static always_inline int _raw_spin_is_locked(raw_spinlock_t *lock)
> +{
> + int tmp = *(volatile signed int *)(&lock->lock);
... do this. One of the many cases where using casts hides bugs.
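For illustration, the same check done at the field's actual width - the
32-bit cast above reads two bytes beyond the 16-bit lock word - using
the layout sketched earlier (again hypothetical, not the patch):

    static inline int ticket_is_locked(ticket_lock_t *lock)
    {
        /* Read both tickets in one naturally sized access; on
         * little-endian x86 the low byte is head, the high byte tail. */
        uint16_t raw = lock->head_tail;

        /* Locked whenever a ticket has been handed out but not yet served. */
        return (uint8_t)raw != (uint8_t)(raw >> 8);
    }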
As to the rest of the change - if we went the ticket lock route, I
think we should use Linux's newer, asm()-less implementation,
which would hopefully also make it easier to use the same code
for ARM.
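To illustrate the asm()-less direction (a sketch using compiler atomics,
not Linux's actual code), here is the same lock without any inline
assembly, which is what would make sharing the code with ARM plausible:

    /* Same ticket_lock_t as above, but no inline assembly: the compiler
     * atomics supply the ordering, so this could build unchanged on ARM. */
    static inline void ticket_lock_generic(ticket_lock_t *lock)
    {
        uint8_t ticket = __atomic_fetch_add(&lock->tail, 1, __ATOMIC_ACQUIRE);

        /* Wait for our ticket to come up; a cpu_relax() equivalent would
         * normally go in the loop body. */
        while ( __atomic_load_n(&lock->head, __ATOMIC_ACQUIRE) != ticket )
            ;
    }

    static inline void ticket_unlock_generic(ticket_lock_t *lock)
    {
        /* Release store orders the critical section before the handover. */
        __atomic_store_n(&lock->head, (uint8_t)(lock->head + 1),
                         __ATOMIC_RELEASE);
    }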
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel