Re: [Xen-devel] [PATCH 08/45] xen: arm64: spinlocks
At 15:56 +0000 on 23 Jan (1358956574), Ian Campbell wrote:

Comparing the existing arm32 locks:
> +static always_inline void _raw_spin_unlock(raw_spinlock_t *lock)
> +{
> +    ASSERT(_raw_spin_is_locked(lock));
> +
> +    smp_mb();
> +
> +    __asm__ __volatile__(
> +"   str     %1, [%0]\n"
> +    :
> +    : "r" (&lock->lock), "r" (0)
> +    : "cc");
> +
> +    dsb_sev();
> +}
> +
> +static always_inline int _raw_spin_trylock(raw_spinlock_t *lock)
> +{
> +    unsigned long tmp;
> +
> +    __asm__ __volatile__(
> +"   ldrex   %0, [%1]\n"
> +"   teq     %0, #0\n"
> +"   strexeq %0, %2, [%1]"
> +    : "=&r" (tmp)
> +    : "r" (&lock->lock), "r" (1)
> +    : "cc");
> +
> +    if (tmp == 0) {
> +        smp_mb();
> +        return 1;
> +    } else {
> +        return 0;
> +    }
> +}
with the new arm64 ones:
> +static always_inline void _raw_spin_unlock(raw_spinlock_t *lock)
> +{
> +    ASSERT(_raw_spin_is_locked(lock));
> +
> +    asm volatile(
> +        "   stlr    %w1, [%0]\n"
> +        : : "r" (&lock->lock), "r" (0) : "memory");
> +}
> +
> +static always_inline int _raw_spin_trylock(raw_spinlock_t *lock)
> +{
> +    unsigned int tmp;
> +
> +    asm volatile(
> +        "   ldaxr   %w0, [%1]\n"
> +        "   cbnz    %w0, 1f\n"
> +        "   stxr    %w0, %w2, [%1]\n"
> +        "1:\n"
> +        : "=&r" (tmp)
> +        : "r" (&lock->lock), "r" (1)
> +        : "memory");
> +
> +    return !tmp;
> +}
The 32-bit ones have a scattering of DSBs and SEVs that aren't there in
64-bit. The DSBs at least seem useful. Not sure about SEV - presumably
that's useful if the slow path has a WFE in it?
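(For context on the SEV question: the SEV in dsb_sev() exists to wake CPUs that
have parked themselves with WFE while waiting for the lock. A minimal sketch of
what such an arm32 lock slow path looks like, modelled on the Linux-derived
locking code of that era rather than quoted from this patch, would be:

/*
 * Illustrative sketch only (patterned on arm32 Linux-style spinlocks,
 * not taken from the patch under review): the contended path parks the
 * CPU in WFE, so the SEV issued by dsb_sev() in the unlock path is what
 * wakes it up again.
 */
static always_inline void _raw_spin_lock(raw_spinlock_t *lock)
{
    unsigned long tmp;

    __asm__ __volatile__(
"1: ldrex   %0, [%1]\n"         /* tmp = lock->lock (exclusive load)   */
"   teq     %0, #0\n"           /* already held?                       */
"   wfene\n"                    /* yes: sleep until an event (SEV)     */
"   strexeq %0, %2, [%1]\n"     /* no: try to claim it                 */
"   teqeq   %0, #0\n"
"   bne     1b"                 /* store-exclusive failed: retry       */
    : "=&r" (tmp)
    : "r" (&lock->lock), "r" (1)
    : "cc");

    smp_mb();
}

If no lock path ever executes WFE, then the SEV, and the DSB needed to make the
store visible before the event is signalled, could arguably be dropped, which I
take to be the question here.)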
Tim.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel