
Re: [Xen-devel] [PATCH V5] x86 spinlock: Fix memory corruption on completing completions

Well, I regret I mentioned the lack of barrier after enter_slowpath ;)

On 02/15, Raghavendra K T wrote:
> @@ -46,7 +46,8 @@ static __always_inline bool static_key_false(struct static_key *key);
>  static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
>  {
> -     set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
> +     set_bit(0, (volatile unsigned long *)&lock->tickets.head);
> +     barrier();
>  }
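
For context, the flag set here is consumed a few lines later on the
waiter's side; roughly (a from-memory sketch of the shape of the pv
lock-spinning code, kvm_lock_spinning() and friends -- names and details
are illustrative, not copied from the tree):

static void lock_spinning(arch_spinlock_t *lock, __ticket_t want)
{
	__ticket_t head;

	/* Tell the unlocker there is a blocked waiter to kick. */
	__ticket_enter_slowpath(lock);

	/*
	 * Recheck the lock: if it was released while we were setting
	 * the flag, don't halt.  The comparison has to ignore the
	 * SLOWPATH bit we've just set in head, hence __tickets_equal().
	 * This load must not be reordered before the set_bit() above,
	 * otherwise we can miss the release while the unlocker misses
	 * the flag.
	 */
	head = READ_ONCE(lock->tickets.head);
	if (__tickets_equal(head, want))
		return;

	halt();		/* wait for __ticket_unlock_kick() */
}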

Because this barrier() looks really confusing.

Firstly, it is equally unneeded on x86: set_bit() is a locked RMW there and
already acts as a full (and thus compiler) barrier. At the same time, a pure
compiler barrier() cannot help where ordering actually matters: we need a
memory barrier between set_bit(SLOWPATH) and the READ_ONCE(head) that follows
it in the caller, to avoid the race with spin_unlock(), and barrier() does
not order anything between CPUs.
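
To spell out the race I mean (my reconstruction of the interleaving):

    CPU 0 (waiter, ticket == want)          CPU 1 (unlocker)
    ------------------------------          ----------------
    set_bit(SLOWPATH, &head)
      (store still in the store buffer)
    reads head: lock still held, will halt
                                            add head, TICKET_LOCK_INC
                                            SLOWPATH not visible -> no kick
    set_bit() becomes globally visible
    halt()  /* nobody will ever kick us */

Once the waiter halts after the unlocker has already checked the flag,
nothing will ever wake it up.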

So I think you should replace it with smp_mb__after_atomic() or remove it.
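
IOW, something like below (untested, just to show what I mean):

static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
{
	set_bit(0, (volatile unsigned long *)&lock->tickets.head);
	/*
	 * Order the flag store against the subsequent read of
	 * tickets.head in the caller.  On x86 set_bit() is a locked
	 * RMW and already a full barrier, so this is only a compiler
	 * barrier here, but it documents the ordering we rely on.
	 */
	smp_mb__after_atomic();
}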

Other than that I believe this version is correct. So I won't insist; this
is cosmetic, after all.

