Re: [Xen-devel] [PATCH] xen/x86: lock cacheline for add_sized()
On 06.08.2019 09:00, Juergen Gross wrote:
> add_sized() should use an atomic update of the memory word, as it is
> used by spin_unlock().
>
> With ticket locks unlocking needs to be atomic in order to avoid very
> rare races when someone tries to get the lock while the lock holder
> is just releasing it.

Considering the commit introducing the function (3c694aec08) explicitly
saying "The add is /not/ atomic" this needs more detail. The idea behind
not using a LOCKed access here was, iirc, that no-one else could ever
update the head pointer; someone trying to acquire the lock would only
write to the tail portion. But yes, I think I can see how this was
wrong: The arch_fetch_and_add() acts on head_tail after all, not just
tail.

> --- a/xen/include/asm-x86/atomic.h
> +++ b/xen/include/asm-x86/atomic.h
> @@ -21,7 +21,7 @@ static inline void name(volatile type *addr, type val) \
>  #define build_add_sized(name, size, type, reg) \
>  static inline void name(volatile type *addr, type val) \
>  { \
> -    asm volatile("add" size " %1,%0" \
> +    asm volatile("lock; add" size " %1,%0" \

I realize pre-existing code in our tree uses this not fully correct form
of specifying prefixes (there shouldn't really be a line separator
between prefix and main mnemonic, unless gas would refuse assembling the
result because of it wrongly thinking the prefix was inapplicable to the
insn, like is e.g. the case here and there for the REP prefixes), but I
think we should try to avoid widening the issue. See e.g.
gnttab_clear_flags() where we already have no semicolon, and this has
gone fine since around 4.2. I seem to vaguely recall that Linux has been
using this construct to avoid build issues with some specific (Sun?)
assembler.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
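For context, the ticket-lock shape being discussed looks roughly like the
sketch below. This is a minimal, illustrative C version, not Xen's actual
code: the type, field and function names are made up and the asm
constraints are simplified. It shows the two operations the thread is
about: the acquire path does a LOCKed fetch-and-add on the whole 32-bit
head_tail word, while the release path only adds to the 16-bit head half;
the release-side add is written here with the LOCK prefix on the same
line as the mnemonic, i.e. without the ';' separator questioned above.

#include <stdint.h>

/* Simplified ticket-lock word: low 16 bits "head", high 16 bits "tail". */
typedef union {
    uint32_t head_tail;
    struct {
        uint16_t head;   /* ticket currently being served */
        uint16_t tail;   /* next ticket to hand out */
    };
} tickets_t;

/* LOCKed fetch-and-add on the full 32-bit word (atomic). */
static inline uint32_t fetch_and_add_u32(volatile uint32_t *p, uint32_t v)
{
    asm volatile ( "lock xaddl %0, %1"
                   : "+r" (v), "+m" (*p)
                   :
                   : "memory" );
    return v;
}

/* LOCKed 16-bit add, prefix and mnemonic on one line (no ';'). */
static inline void add_u16(volatile uint16_t *p, uint16_t v)
{
    asm volatile ( "lock addw %1, %0"
                   : "+m" (*p)
                   : "ri" (v)
                   : "memory" );
}

static void ticket_lock(volatile tickets_t *t)
{
    /* Atomically bump tail by one ticket and observe the old word. */
    tickets_t old = { .head_tail = fetch_and_add_u32(&t->head_tail, 1u << 16) };

    /* Wait until our ticket (the old tail) is being served. */
    while ( t->head != old.tail )
        __builtin_ia32_pause();
}

static void ticket_unlock(volatile tickets_t *t)
{
    /*
     * Only the head half is written here, but it lives in the same
     * word that waiters update with the LOCKed xadd above - which is
     * why the patch wants this add to be LOCKed as well.
     */
    add_u16(&t->head, 1);
}

The point conceded in the reply is visible here: the xadd reads and
writes back the entire head_tail word, not just the tail half, so the
head bytes are not in fact private to the lock holder.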