
Re: [Xen-devel] [PATCH v7 6/9] spinlock: Introduce spin_lock_cb()



On 08/14/2017 10:42 AM, Julien Grall wrote:
>
>
> On 14/08/17 15:39, Boris Ostrovsky wrote:
>>
>>>>
>>>> +#define spin_lock_kick(l)                       \
>>>> +({                                              \
>>>> +    smp_mb();                                   \
>>>
>>> arch_lock_signal() has already a barrier for ARM. So we have a double
>>> barrier now.
>>>
>>> However, the barrier is slightly weaker (smp_wmb()). I am not sure why
>>> you need to use a stronger barrier here. What you care is the write to
>>> be done before signaling, read does not much matter. Did I miss
>>> anything?
>>
>> Yes, smp_wmb() should be sufficient.
>>
>> Should I then add arch_lock_signal_wmb() --- defined as
>> arch_lock_signal() for ARM and smp_wmb() for x86?
>
> I am not an x86 expert. Do you know why the barrier is not in
> arch_lock_signal() today?

Possibly because _spin_unlock(), which is the only place where
arch_lock_signal() is used, already has arch_lock_release_barrier() (and
preempt_enable() has one too). This guarantees that the incremented
ticket head is seen only after all preceding accesses have completed.
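
FWIW, roughly what I had in mind for arch_lock_signal_wmb() -- just a
sketch, names and exact placement not final:

/* asm-arm/spinlock.h: arch_lock_signal() already has the barrier */
#define arch_lock_signal_wmb()    arch_lock_signal()

/* asm-x86/spinlock.h: arch_lock_signal() is a no-op, so add the wmb */
#define arch_lock_signal_wmb()    \
    ({                            \
        smp_wmb();                \
        arch_lock_signal();       \
    })

/* xen/spinlock.h: kick then relies on the per-arch barrier */
#define spin_lock_kick(l)         \
    ({                            \
        arch_lock_signal_wmb();   \
    })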



OTOH,

>
>>
>>
>> -boris
>>
>>>
>>> Cheers,
>>>
>>>> +    arch_lock_signal();                         \
>>>> +})
>>>> +
>>>>  /* Ensure a lock is quiescent between two critical operations. */
>>>>  #define spin_barrier(l)               _spin_barrier(l)
>>>>
>>>>
>>>
>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
