Re: [Xen-devel] [PATCH V5] x86 spinlock: Fix memory corruption on completing completions
- To: David Vrabel <dvrabel@xxxxxxxxxx>
- From: Raghavendra K T <raghavendra.kt@xxxxxxxxxxxxxxxxxx>
- Date: Tue, 17 Feb 2015 15:33:23 +0530
- Cc: jeremy@xxxxxxxx, kvm@xxxxxxxxxxxxxxx, peterz@xxxxxxxxxxxxx, jasowang@xxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, paul.gortmaker@xxxxxxxxxxxxx, hpa@xxxxxxxxx, ak@xxxxxxxxxxxxxxx, a.ryabinin@xxxxxxxxxxx, x86@xxxxxxxxxx, borntraeger@xxxxxxxxxx, mingo@xxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, paulmck@xxxxxxxxxxxxxxxxxx, dave@xxxxxxxxxxxx, sasha.levin@xxxxxxxxxx, davej@xxxxxxxxxx, tglx@xxxxxxxxxxxxx, virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx, waiman.long@xxxxxx, oleg@xxxxxxxxxx, pbonzini@xxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx, torvalds@xxxxxxxxxxxxxxxxxxxx
- Delivery-date: Tue, 17 Feb 2015 10:02:05 +0000
- List-id: Xen developer discussion <xen-devel.lists.xen.org>
On 02/16/2015 10:17 PM, David Vrabel wrote:
On 15/02/15 17:30, Raghavendra K T wrote:
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -41,7 +41,7 @@ static u8 zero_stats;
 static inline void check_zero(void)
 {
         u8 ret;
-        u8 old = ACCESS_ONCE(zero_stats);
+        u8 old = READ_ONCE(zero_stats);
         if (unlikely(old)) {
                 ret = cmpxchg(&zero_stats, old, 0);
                 /* This ensures only one fellow resets the stat */
@@ -112,6 +112,7 @@ __visible void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
         struct xen_lock_waiting *w = this_cpu_ptr(&lock_waiting);
         int cpu = smp_processor_id();
         u64 start;
+        __ticket_t head;
         unsigned long flags;
 
         /* If kicker interrupts not initialized yet, just spin */
@@ -159,11 +160,15 @@ __visible void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
          */
         __ticket_enter_slowpath(lock);
 
+        /* make sure enter_slowpath, which is atomic, does not cross the read */
+        smp_mb__after_atomic();
+
         /*
          * check again make sure it didn't become free while
          * we weren't looking
          */
-        if (ACCESS_ONCE(lock->tickets.head) == want) {
+        head = READ_ONCE(lock->tickets.head);
+        if (__tickets_equal(head, want)) {
                 add_stats(TAKEN_SLOW_PICKUP, 1);
                 goto out;
         }
@@ -204,8 +209,8 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
         const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
         /* Make sure we read lock before want */
-        if (ACCESS_ONCE(w->lock) == lock &&
-            ACCESS_ONCE(w->want) == next) {
+        if (READ_ONCE(w->lock) == lock &&
+            READ_ONCE(w->want) == next) {
                 add_stats(RELEASED_SLOW_KICKED, 1);
                 xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
                 break;
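The crux of the hunk above is twofold: setting the slowpath flag (an atomic RMW) must be ordered before the re-read of the ticket head, and the comparison must ignore the flag bit, since the head that comes back may already carry it. A minimal user-space sketch of that pattern using C11 atomics follows; the lock layout, the flag value, and the helper names here are simplified assumptions for illustration, not the kernel's definitions:

#include <stdatomic.h>
#include <stdint.h>

/* Simplified stand-ins; TICKET_SLOWPATH_FLAG's value and this lock
 * layout are illustrative, not the kernel's. */
#define TICKET_SLOWPATH_FLAG    ((uint8_t)0x01)

struct ticket_lock {
        _Atomic uint8_t head;   /* holder's ticket; may carry the flag */
        _Atomic uint8_t tail;   /* next ticket to hand out */
};

/* Mirrors __tickets_equal(): compare with the slowpath flag masked
 * out, so a head that has the flag set still matches 'want'. */
static inline int tickets_equal(uint8_t one, uint8_t two)
{
        return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
}

/* The re-check pattern from the hunk above: mark the lock contended,
 * then look at the head once more before blocking. */
int recheck_after_enter_slowpath(struct ticket_lock *lock, uint8_t want)
{
        /* Stands in for __ticket_enter_slowpath(); a seq_cst RMW also
         * provides the full barrier that smp_mb__after_atomic() adds
         * in the patch, so the load below cannot be satisfied before
         * the flag becomes visible to the unlocker. */
        atomic_fetch_or(&lock->head, TICKET_SLOWPATH_FLAG);

        uint8_t head = atomic_load_explicit(&lock->head,
                                            memory_order_relaxed);

        /* Lock became free while we were setting up: take it now
         * instead of blocking (the TAKEN_SLOW_PICKUP case). */
        return tickets_equal(head, want);
}

Without that barrier the two sides can miss each other: the waiter re-reads a stale head and blocks, while the unlocker sees no flag and never sends the kick.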
Acked-by: David Vrabel <david.vrabel@xxxxxxxxxx>
Some of the ACCESS_ONCE to READ_ONCE changes are cosmetic, though, and
are perhaps best left out of a patch destined for stable.
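(For context on why those changes are cosmetic: for scalar fields both macros boil down to a single volatile load per use; READ_ONCE was introduced mainly because, as I understand it, ACCESS_ONCE's volatile cast can silently lose the volatile qualifier on non-scalar types with some GCC versions. A rough user-space approximation of the scalar case, simplified from the kernel's actual definitions:

/* Simplified sketches; the kernel's READ_ONCE() additionally handles
 * aggregate types with a byte-wise copy, omitted here. */
#define ACCESS_ONCE(x)  (*(volatile __typeof__(x) *)&(x))
#define READ_ONCE(x)    ACCESS_ONCE(x)  /* equivalent for scalars */

static int stop;

int spin_until_stop(void)
{
        int iterations = 0;

        /* Each pass forces a fresh volatile load of 'stop'; without
         * the macro the compiler may hoist the load out of the loop
         * and spin forever on a cached value. */
        while (!READ_ONCE(stop))
                iterations++;
        return iterations;
}

For a scalar like 'stop' the generated code is the same either way, which is why the conversion carries no functional weight for stable.)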
Thanks.
Yes, will send out a separate patch for -stable without the READ_ONCE
changes once this patch goes in.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel