
Re: [Xen-devel] [PATCH v2 3/4] xen/x86: Replace remaining mandatory barriers with SMP barriers



On 17/08/17 09:37, Jan Beulich wrote:
>>>> On 16.08.17 at 13:22, <andrew.cooper3@xxxxxxxxxx> wrote:
>> There is no functional change.  Xen currently assigns smp_* meaning to
>> the non-smp_* barriers.
>>
>> All of these uses are just to deal with shared memory between multiple
>> processors, so use the smp_*() variants, which are the correct barriers
>> for the purpose.
> Taking this together with ...
>
>> --- a/xen/arch/x86/acpi/cpu_idle.c
>> +++ b/xen/arch/x86/acpi/cpu_idle.c
>> @@ -390,9 +390,9 @@ void mwait_idle_with_hints(unsigned int eax, unsigned int ecx)
>>  
>>      if ( boot_cpu_has(X86_FEATURE_CLFLUSH_MONITOR) )
>>      {
>> -        mb();
>> +        smp_mb();
>>          clflush((void *)&mwait_wakeup(cpu));
>> -        mb();
>> +        smp_mb();
>>      }
> See commit 48d32458bc ("x86, idle: add barriers to CLFLUSH
> workaround") for why these better stay the way they are.
>
>> @@ -755,10 +755,10 @@ void acpi_dead_idle(void)
>>               * instruction, hence memory fence is necessary to make sure all
>>               * load/store visible before flush cache line.
>>               */
>> -            mb();
>> +            smp_mb();
>>              clflush(mwait_ptr);
>>              __monitor(mwait_ptr, 0, 0);
>> -            mb();
>> +            smp_mb();
>>              __mwait(cx->address, 0);
> ... the comment the tail of which is in context here, I'm rather
> surprised you convert these: They're there strictly for
> correctness on a single processor (the need for prior memory
> accesses to be visible isn't limited to the CPUs in the system).
>
> In both cases, while smp_mb() and mb() are the same, I'd rather
> keep the distinction at use sites with the assumption that the
> smp_* ones would expand to just barrier() when !CONFIG_SMP (a
> configuration we currently simply don't allow). The only alternative
> I see would be to open-code the fences.

Yeah - in hindsight these should logically stay as mb() (even though, as
you say, there is no functional change).
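
(For reference, the distinction being described is roughly the following -
a sketch only, not Xen's actual barrier definitions, and the !CONFIG_SMP
branch is hypothetical since that configuration isn't currently allowed:)

    /* Mandatory barriers: always a real fence, so that accesses are
     * ordered with respect to non-CPU agents as well. */
    #define mb()      asm volatile ( "mfence" ::: "memory" )
    #define rmb()     asm volatile ( "lfence" ::: "memory" )
    #define wmb()     asm volatile ( "sfence" ::: "memory" )

    #ifdef CONFIG_SMP
    /* SMP barriers: only need to order accesses as seen by other CPUs. */
    # define smp_mb()  mb()
    # define smp_rmb() rmb()
    # define smp_wmb() wmb()
    #else
    /* UP build: a compiler barrier would suffice. */
    # define smp_mb()  barrier()
    # define smp_rmb() barrier()
    # define smp_wmb() barrier()
    #endif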

>
>> --- a/xen/arch/x86/hvm/ioreq.c
>> +++ b/xen/arch/x86/hvm/ioreq.c
>> @@ -91,7 +91,7 @@ static bool_t hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
>>      {
>>          unsigned int state = p->state;
>>  
>> -        rmb();
>> +        smp_rmb();
>>          switch ( state )
>>          {
>>          case STATE_IOREQ_NONE:
>> @@ -1327,7 +1327,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
>>      }
>>  
>>      /* Make the ioreq_t visible /before/ write_pointer. */
>> -    wmb();
>> +    smp_wmb();
>>      pg->ptrs.write_pointer += qw ? 2 : 1;
> I agree with these changes, but it needs to be clear that their
> counterparts cannot be smp_?mb().
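
(To spell out the pairing for the buffered ioreq ring: the producer makes
the record visible before advancing the pointer, and the consumer - the
device model in this case - samples the pointer before reading the records.
A simplified sketch with hypothetical names, not the actual ioreq code:)

    /* Producer side (what hvm_send_buffered_ioreq does): */
    ring->slot[wp % RING_SIZE] = rec;     /* 1. fill in the record       */
    smp_wmb();                            /* 2. record /before/ pointer  */
    ring->write_pointer = wp + 1;         /* 3. publish it               */

    /* Consumer side (device model); the counterpart barrier sits
     * between the two reads: */
    wp = ring->write_pointer;             /* 1. sample the pointer       */
    rmb();                                /* 2. pointer /before/ records */
    while ( rp != wp )                    /* 3. read published records   */
        process(&ring->slot[rp++ % RING_SIZE]);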
>
>> --- a/xen/arch/x86/time.c
>> +++ b/xen/arch/x86/time.c
>> @@ -976,10 +976,10 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
>>  
>>      /* 1. Update guest kernel version. */
>>      _u.version = u->version = version_update_begin(u->version);
>> -    wmb();
>> +    smp_wmb();
>>      /* 2. Update all other guest kernel fields. */
>>      *u = _u;
>> -    wmb();
>> +    smp_wmb();
>>      /* 3. Update guest kernel version. */
>>      u->version = version_update_end(u->version);
>>  
>> @@ -1006,10 +1006,10 @@ bool update_secondary_system_time(struct vcpu *v,
>>          update_guest_memory_policy(v, &policy);
>>          return false;
>>      }
>> -    wmb();
>> +    smp_wmb();
>>      /* 2. Update all other userspace fields. */
>>      __copy_to_guest(user_u, u, 1);
>> -    wmb();
>> +    smp_wmb();
>>      /* 3. Update userspace version. */
>>      u->version = version_update_end(u->version);
>>      __copy_field_to_guest(user_u, u, version);
> Same for these.

Why?  The guest side of this protocol is just reads.
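
(The guest-side read loop this protocol pairs with looks roughly like the
following - a simplified sketch, where u points at the guest's mapping of
the shared vcpu_time_info and `local' is just an illustrative name:)

    struct vcpu_time_info local;        /* guest-local copy             */
    uint32_t version;

    do {
        version = u->version;           /* 1. sample the version        */
        rmb();                          /* 2. version before payload    */
        local = *u;                     /* 3. copy the other fields     */
        rmb();                          /* 4. payload before recheck    */
    } while ( (version & 1) ||          /* odd => update in progress    */
              version != u->version );  /* changed => retry the copy    */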

Irrespective, how do you suggest I make things more clear?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel