
Re: [Xen-devel] [PATCH] x86: Use deep C states for off-lined CPUs



Hmm, no.

The cache still needs to be flushed, because a *deep Cx* state can be spuriously woken up.
CLFLUSH is used here because it is a light-weight flush; in fact WBINVD could also be used
if performance were not a concern.
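
As a rough sketch only (this is not the posted patch; it applies the same per-iteration
flush to the IO-port loop from the quoted patch below, using WBINVD since there is no
monitored line to CLFLUSH):

    else if ( cx->entry_method == ACPI_CSTATE_EM_SYSIO )
    {
        /* Copy into locals so the loop does not touch shared data after flushing. */
        u32 address = cx->address;
        u32 pmtmr_ioport_local = pmtmr_ioport;

        while ( 1 )
        {
            wbinvd();                  /* flush dirty lines before (re-)entering the C state */
            inb(address);              /* P_LVLx read requests C-state entry */
            inl(pmtmr_ioport_local);   /* dummy PM timer read after the P_LVLx access */
        }
    }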

For halt this is not needed, since the CPU keeps snooping the bus while it is asleep.
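
For illustration only, a simplified sketch of that fallback path (not the exact Xen code):

    default_halt:
        /* In C1/HLT the cache is still snooped, so coherency is maintained by
         * hardware and no explicit flush is needed before halting or after a wakeup. */
        for ( ; ; )
            halt();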

Thanks,
Jinsong

Ostrovsky, Boris wrote:
> The patch is adding IO-based C-states. My understanding is that the CLFLUSH
> was there to work around a MONITOR-related erratum.
> 
> Or are you referring to something else?
> 
> -boris
> 
> 
> ________________________________________
> From: Zhang, Yang Z [yang.z.zhang@xxxxxxxxx]
> Sent: Tuesday, February 28, 2012 8:37 PM
> To: Ostrovsky, Boris; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] [PATCH] x86: Use deep C states for off-lined
> CPUs 
> 
> I noticed the following comment in the mwait-based idle code:
> -------------------------------------------------------------------------
>         while ( 1 )
>         {
>             /*
>              * 1. The CLFLUSH is a workaround for erratum AAI65 for
>              * the Xeon 7400 series.
>              * 2. The WBINVD is insufficient due to the spurious-wakeup
>              * case where we return around the loop.
>              * 3. Unlike wbinvd, clflush is a light weight but not serializing
>              * instruction, hence memory fence is necessary to make sure all
>              * load/store visible before flush cache line.
>              */
>             mb();
>             clflush(mwait_ptr);
>             __monitor(mwait_ptr, 0, 0);
>             mb();
>             __mwait(cx->address, 0);
>         }
>     }
> -------------------------------------------------------------------------
> Your patch should follow it too.
> 
> best regards
> yang
> 
> 
>> -----Original Message-----
>> From: xen-devel-bounces@xxxxxxxxxxxxx
>> [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Boris Ostrovsky
>> Sent: Wednesday, February 29, 2012 6:09 AM
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: boris.ostrovsky@xxxxxxx
>> Subject: [Xen-devel] [PATCH] x86: Use deep C states for off-lined
>> CPUs 
>> 
>> # HG changeset patch
>> # User Boris Ostrovsky <boris.ostrovsky@xxxxxxx>
>> # Date 1330466573 -3600
>> # Node ID 9e5991ad9c85b5176ce269001e7957e8805dd93c
>> # Parent  a7bacdc5449a2f7bb9c35b2a1334b463fe9f29a9
>> x86: Use deep C states for off-lined CPUs
>> 
>> Currently when a core is taken off-line it is placed in C1 state
>> (unless MONITOR/MWAIT is used). This patch allows a core to go to
>> deeper C states resulting in significantly higher power savings.
>> 
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxx>
>> 
>> diff -r a7bacdc5449a -r 9e5991ad9c85 xen/arch/x86/acpi/cpu_idle.c
>> --- a/xen/arch/x86/acpi/cpu_idle.c    Mon Feb 27 17:05:18 2012 +0000
>> +++ b/xen/arch/x86/acpi/cpu_idle.c    Tue Feb 28 23:02:53 2012 +0100
>> @@ -573,10 +573,10 @@ static void acpi_dead_idle(void)
>>      if ( (cx = &power->states[power->count-1]) == NULL )
>>          goto default_halt;
>> 
>> -    mwait_ptr = (void *)&mwait_wakeup(smp_processor_id());
>> -
>>      if ( cx->entry_method == ACPI_CSTATE_EM_FFH )
>>      {
>> +        mwait_ptr = (void *)&mwait_wakeup(smp_processor_id());
>> +
>>          /*
>>           * Cache must be flushed as the last operation before sleeping.
>>           * Otherwise, CPU may still hold dirty data, breaking cache coherency,
>> @@ -601,6 +601,20 @@ static void acpi_dead_idle(void)
>>              mb();
>>              __mwait(cx->address, 0);
>>          }
>> +    }
>> +    else if ( cx->entry_method == ACPI_CSTATE_EM_SYSIO )
>> +    {
>> +        /* Avoid references to shared data after the cache flush */
>> +        u32 address = cx->address;
>> +        u32 pmtmr_ioport_local = pmtmr_ioport;
>> +
>> +        wbinvd();
>> +
>> +        while ( 1 )
>> +        {
>> +            inb(address);
>> +            inl(pmtmr_ioport_local);
>> +        }
>>      }
>> 
>>  default_halt:
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxx
>> http://lists.xen.org/xen-devel
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

