
Re: [Xen-devel] [PATCH] RFC: Linux: disable APERF/MPERF feature in PV kernels



>>> On 22.05.12 at 19:08, Malcolm Crossley <malcolm.crossley@xxxxxxxxxx> wrote:
> On 22/05/12 17:52, Jeremy Fitzhardinge wrote:
>> On 05/22/2012 09:07 AM, Andre Przywara wrote:
>>> Hi,
>>>
>>> while testing some APERF/MPERF semantics I discovered that this
>>> feature is enabled in Xen Dom0, but is not reliable.
>>> The Linux kernel's scheduler uses this feature if it sees the CPUID
>>> bit, leading to costly RDMSR traps (several hundred thousand during a
>>> kernel compile) and bogus values due to VCPU migration during the measurement.
>>> The attached patch explicitly disables this CPU capability inside the
>>> Linux kernel, I couldn't measure any APERF/MPERF reads anymore with
>>> the patch applied.
>>> I am not sure if the PVOPS code is the right place to fix this; we
>>> could just as well do it in the HV's xen/arch/x86/traps.c:pv_cpuid().
>>> Also, when the Dom0 VCPUs are pinned we could allow this, but I am
>>> not sure it's worth doing.
>> Seems reasonable to me.  Do all those RDMSR traps have a measurable
>> performance effect?
>>
>> Also, is there a symbolic constant for that bit?
>>
>>      J
>>
> Hi,
> 
> I've attached a patch which masks the matching CPUID leaves in the Xen 
> pv_cpuid function.
> Should the logic in pv_cpuid be changed to only pass through explicitly 
> allowed CPUID leaves, rather than masking them using case statements?
> 
> Malcolm

As said in another reply, I don't think we should mask the feature
in the hypervisor (and certainly not when "cpufreq=dom0-kernel").
Furthermore, your patch does this only for Dom0.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

