Re: [Xen-devel] [PATCH v4 16/26] x86/cpu: Context switch cpuid masks and faulting state in context_switch()
On Wed, Mar 23, 2016 at 04:36:19PM +0000, Andrew Cooper wrote:
> A single ctxt_switch_levelling() function pointer is provided
> (defaulting to an empty nop), which is overridden in the appropriate
> $VENDOR_init_levelling().
>
> set_cpuid_faulting() is made private and included within
> intel_ctxt_switch_levelling().
>
> One functional change is that the faulting configuration is no longer special
> cased for dom0. There was never any need to, and it will cause dom0 to
There was a need: see Xen commit 1d6ffea6 ("ACPI: add _PDC input override mechanism"),
and on the Linux side see xen_check_mwait().
> observe the same information through native and enlightened cpuid.
Which will be a regression for ACPI C-states: we won't expose the deeper
ones (C6 and so on) on SandyBridge CPUs.
But looking at this document:
http://www.intel.com/content/dam/www/public/us/en/documents/application-notes/virtualization-technology-flexmigration-application-note.pdf
the CPUID-masking discussion is all about VM guests, but PV guests are not
really VMs (there is no VMCS container for them). Does that mean that if a
PV guest executes a native CPUID, the results are not affected by CPUID
masking (or faulting)? I would think not, since:
> @@ -154,6 +156,11 @@ static void intel_ctxt_switch_levelling(const struct domain *nextd)
>      struct cpuidmasks *these_masks = &this_cpu(cpuidmasks);
>      const struct cpuidmasks *masks = &cpuidmask_defaults;
>
> +    if (cpu_has_cpuid_faulting) {
> +        set_cpuid_faulting(nextd && is_pv_domain(nextd));
Which would give us a NULL for dom0, so no engaging of CPUID faulting for
PV guests.
And I suppose the CPUID masking is only for guests in a VMCS container?
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel