
Re: [Xen-devel] [PATCH v2 21/30] x86/pv: Provide custom cpumasks for PV domains



On 17/02/16 11:14, Jan Beulich wrote:
>>>> On 17.02.16 at 12:03, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 17/02/16 08:13, Jan Beulich wrote:
>>>>>> On 05.02.16 at 14:42, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> --- a/xen/arch/x86/cpu/amd.c
>>>> +++ b/xen/arch/x86/cpu/amd.c
>>>> @@ -208,7 +208,9 @@ static void __init noinline probe_masking_msrs(void)
>>>>  static void amd_ctxt_switch_levelling(const struct domain *nextd)
>>>>  {
>>>>    struct cpuidmasks *these_masks = &this_cpu(cpuidmasks);
>>>> -  const struct cpuidmasks *masks = &cpuidmask_defaults;
>>>> +  const struct cpuidmasks *masks =
>>>> +            (nextd && is_pv_domain(nextd) && nextd->arch.pv_domain.cpuidmasks)
>>>> +            ? nextd->arch.pv_domain.cpuidmasks : &cpuidmask_defaults;
>>> Mixing tabs and spaces for indentation.
>>>
>>>> --- a/xen/arch/x86/domain.c
>>>> +++ b/xen/arch/x86/domain.c
>>>> @@ -574,6 +574,11 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
>>>>              goto fail;
>>>>          clear_page(d->arch.pv_domain.gdt_ldt_l1tab);
>>>>  
>>>> +        d->arch.pv_domain.cpuidmasks = xmalloc(struct cpuidmasks);
>>>> +        if ( !d->arch.pv_domain.cpuidmasks )
>>>> +            goto fail;
>>>> +        *d->arch.pv_domain.cpuidmasks = cpuidmask_defaults;
>>> Along the lines of not masking features for the hypervisor's own use
>>> (see the respective comment on the earlier patch) I think this patch,
>>> here or in domain_build.c, should except Dom0 from having the
>>> default masking applied. This shouldn't, however, extend to CPUID
>>> faulting. (Perhaps this rather belongs here so that the non-Dom0
>>> hardware domain case can also be taken care of.)
>> Very specifically not.  It is wrong to special-case Dom0 and the
>> hardware domain, as their cpuid values should be relevant to their VM,
>> not the host.
> I can't see how this second half of the sentence is a reason for
> not special casing Dom0.

Dom0 is just a VM which happens to have all the hardware by default.

It has the same requirements as all other VMs when it comes to cpuid;
most notably that it shouldn't see features which it can't use.  The
problem becomes far more obvious with an HVMLite dom0, running an
almost-native kernel.
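
To make that concrete, here is a throwaway standalone model (not the
patch itself; the toy types and field names only mimic the hunks quoted
above): every PV domain, dom0 included, has its masks seeded from the
boot-time defaults, and the context-switch path prefers the per-domain
masks whenever they exist.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct cpuidmasks { unsigned long long e1cd, Da1; }; /* toy subset */

static const struct cpuidmasks cpuidmask_defaults = { ~0ULL, ~0ULL };

struct domain {
    bool is_pv;
    struct cpuidmasks *cpuidmasks; /* NULL for HVM / idle */
};

/* Domain creation: every PV domain, dom0 included, starts from the defaults. */
static int domain_init_masks(struct domain *d)
{
    if ( !d->is_pv )
        return 0;
    d->cpuidmasks = malloc(sizeof(*d->cpuidmasks));
    if ( !d->cpuidmasks )
        return -1;
    *d->cpuidmasks = cpuidmask_defaults;
    return 0;
}

/* Context switch: prefer the per-domain masks when they exist. */
static const struct cpuidmasks *pick_masks(const struct domain *nextd)
{
    return (nextd && nextd->is_pv && nextd->cpuidmasks)
           ? nextd->cpuidmasks : &cpuidmask_defaults;
}

int main(void)
{
    struct domain dom0 = { .is_pv = true }, domU = { .is_pv = true };

    if ( domain_init_masks(&dom0) || domain_init_masks(&domU) )
        return 1;

    /* Dom0 is treated like any other PV guest: same seeding, same selection. */
    printf("dom0 uses %s masks\n",
           pick_masks(&dom0) == &cpuidmask_defaults ? "default" : "private");
    printf("domU uses %s masks\n",
           pick_masks(&domU) == &cpuidmask_defaults ? "default" : "private");

    free(dom0.cpuidmasks);
    free(domU.cpuidmasks);
    return 0;
}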

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

