
Re: [Xen-devel] [PATCH v3 20/28] x86/domctl: Update PV domain cpumasks when setting cpuid policy



On 21/03/16 17:06, Jan Beulich wrote:
>>>> On 15.03.16 at 16:35, <andrew.cooper3@xxxxxxxxxx> wrote:
>> @@ -87,6 +88,129 @@ static void update_domain_cpuid_info(struct domain *d,
>>          d->arch.x86_model = (ctl->eax >> 4) & 0xf;
>>          if ( d->arch.x86 >= 0x6 )
>>              d->arch.x86_model |= (ctl->eax >> 12) & 0xf0;
>> +
>> +        if ( is_pv_domain(d) && ((levelling_caps & LCAP_1cd) == LCAP_1cd) )
>> +        {
>> +            uint64_t mask = cpuidmask_defaults._1cd;
>> +            uint32_t ecx = ctl->ecx & pv_featureset[FEATURESET_1c];
>> +            uint32_t edx = ctl->edx & pv_featureset[FEATURESET_1d];
>> +
>> +            /*
>> +             * Must expose hosts HTT value so a guest using native CPU can
> DYM "native CPUID" here?

Yes.

>
>> +             * correctly interpret other leaves which cannot be masked.
>> +             */
>> +            edx &= ~cpufeat_mask(X86_FEATURE_HTT);
>> +            if ( cpu_has_htt )
>> +                edx |= cpufeat_mask(X86_FEATURE_HTT);
>> +
>> +            switch ( boot_cpu_data.x86_vendor )
>> +            {
>> +            case X86_VENDOR_INTEL:
>> +                mask &= ((uint64_t)edx << 32) | ecx;
>> +                break;
>> +
>> +            case X86_VENDOR_AMD:
>> +                mask &= ((uint64_t)ecx << 32) | edx;
>> +
>> +                /* Fast-forward bits - Must be set. */
>> +                if (ecx & cpufeat_mask(X86_FEATURE_XSAVE))
> Missing blanks inside parens. Also I think the comment should be
> expanded to include the fact that the need to do this here but
> not in the Intel case was empirically(?) determined

It was empirically determined that the AMD MSRs do have to behave like this.

>  - just in case
> something isn't quite right with this on some future hardware,
> and readers (possibly including ourselves) wonder.

The Intel and AMD masking MSRs are fundamentally different, both in how
they are documented and in how they function.

Intel MSRs are documented strictly as an & mask, and experimentally are
applied before OSXSAVE/APIC are fast-forwarded from hardware.

AMD MSRs are documented strictly as an override, with a caution to avoid
setting bits which aren't actually supported in the hardware, and
experimentally need the fast-forward bits set if fast-forwarding is to
occur.
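
In rough C terms (a sketch of the semantics as described above, with
illustrative variable names, not code from the patch):

    /* Intel: a plain AND mask, applied before OSXSAVE/APIC are
       fast-forwarded from hardware, so nothing extra is needed. */
    guest_view = host_cpuid & intel_msr;

    /* AMD: an override of the guest-visible value.  Hardware
       fast-forwarding only occurs if the relevant bits are already
       set in the MSR, so they must be forced on. */
    amd_msr |= fast_forward_bits;   /* e.g. OSXSAVE, APIC */
    guest_view = amd_msr;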

I suppose I could split this logic up and move it into cpu/{amd,intel}.c
as appropriate, calling a vendor-specific function to translate a guest
CPUID value into a mask value.  However, a lot of common logic would end
up needing to be duplicated.
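
For concreteness, a rough sketch of what such a split might look like
(hypothetical function names, invented purely for illustration):

    /* cpu/intel.c: Intel masks are plain AND masks, layout edx:ecx. */
    static uint64_t intel_cpumask_1cd(uint32_t ecx, uint32_t edx)
    {
        return cpuidmask_defaults._1cd & (((uint64_t)edx << 32) | ecx);
    }

    /* cpu/amd.c: AMD masks are overrides, layout ecx:edx, and need
     * the fast-forward bits set for fast-forwarding to occur. */
    static uint64_t amd_cpumask_1cd(uint32_t ecx, uint32_t edx)
    {
        uint64_t mask =
            cpuidmask_defaults._1cd & (((uint64_t)ecx << 32) | edx);

        if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
            mask |= (uint64_t)cpufeat_mask(X86_FEATURE_OSXSAVE) << 32;

        return mask;
    }

The HTT/CMP_LEGACY adjustments and the featureset masking would still
have to happen in the common caller, which is where the duplication
creeps in.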

>
>> +            case X86_VENDOR_AMD:
>> +                /*
>> +                 * Must expose hosts CMP_LEGACY value so a guest using native
>> +                 * CPU can correctly interpret other leaves which cannot be
> CPUID again?

Yes

>
>> +                 * masked.
>> +                 */
>> +                ecx &= ~cpufeat_mask(X86_FEATURE_CMP_LEGACY);
>> +                if ( cpu_has_cmp_legacy )
>> +                    ecx |= cpufeat_mask(X86_FEATURE_CMP_LEGACY);
>> +
>> +                mask &= ((uint64_t)ecx << 32) | edx;
>> +
>> +                /* Fast-forward bits - Must be set. */
>> +                ecx = 0;
>> +                edx = cpufeat_mask(X86_FEATURE_APIC);
>> +
>> +                mask |= ((uint64_t)ecx << 32) | edx;
> This certainly looks strange (as in more complicated than it needs to
> be) - why not just mask |= cpufeat_mask(X86_FEATURE_APIC)?

Copy and paste from the leaf 1 case.  I specifically wanted to avoid
manipulating the mask value directly, as it is already confusing enough
which way around ecx and edx go.
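
i.e. the two forms are equivalent, but the second manipulates the mask
value directly:

    /* As written, mirroring the leaf 1 composition: */
    ecx = 0;
    edx = cpufeat_mask(X86_FEATURE_APIC);
    mask |= ((uint64_t)ecx << 32) | edx;

    /* The suggested simplification: */
    mask |= cpufeat_mask(X86_FEATURE_APIC);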

~Andrew
