
Re: [Xen-devel] [PATCH 03/10] x86/cpuid: Handle leaf 0x1 in guest_cpuid()



>>> On 20.02.17 at 12:00, <andrew.cooper3@xxxxxxxxxx> wrote:
> The ebx word is more problematic.  The low 8 bits are the brand ID and safe to
> pass straight through.  The next 8 bits are the CLFLUSH line size.  This value
> is forwarded straight from hardware, as nothing good can possibly come of
> providing an alternative value to the guest.

This risks the value guests see changing across migration.
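
For reference, the leaf 1 EBX layout under discussion (field positions
are architectural; the helper below is purely illustrative, not Xen
code):

    #include <stdint.h>

    /* CPUID leaf 1, EBX layout (architectural):
     *   bits  7:0  - brand ID
     *   bits 15:8  - CLFLUSH line size, in units of 8 bytes
     *   bits 23:16 - maximum number of addressable logical processor IDs
     *   bits 31:24 - initial (legacy) APIC ID
     */
    static inline unsigned int clflush_line_size_bytes(uint32_t ebx)
    {
        /* The field counts 8-byte chunks, so scale up to bytes. */
        return ((ebx >> 8) & 0xff) * 8;
    }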

> The final 8 bits are the initial legacy APIC ID.  For HVM guests, this was
> overridden to vcpu_id * 2.  The same logic is now applied to PV guests, so
> guests don't observe a constant number on all vcpus via their emulated or
> faulted view.

They won't be the same everywhere, but since the field is only 8 bits
wide, vcpu_id * 2 wraps modulo 256, so every 128th vCPU will share a
value. I'm therefore not sure it wouldn't be better to hand out zero
or all ones here.
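
To illustrate the collision, a standalone demonstration (not Xen
code) of the truncation:

    #include <stdint.h>
    #include <stdio.h>

    /* vcpu_id * 2 truncated to the 8-bit initial APIC ID field. */
    static uint8_t legacy_apic_id(unsigned int vcpu_id)
    {
        return (uint8_t)(vcpu_id * 2);
    }

    int main(void)
    {
        printf("vcpu 0   -> %u\n", (unsigned)legacy_apic_id(0));   /* 0 */
        printf("vcpu 128 -> %u\n", (unsigned)legacy_apic_id(128)); /* 0 again */
        return 0;
    }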

> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -176,6 +176,9 @@ static void recalculate_misc(struct cpuid_policy *p)
>      switch ( p->x86_vendor )
>      {
>      case X86_VENDOR_INTEL:
> +        p->basic.raw_fms &= 0x0fff3fff;

Wouldn't it be better to use the same mask as for AMD here? The
processor type field (bits 13:12) is effectively unused / reserved on
64-bit processors.
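
Decoding the mask against the architectural EAX layout, for reference
(the macro name is illustrative only):

    /* Leaf 1 EAX (family/model/stepping) layout:
     *   bits  3:0  - stepping
     *   bits  7:4  - model
     *   bits 11:8  - family
     *   bits 13:12 - processor type (the field in question)
     *   bits 15:14 - reserved
     *   bits 19:16 - extended model
     *   bits 27:20 - extended family
     *   bits 31:28 - reserved
     *
     * 0x0fff3fff clears only the reserved bits 31:28 and 15:14, and so
     * still passes the processor type field (bits 13:12) through.
     */
    #define INTEL_FMS_MASK 0x0fff3fffu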

> @@ -967,6 +920,12 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
>          break;
>  
>      case 0x80000001:
> +        /* SYSCALL is hidden outside of long mode on Intel. */
> +        if ( p->x86_vendor == X86_VENDOR_INTEL &&
> +             has_hvm_container_domain(d) && !hvm_long_mode_enabled(v) )
> +            res->d &= ~cpufeat_mask(X86_FEATURE_SYSCALL);
> +
> +    common_dynamic_adjustments: /* Adjustments common with leaf 1. */

Perhaps have "leaf1" in the label name then?
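
The underlying quirk: on Intel hardware, CPUID reports the SYSCALL
bit (leaf 0x80000001, EDX bit 11) only when executed in 64-bit mode,
so a guest outside long mode must see it masked to match bare metal.
A self-contained sketch of that masking (helper and parameter names
are illustrative, not Xen's):

    #include <stdbool.h>
    #include <stdint.h>

    #define SYSCALL_BIT 11 /* leaf 0x80000001, EDX */

    /* Hide SYSCALL from the leaf 0x80000001 EDX value handed to a
     * guest that is not in long mode on an Intel host. */
    static uint32_t adjust_e1_edx(uint32_t edx, bool vendor_is_intel,
                                  bool guest_in_long_mode)
    {
        if ( vendor_is_intel && !guest_in_long_mode )
            edx &= ~(1u << SYSCALL_BIT);
        return edx;
    }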

Jan

