
Re: [Xen-devel] [PATCH RFC 0/8] x86/hvm, libxl: HVM SMT topology support




On 03/02/2016 08:03 PM, Andrew Cooper wrote:
> On 02/03/16 19:18, Joao Martins wrote:
>>
>> On 02/25/2016 05:21 PM, Andrew Cooper wrote:
>>> On 22/02/16 21:02, Joao Martins wrote:
>>>> Hey!
>>>>
>>>> This series is a follow-up on the thread about the performance
>>>> of hard-pinned HVM guests. Here we propose allowing libxl to
>>>> change what the CPU topology looks like for the HVM guest, which can
>>>> favor certain workloads, as depicted by Elena in this thread [0].
>>>> It shows around a 22-23% gain on I/O-bound workloads when the guest
>>>> vCPUs are hard-pinned to pCPUs with a matching core+thread.
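
For reference (not part of the series): this kind of 1:1 pinning can be set
up with "xl vcpu-pin". The pCPU numbers below are purely illustrative and
depend on how the host enumerates hyperthread siblings.

    # Hypothetical example: pin vCPUs 0/1 of domain "guest" onto pCPUs 0/1,
    # assuming pCPUs 0 and 1 are sibling threads of the same core.
    xl vcpu-pin guest 0 0
    xl vcpu-pin guest 1 1
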
>>>>
>>>> This series is divided as follows:
>>>> * Patch 1     : Sets the initial apicid to be the vcpuid, as opposed
>>>>                 to vcpuid * 2 for each core (sketched below);
>>>> * Patch 2     : Whitespace cleanup;
>>>> * Patch 3     : Adds new leaves to describe Intel/AMD cache
>>>>                 topology, though it's only internal to libxl;
>>>> * Patch 4     : Internal call to set per-package CPUID values;
>>>> * Patch 5 - 8 : Interfaces for xl and libxl for setting the topology.
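
As an illustration only (this is not code from the series), a minimal sketch
of the two apicid assignment schemes patch 1 is about, assuming the usual
layout where the low APIC ID bit selects the thread within a core:

    /* Illustrative sketch, not the actual Xen/libxl code. */

    /* Current scheme: each core gets an even APIC ID, leaving bit 0
     * unused as a thread bit, i.e. no SMT siblings are exposed. */
    static unsigned int apicid_no_smt(unsigned int vcpuid)
    {
        return vcpuid * 2;
    }

    /* Patch 1 scheme: APIC ID == vcpuid, so that with 2 threads per core
     * bit 0 becomes the thread ID and consecutive vCPUs appear as
     * siblings of the same core. */
    static unsigned int apicid_smt(unsigned int vcpuid)
    {
        return vcpuid;
    }
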
>>>>
>>>> I couldn't quite figure out which user interface was better, so I
>>>> included both: our "smt" option, and a full description of the
>>>> topology, i.e. "sockets", "cores" and "threads" options, same as
>>>> QEMU's "-smp" option. Note that the latter could also be used by
>>>> libvirt, since the topology is described in its XML configs.
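
To make the two proposed interfaces concrete, a hypothetical guest config
fragment; the "smt", "sockets", "cores" and "threads" option names are the
ones proposed in this series and may change:

    # Variant A: just ask for SMT and let libxl derive the rest.
    vcpus = 8
    smt = 1

    # Variant B: spell out the full topology, QEMU "-smp" style.
    vcpus = 8
    sockets = 1
    cores = 4
    threads = 2
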
>>>>
>>>> It's also an RFC as AMD support isn't implemented yet.
>>>>
>>>> Any comments are appreciated!
>>> Hey.  Sorry I am late getting to this - I am currently swamped.  Some
>>> general observations.
>> Hey Andrew, Thanks for the pointers!
>>
>>> The cpuid policy code in Xen was never re-thought after multi-vcpu
>>> guests were introduced, which means it has no understanding of
>>> per-package, per-core and per-thread values.
>>>
>>> As part of my further cpuid work, I will need to fix this.  I was
>>> planning to fix it by requiring full cpu topology information to be
>>> passed as part of the domaincreate or max_vcpus hypercall (not yet
>>> chosen which).  This would include cores-per-package, threads-per-core,
>>> etc., and allow Xen to correctly fill in the per-core cpuid values in
>>> leaves 4, 0xB and 0x80000008.
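
To make "fill in the per-core cpuid values" concrete, here is a minimal,
hypothetical sketch of how leaf 0xB (extended topology) could be derived
from a threads-per-core / cores-per-package description; it is not the code
Andrew is referring to, just the SDM-defined encoding:

    #include <stdint.h>

    static unsigned int order(unsigned int n)        /* ceil(log2(n)) */
    {
        unsigned int o = 0;

        while ( (1u << o) < n )
            o++;
        return o;
    }

    static void fill_leaf_0xb(unsigned int subleaf, unsigned int apic_id,
                              unsigned int threads_per_core,
                              unsigned int cores_per_package,
                              uint32_t *eax, uint32_t *ebx,
                              uint32_t *ecx, uint32_t *edx)
    {
        unsigned int smt_bits  = order(threads_per_core);
        unsigned int core_bits = order(cores_per_package);

        switch ( subleaf )
        {
        case 0:                              /* SMT level */
            *eax = smt_bits;                 /* shift x2APIC ID -> core ID */
            *ebx = threads_per_core;         /* logical procs at this level */
            *ecx = subleaf | (1u << 8);      /* level type 1 == SMT */
            break;
        case 1:                              /* Core level */
            *eax = smt_bits + core_bits;     /* shift -> package ID */
            *ebx = threads_per_core * cores_per_package;
            *ecx = subleaf | (2u << 8);      /* level type 2 == Core */
            break;
        default:                             /* Invalid level */
            *eax = *ebx = 0;
            *ecx = subleaf;                  /* level type 0 == invalid */
            break;
        }
        *edx = apic_id;                      /* x2APIC ID of this vCPU */
    }
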
>> FWIW, CPU topology on domaincreate sounds nice. Or would the max_vcpus
>> hypercall serve other purposes too (CPU hotplug, migration)?
> 
> With cpu hotplug, a guest is still limited to max_vcpus, and this
> hypercall is the second action during domain creation.
OK

> 
> With migration, an empty domain must already be created for the contents
> of the stream to be inserted into.  At a minimum, this is createdomain
> and max_vcpus, usually with a max_mem to avoid it getting arbitrarily large.
> 
> One (mis)feature I want to fix is that, currently, the cpuid policy is
> regenerated by the toolstack on the destination of the migration, after
> the cpu state has been reloaded in Xen.  This causes a chicken-and-egg
> problem when checking the validity of guest state, such as %cr4,
> against the guest cpuid policy.
> 
> I wish to fix this by putting the domain cpuid policy at the head of the
> migration stream, which allows the receiving side to first verify that
> the domain's cpuid policy is compatible with the host, and then verify
> all further migration state against the policy.
> 
> Even with this, there will be a chicken and egg situation when it comes
> to specifying topology.  The best that we can do is let the toolstack
> recreate it from scratch (from what is hopefully the same domain
> configuration at a higher level), then verify consistency when the
> policy is loaded.
/nods Thanks for educating on this.

> 
>>
>>> In particular, I am concerned about giving the toolstack the ability to
>>> blindly control the APIC IDs.  Their layout is very closely linked to
>>> topology, and in particular to the HTT flag.
>>>
>>> Overall, I want to avoid any possibility of generating APIC layouts
>>> (including the emulated IOAPIC with HVM guests) which don't conform to
>>> the appropriate AMD/Intel manuals.
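
For context on the HTT linkage (again just an illustration, not Xen code):
the HTT flag in leaf 1 EDX[28] and the logical processor count in leaf 1
EBX[23:16] have to stay consistent with how many APIC IDs each package
reserves, which is why blindly rewriting APIC IDs can produce a layout the
manuals don't allow.

    #include <stdint.h>

    /* Illustrative only: keep CPUID leaf 1 consistent with the number of
     * logical processors (threads) per package. */
    static void fill_leaf1_topology(unsigned int threads_per_package,
                                    uint32_t *ebx, uint32_t *edx)
    {
        if ( threads_per_package > 1 )
        {
            *edx |= 1u << 28;                           /* HTT */
            *ebx &= ~(0xffu << 16);
            *ebx |= (threads_per_package & 0xff) << 16; /* logical CPUs/pkg */
        }
        else
            *edx &= ~(1u << 28);             /* single thread per package */
    }
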
>> I see, so overall having Xen control the topology would be a better
>> approach than "mangling" the APIC IDs in the cpuid policy as I am
>> proposing. One good thing about Xen handling the topology bits is that,
>> on Intel CPUs with CPUID faulting support, PV guests could also see the
>> topology info. And given that word 10 of hw_caps won't be exposed (as
>> per your CPUID series), handling the PV case in the cpuid policy
>> wouldn't be as clean.
> 
> Which word do you mean here?  Even before my series, Xen only had 9
> words in hw_cap.
Hm, I used the wrong nomenclature here: what I meant was the 10th feature word
of x86_boot_capability (since sysctl/libxl are capped to 8 words only), which
in the header files is word 9 in your series (previously moved from word 3).
It's the one meant for "Other features, Linux-defined mapping", where
X86_FEATURE_CPUID_FAULTING is defined.
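
For readers unfamiliar with the feature-word layout being discussed:
capability bits are stored as an array of 32-bit words, and a synthetic
feature such as CPUID faulting lives in one of the "Linux-defined" words.
A generic sketch with placeholder word/bit numbers (not Xen's actual
definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define NCAPINTS 10                 /* number of 32-bit feature words */
    #define FEATURE(word, bit) ((word) * 32 + (bit))

    /* Hypothetical synthetic feature living in word 9. */
    #define X86_FEATURE_EXAMPLE_FAULTING  FEATURE(9, 6)

    static bool cpu_has(const uint32_t caps[NCAPINTS], unsigned int feature)
    {
        return caps[feature / 32] & (1u << (feature % 32));
    }
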

Joao
