
[Xen-API] VCPUs-at-startup and VCPUs-max with NUMA node affinity


  • To: "xen-api@xxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxx>
  • From: James Bulpin <James.Bulpin@xxxxxxxxxxxxx>
  • Date: Tue, 29 May 2012 13:00:08 +0100
  • Accept-language: en-US
  • Delivery-date: Tue, 29 May 2012 12:00:33 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>
  • Thread-index: Ac081xDWKDPUgv0eQOe9upOioCwsEgAu3EBQ
  • Thread-topic: VCPUs-at-startup and VCPUs-max with NUMA node affinity

I'm thinking about the interaction of xapi's vCPU management and the future Xen 
automatic NUMA placement 
(http://blog.xen.org/index.php/2012/05/16/numa-and-xen-part-ii-scheduling-and-placement/).
If a VM has no more vCPUs than a NUMA node has pCPUs, then it makes sense 
for that VM to be given node affinity. But what happens if vCPUs are 
hotplugged into the VM so that it then has more vCPUs than the node has 
pCPUs? I can see several options here:

  1. The node is over-provisioned in that the VM's vCPUs contend with each 
other for the pCPUs - not good

  2. The CPU affinity is dropped allowing vCPUs to run on any node - the memory 
is still on the original node so now we've got a poor placement for vCPUs that 
happen to end up running on other nodes. This also leads to additional 
interconnect traffic and possible cache line ping-pong.

  3. The vCPUs that cannot fit on the node are given no affinity, but those 
that do fit keep their node affinity - this leads to some vCPUs performing 
better than others due to memory (non-)locality, plus some additional 
interconnect traffic and possible cache line ping-pong (a toy sketch of this 
split follows the list).

  4. We never let this happen, because we only set node affinity when the 
node can accommodate the maximum vCPU count the VM may have during this boot 
(VCPUs-max); options 1 to 3 above use VCPUs-at-startup to decide whether to 
use node affinity.
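
To make option 3 concrete, the split I have in mind is roughly the toy 
sketch below; the function and its arguments are purely illustrative and 
not any real libxl or xapi interface:

  (* Option 3, illustratively: the node has node_pcpus pCPUs and the VM has
     grown to vcpus vCPUs. Keep the first node_pcpus vCPUs pinned to the
     node and leave the rest unpinned. Returns (vCPU index, affinity)
     pairs, where None means "no node affinity". *)
  let split_affinity ~node_id ~node_pcpus ~vcpus =
    List.init vcpus (fun i ->
        if i < node_pcpus then (i, Some node_id) else (i, None))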

I'm tempted by #4 because it avoids having to make difficult and 
workload-dependent decisions when changing vCPU counts. My guess is that 
many users will have VMs with VCPUs-at-startup==VCPUs-max, so it becomes a 
non-issue anyway. My only real concern is the case where users regularly 
run VMs with a small VCPUs-at-startup but with VCPUs-max set to the number 
of pCPUs in the box, i.e. allowing them to hotplug up to the full resources 
of the box - under #4 such VMs would never get node affinity at all.
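
For concreteness, here's a rough sketch of the decision rule I have in mind 
for #4, again just illustrative OCaml; the record types and the node list 
are made up rather than xapi's actual data model, and it only looks at pCPU 
counts, ignoring memory placement:

  (* Under #4 we only give a VM node affinity if the node could host
     VCPUs-max, so a later hotplug can never outgrow the node. Returns
     None when no node is big enough, i.e. the VM gets no node affinity. *)
  type numa_node = { node_id : int; pcpus : int }
  type vm_config = { vcpus_at_startup : int; vcpus_max : int }

  let choose_node (nodes : numa_node list) (vm : vm_config) : int option =
    nodes
    |> List.find_opt (fun n -> vm.vcpus_max <= n.pcpus)
    |> Option.map (fun n -> n.node_id)

  let () =
    (* Two 8-pCPU nodes; the VM starts small but may hotplug up to 16. *)
    let nodes = [ { node_id = 0; pcpus = 8 }; { node_id = 1; pcpus = 8 } ] in
    let vm = { vcpus_at_startup = 2; vcpus_max = 16 } in
    match choose_node nodes vm with
    | Some id -> Printf.printf "pin VM to NUMA node %d\n" id
    | None ->
        Printf.printf "no node affinity (startup %d vCPUs, max %d)\n"
          vm.vcpus_at_startup vm.vcpus_max

With that example (2 vCPUs at startup, VCPUs-max of 16 on a box of two 
8-pCPU nodes) the VM never gets node affinity, which is exactly the concern 
above.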

And a related question: when xapi/xenopsd builds a domain, does it have to 
tell Xen about VCPUs-max or just the number of vCPUs required right now?

Any thoughts?

Thanks,
James


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

