Re: [Xen-devel] RFC: Still TODO for 4.2? xl domain numa memory allocation vs xm/xend
On Fri, 2012-01-20 at 10:55 +0000, Stefano Stabellini wrote:
> On Fri, 20 Jan 2012, Ian Campbell wrote:
> > On Thu, 2012-01-19 at 21:14 +0000, Pasi Kärkkäinen wrote:
> > > On Wed, Jan 04, 2012 at 04:29:22PM +0000, Ian Campbell wrote:
> > > >
> > > > Has anybody got anything else? I'm sure I've missed stuff. Are there any
> > > > must haves e.g. in the paging/sharing spaces?
> > > >
> > >
> > > Something that I just remembered:
> > > http://wiki.xen.org/xenwiki/Xen4.1
> > >
> > > "NUMA-aware memory allocation for VMs. xl in Xen 4.1 will allocate an
> > > equal amount of memory from every NUMA node for the VM. xm/xend
> > > allocates all the memory from the same NUMA node."
> >
> > I'm not that familiar with the NUMA support, but my understanding was
> > that memory is allocated by libxc/the hypervisor rather than by the
> > toolstack, and that the default is to allocate from the same NUMA nodes
> > that the domain's processors are pinned to, i.e. if you pin the processors
> > appropriately the Right Thing just happens. Do you believe this is not
> > the case and/or not working right with xl?
>
> It seems that xend retrieves NUMA info about the platform (see
> pyxc_numainfo) and then uses that info to pin vcpus to pcpus (see
> _setCPUAffinity).
> Still, it seems to me more of a hack than the right way to solve the
> problem.

Right, so in the absence of any explicit configuration it basically picks a
NUMA node (via some heuristic) and automatically puts the guest into it.

It seems to me that xl's behaviour isn't wrong as such, it's just different.
I think the important thing is that xl should honour the user's explicit
requests to use a particular node, either via vcpu pinning or cpupools etc.

Ian.
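For illustration, the kind of explicit request discussed above might look like
the following xl domain configuration fragment. This is a minimal sketch
assuming a machine whose NUMA node 0 covers pcpus 0-3; the cpu range and pool
name are made up for the example, and the exact syntax should be checked
against the xl.cfg and xl cpupool documentation for the Xen version in use:

    # Pin the guest's vcpus to the pcpus of (assumed) NUMA node 0, so that
    # memory is allocated near those cpus by the usual allocation behaviour.
    vcpus = 2
    cpus  = "0-3"

Alternatively, the guest can be placed in a per-node cpupool (assuming such a
pool, here called "Pool-node0", has already been created, e.g. via
"xl cpupool-numa-split"):

    # Restrict the guest to a cpupool covering a single NUMA node.
    pool = "Pool-node0"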