
Re: [Xen-devel] RFC: Still TODO for 4.2? xl domain numa memory allocation vs xm/xend

On Thu, 2012-01-19 at 21:14 +0000, Pasi Kärkkäinen wrote:
> On Wed, Jan 04, 2012 at 04:29:22PM +0000, Ian Campbell wrote:
> > 
> > Has anybody got anything else? I'm sure I've missed stuff. Are there any
> > must haves e.g. in the paging/sharing spaces?
> > 
> Something that I just remembered:
> http://wiki.xen.org/xenwiki/Xen4.1
> "NUMA-aware memory allocation for VMs. xl in Xen 4.1 will allocate
> equal amount of memory from every NUMA node for the VM. xm/xend
> allocates all the memory from the same NUMA node."

I'm not that familiar with the NUMA support, but my understanding was
that memory is allocated by libxc/the-hypervisor and not by the
toolstack, and that the default is to allocate from the same NUMA
node(s) as the ones the domain's vcpus are pinned to, i.e. if you pin
the vcpus appropriately the Right Thing just happens. Do you believe
this is not the case, and/or not working correctly with xl?
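To make concrete what I mean by "pin appropriately": my (untested,
and possibly wrong, since placement behaviour is exactly what's in
question here) expectation is that a domain config along these lines
would get its memory from the node hosting the pinned pcpus. The guest
name and the assumption that pcpus 0-3 sit on node 0 are made up for
illustration:

```
# Hypothetical xl domain config fragment: pin all vcpus to pcpus 0-3,
# assumed here to live on NUMA node 0. The expectation (to be
# confirmed) is that memory then comes from node 0 as a side effect
# of the pinning.
name = "guest0"
memory = 2048
vcpus = 4
cpus = "0-3"
```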

CCing Juergen, since he added the cpupool support and in particular
the cpupool-numa-split option, so I'm hoping he knows something about
NUMA more generally.
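For reference, the cpupool route I have in mind is roughly the
following (a sketch only, assuming a two-node box; the generated pool
names and the exact override syntax should be checked against the xl
documentation):

```
# Split the host into one cpupool per NUMA node, then create the
# domain inside the pool for node 0, so that its vcpus (and, in
# principle, its memory) stay on that node.
xl cpupool-numa-split
xl cpupool-list
xl create guest0.cfg 'pool="Pool-node0"'
```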

> Is this something that should be looked at?

Probably, but is anyone doing so?

> Should the numa memory allocation be an option so it can be controlled
> per domain? 

What options did xm provide in this regard?

Does xl's cpupool support (with the cpupool-numa-split option) serve
the same purpose?
> The default libxl behaviour might cause unexpected performance issues
> on multi-socket systems? 

I'm not convinced libxl is behaving any differently from xend, but
perhaps someone can show me the error of my ways.


Xen-devel mailing list