
Re: [Xen-devel] RFC: Still TODO for 4.2? xl domain numa memory allocation vs xm/xend

On Fri, 2012-01-20 at 11:44 +0000, Dario Faggioli wrote:

> And I agree again, honouring explicit user requests is a key point. I
> think the issue here is what should be done, say, by default, i.e., if
> the user doesn't say anything about CPU/memory allocation. My idea was
> to have Xen supporting a "NUMA-aware operational mode" where (and this
> will actually be the first step!) it does exactly what xend is doing
> right now --- that is, choosing a node and putting the new guest there,
> both memory and CPU-wise. However, having this logic in the hypervisor
> would allow Xen itself, for example, while investigating which node to
> use for a new guest, or during a sort of periodic load balancing or
> whatever, to change its mind and move a guest to a different node from
> where it was put in the first place, as well as a bunch of other things.
> I'm not sure the same can be done within the toolstack but I think I can
> say that if it can, it would be way more complex and probably less
> effective... Am I wrong?
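For concreteness, the placement step being described (pick one node that can hold the whole guest, rather than striping memory across nodes) could look roughly like the sketch below. This is purely illustrative pseudologic, not the actual xend algorithm or anything in the hypervisor; the function name and the "most free memory wins" tie-break are my assumptions.

```python
# Illustrative sketch only -- NOT Xen/xend code. Models the kind of
# heuristic under discussion: place the new guest entirely on one NUMA
# node if any node has room, so memory and vcpus stay local.

def pick_node(free_mem_by_node, guest_mem):
    """Return the index of a candidate node for the guest, or None if
    no single node can hold it (caller would then fall back to
    striping allocations across nodes)."""
    candidates = [(free, node)
                  for node, free in enumerate(free_mem_by_node)
                  if free >= guest_mem]
    if not candidates:
        return None
    # Assumed tie-break: prefer the node with the most free memory,
    # leaving the best-packed nodes alone (a simple worst-fit variant).
    return max(candidates)[1]
```

Doing this in the hypervisor (as proposed) rather than the toolstack is what would let the choice be revisited later, e.g. by a periodic balancer.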

This might be doable for HVM guests but for PV guests pretty much the
only way would be a kind of local migration which would need tool
support. For the PV case hybrid support would help (by introducing HAP
for PV guests). Not saying it's not worthwhile but might just be harder
than it sounds.

> Of course, even in such mode, if the user explicitly tells us what he
> wants, e.g., by means of cpupools, pinning, etc., we should still honour
> such request.
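As a concrete example of such an explicit request, a guest config can already pin vcpus or name a cpupool. The fragment below is a hypothetical xl/xm-style config (the guest name, cpu range, and pool name are made up for illustration; which pcpus belong to which node is host-specific):

```
# Hypothetical guest config fragment -- names and numbers illustrative.
name   = "numa-guest"
memory = 2048
vcpus  = 2
# Explicitly pin vcpus to pcpus 0-3 (assumed here to be node 0's pcpus):
cpus   = "0-3"
# Or, alternatively, place the guest in a pre-created per-node cpupool:
# pool = "Pool-node0"
```

The point in the thread is that whenever the user supplies something like this, any automatic NUMA placement mode should step aside and honour it.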

Do we get this right now?

> Then the question is whether or not this mode would be the default, or
> would need to be explicitly requested (boot parameter or something), but
> that would become important only when we have it up and
> running... :-)

Yeah, I think we can defer that decision ;-)



