
Re: [Xen-devel] [PATCH v6 02/23] xen: move NUMA_NO_NODE to public memory.h as XEN_NUMA_NO_NODE



On Tue, 2015-03-03 at 08:55 +0000, Jan Beulich wrote:
> >>> On 03.03.15 at 04:42, <raistlin.df@xxxxxxxxx> wrote:

> > Indeed. It tells Xen: <<hey Xen, toolstack here: we either don't care or
> > could not come up with any sane vnode-to-pnode mapping, please figure
> > that out yourself>>.
> > 
> > That makes the code, IMO, simpler at every level. In fact, at the Xen
> > level there is already a default way to deal with the situation (the
> > striping). At the toolstack level, we only have to care about trying to
> > come up with some super-good (for performance) configuration, and we
> > can just give up if anything like what David and Andrew described occurs.
> 
> See my earlier reply - the tool stack at least giving hints to the
> hypervisor in such a case would likely still be better (for the final
> result) than leaving it entirely up to the hypervisor: "No node"
> really means allocate from anywhere, whereas some specific
> node passed in still allows the hypervisor to find second best fits
> when having to fall back.
> 
Yes, at the cost of more complex algorithms, both in the hypervisor (as
you say in your other email) and in the toolstack. The hypervisor side
may not be an issue; the toolstack side, I'm not so sure...
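
To make that concrete, here is a toy sketch (plain C, *not* Xen code: the
per-node free-page table, node_distance() and try_alloc_on_node() below are
all made up for illustration) of the difference a node hint makes, compared
to NO_NODE, once the preferred node turns out to be exhausted:

/* Toy model, not Xen code: the node table, distance metric and allocator
 * below are made-up stand-ins, just to illustrate the point. */
#include <stdio.h>

#define NR_NODES  4
#define NO_NODE   (~0u)                 /* "no preference" sentinel */

/* Pretend free-page counts per node; nodes 1 and 2 are already exhausted. */
static unsigned int free_pages[NR_NODES] = { 8, 0, 0, 8 };

/* Toy distance metric: nodes with closer IDs are "closer" physically. */
static unsigned int node_distance(unsigned int a, unsigned int b)
{
    return a > b ? a - b : b - a;
}

/* Try to take 'pages' pages from 'node'; -1 means the node is full. */
static int try_alloc_on_node(unsigned int node, unsigned int pages)
{
    if ( free_pages[node] < pages )
        return -1;
    free_pages[node] -= pages;
    return (int)node;
}

/* Returns the node the memory came from, or -1 on failure. */
static int numa_alloc(unsigned int preferred, unsigned int pages)
{
    unsigned int node, dist;

    if ( preferred == NO_NODE )
    {
        /* No hint at all: take memory from wherever it happens to be. */
        for ( node = 0; node < NR_NODES; node++ )
            if ( try_alloc_on_node(node, pages) >= 0 )
                return (int)node;
        return -1;
    }

    /* With a hint, a failed first attempt can fall back to the node
     * closest to the requested one -- the "second best fit". */
    for ( dist = 0; dist < NR_NODES; dist++ )
        for ( node = 0; node < NR_NODES; node++ )
            if ( node_distance(preferred, node) == dist &&
                 try_alloc_on_node(node, pages) >= 0 )
                return (int)node;
    return -1;
}

int main(void)
{
    printf("hint=2  -> got memory from node %d\n", numa_alloc(2, 4));
    printf("NO_NODE -> got memory from node %d\n", numa_alloc(NO_NODE, 4));
    return 0;
}

With the hint, the failed allocation on node 2 falls back to node 3, the
closest node with free memory; with NO_NODE the allocator just takes the
first node that has anything free (node 0), however far away it is.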

I already have a draft implementation of that, which I'll rebase and
submit on top of Wei's series, as soon as that series is in.

I think I agree with Wei that it's probably better to drop the argument
for now... We can see later whether we really need a way to pass NO_NODE
to the hypervisor, and add it at that point if so.

Regards,
Dario

