
Re: [Xen-devel] [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity



On 11/05/2013 05:24 PM, Jan Beulich wrote:
> On 05.11.13 at 17:56, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:
>> On 11/05/2013 03:39 PM, George Dunlap wrote:
>>> A potential problem with that is the "auto domain numa" thing.  In
>>> this patch, if the domain numa affinity is not set but the vcpu numa
>>> affinity is, the domain numa affinity (which will be used to allocate
>>> memory for the domain) will be set based on the vcpu numa affinity.
>>> That seems like a useful feature (though perhaps it's starting to
>>> violate the "policy should be in the tools" principle).  If we change
>>> this to just "hard affinity" and "soft affinity", we'll lose the
>>> natural logical connection there.  It might have an impact on how we
>>> end up doing vNUMA as well.  So I'm a bit torn ATM.
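
(Just to make that mechanism concrete, here's a rough, hypothetical
sketch of the derivation -- not the actual patch code, and all the names
and types below are invented for illustration: if no explicit domain
node-affinity was given, the domain's node mask is simply the union of
the per-vcpu node-affinities, and that is the mask the memory allocator
would consult.)

#include <stdbool.h>
#include <stdint.h>

#define MAX_NUMA_NODES 8
#define MAX_VCPUS      4

/* Hypothetical, simplified stand-ins for the real structures. */
struct vcpu_aff {
    uint32_t node_affinity;         /* bitmap: one bit per NUMA node */
};

struct dom_aff {
    bool auto_node_affinity;        /* true if the toolstack never set it */
    uint32_t node_affinity;         /* bitmap consulted by the allocator  */
    unsigned int nr_vcpus;
    struct vcpu_aff vcpus[MAX_VCPUS];
};

/* Recompute the domain's node affinity from its vcpus, if automatic. */
static void update_dom_node_affinity(struct dom_aff *d)
{
    uint32_t mask = 0;

    if ( !d->auto_node_affinity )
        return;                     /* an explicit setting always wins */

    for ( unsigned int i = 0; i < d->nr_vcpus; i++ )
        mask |= d->vcpus[i].node_affinity;

    /* If no vcpu expressed a preference, fall back to "all nodes". */
    d->node_affinity = mask ? mask : (uint32_t)((1u << MAX_NUMA_NODES) - 1);
}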

>> [Coming back after going through the whole series]
>>
>> This is basically the main architectural question that needs to be
>> sorted out with the series: Do we bake in that the "soft affinity" is
>> specifically for NUMA-ness, or not?

> I think that it would be beneficial if we could keep the soft
> affinity as a more abstract construct than just representing
> NUMAness, even if that means that domain node affinity
> won't be able to be fully "automated" anymore.  But I'm
> saying that without having a specific use case in mind.

People seem to have found all kinds of fun use cases for the "hard"
pinning we have; it's just a handy tool to have around.  Allowing an
arbitrary "soft" pinning just seems like a similarly handy tool.  I'm
sure between our users and our downstreams (Suse, Oracle, XenServer,
QubesOS, &c &c) someone will find a use for it. :-)
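
To illustrate what keeping "hard" and "soft" as generic, scheduler-level
concepts would look like, here's a rough sketch (again hypothetical --
not the actual scheduler code, just plain bitmaps and made-up names):
hard affinity is a strict constraint on where a vcpu may run, while soft
affinity only biases the choice when it can actually be honoured.

#include <stdint.h>

/* Return the index of the lowest set bit in the mask, or -1 if empty. */
static int first_cpu(uint64_t mask)
{
    for ( int cpu = 0; cpu < 64; cpu++ )
        if ( mask & (1ULL << cpu) )
            return cpu;
    return -1;
}

/*
 * Pick a CPU for a vcpu: never step outside the hard affinity; within
 * it, prefer CPUs that are also in the soft affinity, if any exist.
 */
static int pick_cpu(uint64_t online, uint64_t hard, uint64_t soft)
{
    uint64_t allowed   = online & hard;    /* hard affinity is mandatory   */
    uint64_t preferred = allowed & soft;   /* soft affinity is best-effort */

    if ( preferred )
        return first_cpu(preferred);
    return first_cpu(allowed);             /* may be -1: nowhere to run    */
}

Whether the soft mask then happens to correspond to the NUMA nodes the
domain's memory lives on, or to something else entirely, becomes purely
a tools-level decision.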

 -George
