
Re: [Xen-devel] [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity



On mar, 2013-11-05 at 15:39 +0000, George Dunlap wrote:
> On 11/05/2013 03:23 PM, Jan Beulich wrote:
> >>>> On 05.11.13 at 16:11, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:
> >> Or, we could internally change the names to "cpu_hard_affinity" and
> >> "cpu_soft_affinity", since that's effectively what the scheduler will
> >> do.  It's possible someone might want to set soft affinities for some
> >> other reason besides NUMA performance.
> >
> > I like that.
> 
I like that too.
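
For concreteness, a minimal sketch of what the rename could look like
in struct vcpu (the placement and comments are illustrative only;
"cpu_affinity" is the current name of the pinning bitmap):

    struct vcpu {
        ...
        /* Bitmap of CPUs on which this VCPU may run (was cpu_affinity). */
        cpumask_var_t cpu_hard_affinity;
        /* Bitmap of CPUs on which this VCPU would prefer to run. */
        cpumask_var_t cpu_soft_affinity;
        ...
    };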

> A potential problem with that is the "auto domain numa" thing.  In this 
> patch, if the domain numa affinity is not set but vcpu numa affinity is, 
> the domain numa affinity (which will be used to allocate memory for the 
> domain) will be set based on the vcpu numa affinity.
>
That would indeed be an issue but, hopefully, even if we go for the
soft!=NUMA way, it would just mean that, at the toolstack level, two
calls instead of one are required to properly place a guest on one or
more NUMA nodes: one for the vcpus (setting the soft affinity) and one
for the memory (setting the node affinity, which would then affect
memory only).

Really, let's see whether I manage to code it up tomorrow morning, but
I don't expect any serious issues.

The only thing that puzzles me is what the 'default behaviour' should
be. I mean, if in a VM config file I find the following:

 cpus = "2,9"

And nothing else related to hard, soft or memory affinity, what do I
do? I'd say that, even if only for consistency with what we do right
now (even without this patch), we should go all the way through both
the calls above, and set both the hard affinity and the memory node
affinity. If not, people doing this in 4.4 (or 4.5, if we miss the
train) would see behaviour different from the current one, which would
be bad!
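
To make that concrete, here is roughly what I have in mind for
xl/libxl in that case (a sketch only: parse_cpu_list() and
nodes_of_cpumap() are made-up helper names, and the two libxl calls
are the new ones I propose below):

    libxl_bitmap cpumap, nodemap;
    int i;

    /* cpus = "2,9": build the cpumap from the config entry. */
    parse_cpu_list("2,9", &cpumap);              /* made-up helper */

    /* Keep today's behaviour: pin all the domain's vcpus... */
    for (i = 0; i < nr_vcpus; i++)
        libxl_set_vcpu_hardaffinity(ctx, domid, i, &cpumap);

    /* ...and have the memory follow, by turning the cpumap into
     * the set of nodes it spans and setting the node affinity. */
    nodes_of_cpumap(&cpumap, &nodemap);          /* made-up helper */
    libxl_set_mem_nodeaffinity(ctx, domid, &nodemap);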
Thoughts?

Similar question: provided I implement something like
libxl_set_vcpu_hardaffinity(), libxl_set_vcpu_softaffinity() and
libxl_set_mem_nodeaffinity(), what should a call to the existing
libxl_set_vcpuaffinity() do? As above, right now it affects both
vcpu-pinning and node-affinity, so I'd say it should call both
libxl_set_vcpu_hardaffinity() and libxl_set_mem_nodeaffinity(). Is
that ok?
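
I.e., something like the following, with the existing entry point
reduced to a thin compatibility wrapper (a sketch only: the new calls'
signatures are tentative, nodes_of_cpumap() is a made-up helper, and
error handling is kept to a minimum):

    int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid,
                               uint32_t vcpuid, libxl_bitmap *cpumap)
    {
        libxl_bitmap nodemap;
        int rc;

        /* Preserve the current semantics: update the pinning... */
        rc = libxl_set_vcpu_hardaffinity(ctx, domid, vcpuid, cpumap);
        if (rc) return rc;

        /* ...and the (memory) node affinity, as we do today. */
        nodes_of_cpumap(cpumap, &nodemap);       /* made-up helper */
        return libxl_set_mem_nodeaffinity(ctx, domid, &nodemap);
    }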

BTW, does the interface sketched above look reasonable? Anyone up for
suggesting fancier names? :-)

> That seems like a 
> useful feature (though perhaps it's starting to violate the "policy 
> should be in the tools" principle).  
>
The current situation being that d->node_affinity is set based on
vcpu-pinning, having it retain the same meaning and behaviour, just
set based on vcpu node-affinity instead, looked to me like the natural
extension of it. However...

> If we change this to just "hard 
> affinity" and "soft affinity", we'll lose the natural logical connection 
> there.
>
...I think I see what you mean, and I agree that it wasn't that much
of a violation as long as we stick to soft==NUMA (as implemented in
this series). If we go for your proposal (which, again, I like), it
would indeed become hard to decide whether node affinity should derive
from hard or from soft affinity. :-)
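
FWIW, if we did want to keep some automatic linkage, one option could
be deriving d->node_affinity from the union of the vcpus' soft
affinities, along these lines (just a sketch, assuming the renamed
cpu_soft_affinity field from above):

    /* A node belongs to the domain's node affinity iff the soft
     * affinity of at least one of its vcpus intersects the node. */
    nodes_clear(d->node_affinity);
    for_each_vcpu ( d, v )
        for_each_online_node ( node )
            if ( cpumask_intersects(v->cpu_soft_affinity,
                                    &node_to_cpumask(node)) )
                node_set(node, d->node_affinity);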

> It might have impacts on how we end up doing vNUMA as well.  So 
> I'm a bit torn ATM.
> 
> Dario, any thoughts?
> 
As said above, I like the idea of separating soft affinities and
NUMA-ness... I'll give it a try and either submit an updated series
implementing such behaviour, or follow up to this conversation with
whatever issues I find while doing so. :-)

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
