
Re: [Xen-devel] [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity



On mer, 2013-11-06 at 16:12 +0000, George Dunlap wrote:
> We have two "affinities" at the moment:
> * per-vcpu cpu affinity.  This is a scheduling construct, and is 
> used to restrict vcpus to run on specific pcpus.  This is what we would 
> call hard affinity -- "hard" because vcpus are *not allowed* to run on 
> cpus not in this mask.
> * Domain NUMA affinity.  As of 4.3 this has two functions: both a memory 
> allocation function, and a scheduling function.  For the scheduling 
> function, it acts as a "soft affinity", but it's domain-wide, rather 
> than being per-vcpu.  It's "soft" because the scheduler will *try* to 
> run the domain's vcpus on pcpus in that mask but, if it cannot, it 
> *will allow* them to run on pcpus not in that mask.
> 
> At the moment you can set this manually, or if it's not set, then Xen 
> will set it automatically based on the vcpu hard affinities (and I think 
> the cpupool that it's in).
> 
> What we're proposing is to remove the domain-wide soft affinity and 
> replace it with a per-vcpu soft affinity.  That way each vcpu has some 
> pcpus that it prefers (in the soft affinity *and* the hard affinity), 
> some that it doesn't prefer but is OK with (hard affinity but not soft 
> affinity), and some that it cannot run on at all (not in the hard affinity).
> 
Great explanation... A bit longer but *much* better than mine. :-)

> The question Dario has is this: given that we now have per-vcpu hard and 
> soft scheduling affinity, how should we automatically construct the 
> per-domain memory allocation affinity, if at all?  Should we construct 
> it from the "hard" scheduling affinities, or from the "soft" scheduling 
> affinities?
> 
Exactly.

> I said that I thought we should use the soft affinity; but I really 
> meant the "effective soft affinity" -- i.e., the union of soft, hard, 
> and cpupools.
> 
Aha... I see it now... Only one more thing: union or intersection? Union
would be nice, especially because there is very little risk of it being
empty, but it may be too broad (if any one of the three is 'all
cpus/nodes', so is the memory affinity). I'd go for the intersection,
with some extra care to avoid it degenerating into the empty set.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

