
Re: [Xen-devel] [PATCH v3 08/14] xen: derive NUMA node affinity from hard and soft CPU affinity



On mar, 2013-11-19 at 16:20 +0000, Jan Beulich wrote:
> >>> On 18.11.13 at 19:17, Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
> > if a domain's NUMA node-affinity (which is what controls
> > memory allocations) is provided by the user/toolstack, it
> > just is not touched. However, if the user does not say
> > anything, leaving it all to Xen, let's compute it in the
> > following way:
> > 
> >  1. cpupool's cpus & hard-affinity & soft-affinity
> >  2. if (1) is empty: cpupool's cpus & hard-affinity
> 
> Is this really guaranteed to always be non-empty? At least an
> ASSERT() to that effect would be nice, as it's not immediately
> obvious.
> 
I think it is, based on how cpupools and hard affinity interact, even
before this series (where hard affinity is v->cpu_affinity, the only
per-vcpu affinity we have).

For instance, when you move a domain to a new cpupool, sched_move_domain()
always resets v->cpu_affinity to "all" for all the domain's vcpus.
Similarly, when removing cpus from a cpupool, any v->cpu_affinity that
becomes empty gets reset to "all" too (see cpu_disable_scheduler()). Xen
also uses "all" as the v->cpu_affinity of any vcpu that, at domain
creation time, has an affinity with an empty intersection with the
cpupool the domain is being created in.

So, yes, I really think (2) is guaranteed to be non-empty, and yes, I
can add an ASSERT there.
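
To make the two-step computation and the ASSERT concrete, here is a
minimal standalone sketch of the logic (a plain integer bitmask stands
in for cpumask_t, and all the names are illustrative, not the actual
Xen code):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* One bit per pcpu; a plain integer stands in for Xen's cpumask_t. */
typedef uint64_t mask_t;

/*
 * Mask from which the domain's node affinity is derived:
 *   1. cpupool's cpus & hard affinity & soft affinity
 *   2. if (1) is empty: cpupool's cpus & hard affinity
 * Step (2) is what the invariants above keep non-empty, hence the assert.
 */
static mask_t node_affinity_cpus(mask_t pool_cpus, mask_t hard, mask_t soft)
{
    mask_t hard_mask = pool_cpus & hard;
    mask_t soft_mask = hard_mask & soft;

    assert(hard_mask != 0);
    return soft_mask ? soft_mask : hard_mask;
}

int main(void)
{
    /* Soft affinity disjoint from the cpupool: we fall back to (2). */
    printf("%#llx\n", (unsigned long long)
           node_affinity_cpus(0x0f, 0xff, 0xf0));   /* prints 0xf */
    return 0;
}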

> > -    if ( !zalloc_cpumask_var(&cpumask) )
> > +    if ( !zalloc_cpumask_var(&dom_cpumask) )
> >          return;
> > -    if ( !alloc_cpumask_var(&online_affinity) )
> > +    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
> 
> So you use zalloc_cpumask_var() here ...
> 
> >      if ( d->auto_node_affinity )
> >      {
> > +        /*
> > +         * We want the narrowest possible set of pcpus (to get the narrowest
> > +         * possible set of nodes). What we need is the cpumask of where the
> > +         * domain can run (the union of the hard affinity of all its vcpus),
> > +         * and the full mask of where it would prefer to run (the union of
> > +         * the soft affinity of all its various vcpus). Let's build them.
> > +         */
> > +        cpumask_clear(dom_cpumask);
> > +        cpumask_clear(dom_cpumask_soft);
> 
> ... and then clear the masks explicitly here?
> 
Aha, right... I probably got a bit lost while reshuffling things. :-)

I'll ditch these two cpumask_clear().
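
For reference, after dropping them the allocation part of the hunk would
read roughly as below (just a sketch based on the quoted diff; the
cleanup on the second allocation failing is my assumption, I'll double
check it against the actual error path):

    if ( !zalloc_cpumask_var(&dom_cpumask) )
        return;
    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
    {
        free_cpumask_var(dom_cpumask);
        return;
    }
    /* Both masks come back zeroed from zalloc_cpumask_var(),
     * so no explicit cpumask_clear() is needed. */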

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

