
Re: [Xen-devel] [PATCH v3 08/14] xen: derive NUMA node affinity from hard and soft CPU affinity



>>> On 18.11.13 at 19:17, Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
> if a domain's NUMA node-affinity (which is what controls
> memory allocations) is provided by the user/toolstack, it
> just is not touched. However, if the user does not say
> anything, leaving it all to Xen, let's compute it in the
> following way:
> 
>  1. cpupool's cpus & hard-affinity & soft-affinity
>  2. if (1) is empty: cpupool's cpus & hard-affinity

Is this really guaranteed to always be non-empty? At least an
ASSERT() to that effect would be nice, as it's not immediately
obvious.
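For illustration only (a sketch, not the patch's actual code: the field and
helper names used here, e.g. v->cpu_hard_affinity, v->cpu_soft_affinity,
cpupool_online_cpumask() and the scratch mask vcpu_mask, are assumed from
the rest of the series), the expectation could be made explicit like this:

    online = cpupool_online_cpumask(d->cpupool);
    for_each_vcpu ( d, v )
    {
        /* Step 2 input: cpupool's cpus & hard affinity. */
        cpumask_and(vcpu_mask, v->cpu_hard_affinity, online);
        cpumask_or(dom_cpumask, dom_cpumask, vcpu_mask);
        /* Step 1 input: additionally restrict by soft affinity. */
        cpumask_and(vcpu_mask, vcpu_mask, v->cpu_soft_affinity);
        cpumask_or(dom_cpumask_soft, dom_cpumask_soft, vcpu_mask);
    }
    /* Make the "step 2 is never empty" assumption explicit. */
    ASSERT(!cpumask_empty(dom_cpumask));
    /* Prefer the narrower soft mask; fall back to the hard one. */
    if ( !cpumask_empty(dom_cpumask_soft) )
        cpumask_copy(dom_cpumask, dom_cpumask_soft);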

> -    if ( !zalloc_cpumask_var(&cpumask) )
> +    if ( !zalloc_cpumask_var(&dom_cpumask) )
>          return;
> -    if ( !alloc_cpumask_var(&online_affinity) )
> +    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )

So you use zalloc_cpumask_var() here ...

>      if ( d->auto_node_affinity )
>      {
> +        /*
> +         * We want the narrowest possible set of pcpus (to get the narrowest
> +         * possible set of nodes). What we need is the cpumask of where the
> +         * domain can run (the union of the hard affinity of all its vcpus),
> +         * and the full mask of where it would prefer to run (the union of
> +         * the soft affinity of all its various vcpus). Let's build them.
> +         */
> +        cpumask_clear(dom_cpumask);
> +        cpumask_clear(dom_cpumask_soft);

... and then clear the masks explicitly here?
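Put differently (sketch only, keeping the patch's variable name), either of
the two variants below is enough on its own; combining them just does the
clearing twice:

    /* Variant A: zeroing allocation, no explicit clear needed. */
    if ( !zalloc_cpumask_var(&dom_cpumask) )
        return;

    /* Variant B: plain allocation plus an explicit clear before use. */
    if ( !alloc_cpumask_var(&dom_cpumask) )
        return;
    cpumask_clear(dom_cpumask);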

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

