
[Xen-devel] [PATCH RESEND 06/12] xen: numa-sched: domain node-affinity always comes from vcpu node-affinity



Now that we have per-vcpu node-affinity, we can do the
following:

 * always consider the domain's node-affinity as
   'automatically computed';

 * always construct it out of the domain's vcpus' own
   node-affinity.

This means that, to change the node-affinity of a domain,
one needs to modify the node-affinities of all the domain's
vcpus.
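
Conceptually, that looks like the sketch below (illustrative
only, and assuming the per-vcpu mask introduced earlier in the
series is reachable as v->node_affinity; the actual code in
domain_update_node_affinity() below goes through the vcpus'
cpumasks and then converts cpus to nodes):

    nodemask_t domain_affinity = NODE_MASK_NONE;
    struct vcpu *v;

    /* The domain's node-affinity is just the union (bitwise OR)
     * of all its vcpus' node-affinities. (v->node_affinity is
     * assumed here, per the earlier patches in this series.) */
    for_each_vcpu ( d, v )
        nodes_or(domain_affinity, domain_affinity, v->node_affinity);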

This change modifies domain_set_node_affinity() so that it
does exactly that, i.e., it goes through all the domain's
vcpus and sets their node-affinity to the specified mask.
This means that, seen from the outside, nothing changes: you
call domain_set_node_affinity(), passing a nodemask_t to it,
and you get (1) that mask as the node-affinity of the domain
(which basically determines on what NUMA nodes its memory is
allocated), and (2) all the domain's vcpus preferring to run
on the pcpus of the nodes in that mask, exactly as it was
before this commit.
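
For instance, a caller (such as the XEN_DOMCTL_setnodeaffinity
handler) can keep doing something like the sketch below. This
is just an illustration of the unchanged interface, not code
from this patch:

    nodemask_t new_affinity = NODE_MASK_NONE;
    int rc;

    node_set(0, new_affinity);
    node_set(1, new_affinity);

    /* After this patch, this sets the node-affinity of each of
     * d's vcpus to nodes {0,1}; d->node_affinity then follows
     * automatically, via domain_update_node_affinity(). */
    rc = domain_set_node_affinity(d, &new_affinity);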

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
---
 xen/common/domain.c       |   48 ++++++++++++++-------------------------------
 xen/common/sched_credit.c |    3 +--
 xen/include/xen/sched.h   |    2 --
 3 files changed, 16 insertions(+), 37 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8d2ff49..366d9b9 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -228,7 +228,6 @@ struct domain *domain_create(
 
     spin_lock_init(&d->node_affinity_lock);
     d->node_affinity = NODE_MASK_ALL;
-    d->auto_node_affinity = 1;
 
     spin_lock_init(&d->shutdown_lock);
     d->shutdown_code = -1;
@@ -403,18 +402,13 @@ void domain_update_node_affinity(struct domain *d)
     }
 
     /*
-     * If d->auto_node_affinity is true, the domain's node-affinity mask
-     * (d->node_affinity) is automaically computed from all the domain's
-     * vcpus' vcpu-affinity masks (the union of which we have just built
-     * above in cpumask). OTOH, if d->auto_node_affinity is false, we
-     * must leave the node-affinity of the domain alone.
+     * A domain's node-affinity is just the union of all the domain's vcpus'
+     * numa-affinity masks, which is exactly what we have in cpumask
+     * (although we need to convert it from cpumask to nodemask, of course).
      */
-    if ( d->auto_node_affinity )
-    {
-        nodes_clear(d->node_affinity);
-        for_each_cpu ( cpu, cpumask )
-            node_set(cpu_to_node(cpu), d->node_affinity);
-    }
+    nodes_clear(d->node_affinity);
+    for_each_cpu ( cpu, cpumask )
+        node_set(cpu_to_node(cpu), d->node_affinity);
 
     sched_set_node_affinity(d, &d->node_affinity);
 
@@ -425,33 +419,21 @@ void domain_update_node_affinity(struct domain *d)
 }
 
 
+/* Sets the numa-affinity (via vcpu_set_node_affinity()) for all
+ * the vcpus of the domain. */
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
 {
-    /* Being affine with no nodes is just wrong */
-    if ( nodes_empty(*affinity) )
-        return -EINVAL;
-
-    spin_lock(&d->node_affinity_lock);
+    struct vcpu *v;
+    int rc = 0;
 
-    /*
-     * Being/becoming explicitly affine to all nodes is not particularly
-     * useful. Let's take it as the `reset node affinity` command.
-     */
-    if ( nodes_full(*affinity) )
+    for_each_vcpu ( d, v )
     {
-        d->auto_node_affinity = 1;
-        goto out;
+        rc = vcpu_set_node_affinity(v, affinity);
+        if ( rc )
+            break;
     }
 
-    d->auto_node_affinity = 0;
-    d->node_affinity = *affinity;
-
-out:
-    spin_unlock(&d->node_affinity_lock);
-
-    domain_update_node_affinity(d);
-
-    return 0;
+    return rc;
 }
 
 
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 28dafcf..c53a36b 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -311,8 +311,7 @@ static inline int __vcpu_has_node_affinity(const struct vcpu *vc,
     const struct domain *d = vc->domain;
     const struct csched_dom *sdom = CSCHED_DOM(d);
 
-    if ( d->auto_node_affinity
-         || cpumask_full(sdom->node_affinity_cpumask)
+    if ( cpumask_full(sdom->node_affinity_cpumask)
          || !cpumask_intersects(sdom->node_affinity_cpumask, mask) )
         return 0;
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 732d6b6..d8e4735 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -325,8 +325,6 @@ struct domain
     /* Does this guest need iommu mappings? */
     bool_t           need_iommu;
 #endif
-    /* is node-affinity automatically computed? */
-    bool_t           auto_node_affinity;
     /* Is this guest fully privileged (aka dom0)? */
     bool_t           is_privileged;
     /* Which guest this guest has privileges on */

