
Re: [Xen-devel] [PATCH v2 06/16] xen: sched: make space for cpu_soft_affinity



On 13/11/13 19:11, Dario Faggioli wrote:
Before this change, each vcpu had its own vcpu-affinity
(in v->cpu_affinity), representing the set of pcpus where
the vcpu is allowed to run. Since NUMA-aware scheduling
was introduced, the (credit1 only, for now) scheduler has
also tried, as much as it can, to run all the vcpus of a
domain on one of the nodes that constitute the domain's
node-affinity.

The idea here is to make the mechanism more general by:
  * allowing this 'preference' for some pcpus/nodes to be
    expressed on a per-vcpu basis, rather than for the domain
    as a whole. That is to say, each vcpu should have its own
    set of preferred pcpus/nodes, instead of it being the
    very same for all the vcpus of the domain;
  * generalizing the idea of 'preferred pcpus' beyond NUMA
    awareness and support. That is to say, independently of
    whether it is (mostly) useful on NUMA systems, it should
    be possible to specify, for each vcpu, a set of pcpus where
    it prefers to run (in addition to, and possibly unrelated
    to, the set of pcpus where it is allowed to run).

We will call this set of *preferred* pcpus the vcpu's
soft affinity, and this change introduces, allocates, frees
and initializes the data structure required to host it in
struct vcpu (cpu_soft_affinity).
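
To make the intended use concrete: a scheduler honouring both
masks would typically prefer the intersection of soft and hard
affinity, and fall back to the hard mask alone when that
intersection is empty. A minimal sketch (nothing in this patch
does this, the helper name below is made up, and it is patch 10
in this series that actually starts using the soft mask):

static void vcpu_effective_affinity(const struct vcpu *v, cpumask_t *mask)
{
    /* Prefer pcpus that are both allowed (hard) and preferred (soft)... */
    cpumask_and(mask, v->cpu_soft_affinity, v->cpu_affinity);

    /* ...but a preference must never override where we may run at all. */
    if ( cpumask_empty(mask) )
        cpumask_copy(mask, v->cpu_affinity);
}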

The new field is not used anywhere yet, so there is no
functional change yet.

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

The breakdown of this in the series doesn't make much sense to me -- I would have folded this one and patch 10 (use soft affinity instead of node affinity) together, and put it in after patch 07 (s/affinity/hard_affinity/g;).

But the code itself is fine, and time is short, so:

Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

---
Changes from v1:
  * this patch does something similar to what, in v1, was
    being done in "5/12 xen: numa-sched: make space for
    per-vcpu node-affinity"
---
  xen/common/domain.c     |    3 +++
  xen/common/keyhandler.c |    2 ++
  xen/common/schedule.c   |    2 ++
  xen/include/xen/sched.h |    3 +++
  4 files changed, 10 insertions(+)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 2cbc489..c33b876 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -128,6 +128,7 @@ struct vcpu *alloc_vcpu(
      if ( !zalloc_cpumask_var(&v->cpu_affinity) ||
           !zalloc_cpumask_var(&v->cpu_affinity_tmp) ||
           !zalloc_cpumask_var(&v->cpu_affinity_saved) ||
+         !zalloc_cpumask_var(&v->cpu_soft_affinity) ||
           !zalloc_cpumask_var(&v->vcpu_dirty_cpumask) )
          goto fail_free;
@@ -159,6 +160,7 @@ struct vcpu *alloc_vcpu(
          free_cpumask_var(v->cpu_affinity);
          free_cpumask_var(v->cpu_affinity_tmp);
          free_cpumask_var(v->cpu_affinity_saved);
+        free_cpumask_var(v->cpu_soft_affinity);
          free_cpumask_var(v->vcpu_dirty_cpumask);
          free_vcpu_struct(v);
          return NULL;
@@ -737,6 +739,7 @@ static void complete_domain_destroy(struct rcu_head *head)
              free_cpumask_var(v->cpu_affinity);
              free_cpumask_var(v->cpu_affinity_tmp);
              free_cpumask_var(v->cpu_affinity_saved);
+            free_cpumask_var(v->cpu_soft_affinity);
              free_cpumask_var(v->vcpu_dirty_cpumask);
              free_vcpu_struct(v);
          }
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 8e4b3f8..33c9a37 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -298,6 +298,8 @@ static void dump_domains(unsigned char key)
              printk("dirty_cpus=%s ", tmpstr);
              cpuset_print(tmpstr, sizeof(tmpstr), v->cpu_affinity);
              printk("cpu_affinity=%s\n", tmpstr);
+            cpuset_print(tmpstr, sizeof(tmpstr), v->cpu_soft_affinity);
+            printk("cpu_soft_affinity=%s\n", tmpstr);
              printk("    pause_count=%d pause_flags=%lx\n",
                     atomic_read(&v->pause_count), v->pause_flags);
              arch_dump_vcpu_info(v);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0f45f07..5731622 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -198,6 +198,8 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
      else
          cpumask_setall(v->cpu_affinity);
+    cpumask_setall(v->cpu_soft_affinity);
+
      /* Initialise the per-vcpu timers. */
      init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
                 v, v->processor);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cbdf377..7e00caf 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -198,6 +198,9 @@ struct vcpu
      /* Used to restore affinity across S3. */
      cpumask_var_t    cpu_affinity_saved;
+    /* Bitmask of CPUs on which this VCPU prefers to run. */
+    cpumask_var_t    cpu_soft_affinity;
+
      /* Bitmask of CPUs which are holding onto this VCPU's state. */
      cpumask_var_t    vcpu_dirty_cpumask;
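

For anyone skimming the hunks: the domain.c changes are just the
usual cpumask_var_t discipline, applied to the new field. Condensed
into one place (a paraphrase with made-up function names, not code
from the tree), the lifecycle being extended is:

static int vcpu_soft_affinity_setup(struct vcpu *v)
{
    /* alloc_vcpu(): allocate a zeroed mask, bail out cleanly on failure. */
    if ( !zalloc_cpumask_var(&v->cpu_soft_affinity) )
        return -ENOMEM;

    /* sched_init_vcpu(): all bits set means "no particular preference". */
    cpumask_setall(v->cpu_soft_affinity);

    return 0;
}

static void vcpu_soft_affinity_teardown(struct vcpu *v)
{
    /* alloc_vcpu()'s error path and complete_domain_destroy(). */
    free_cpumask_var(v->cpu_soft_affinity);
}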

