
Re: [Xen-devel] [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues



On 10/08/2015 02:52 PM, Dario Faggioli wrote:
Idle vCPUs should never be explicitly inserted
into any scheduler's runqueue. In fact, they are
put into execution immediately (either at boot, or
when a pCPU is assigned to a cpupool), and it is
the first scheduling decision that, by preempting
them, 'parks' them in the queue.

Credit2 and RTDS already intercept and forbid such
insertions in their insert_vcpu() implementations.
Credit1 does so implicitly, as the idle vCPU is
always already running when insert_vcpu() is called,
and hence is never put in the runqueue.

Let's make this *always* explicit, as that simplifies
things quite a bit. For instance, we can now
introduce some BUG_ON() guards, with the twofold
objective of making the assumption clear, and of
catching misuse and bugs.

The check for whether we would be inserting an idle
vCPU into a queue now happens, once and for all
schedulers, in generic code at vCPU initialization
time, and we can simply avoid trying (and always
failing!) to do so during cpupool manipulations.

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

Reviewed-by: Juergen Gross <jgross@xxxxxxxx>

---
Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
---
  xen/common/sched_credit.c  |    2 ++
  xen/common/sched_credit2.c |   25 ++++++++++---------------
  xen/common/sched_rt.c      |    4 +---
  xen/common/schedule.c      |   21 +++++++++++----------
  4 files changed, 24 insertions(+), 28 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index fccb368..fc447a7 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -905,6 +905,8 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
      struct csched_vcpu *svc = vc->sched_priv;
      spinlock_t *lock;

+    BUG_ON( is_idle_vcpu(vc) );
+
      lock = vcpu_schedule_lock_irq(vc);

      if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 912e1a2..234f798 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -870,30 +870,25 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
  {
      struct csched2_vcpu *svc = vc->sched_priv;
      struct csched2_dom * const sdom = svc->sdom;
+    spinlock_t *lock;

      printk("%s: Inserting %pv\n", __func__, vc);

-    /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
-     * been called for that cpu.
-     */
-    if ( ! is_idle_vcpu(vc) )
-    {
-        spinlock_t *lock;
+    BUG_ON(is_idle_vcpu(vc));

-        /* FIXME: Do we need the private lock here? */
-        list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
+    /* FIXME: Do we need the private lock here? */
+    list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);

-        /* Add vcpu to runqueue of initial processor */
-        lock = vcpu_schedule_lock_irq(vc);
+    /* Add vcpu to runqueue of initial processor */
+    lock = vcpu_schedule_lock_irq(vc);

-        runq_assign(ops, vc);
+    runq_assign(ops, vc);

-        vcpu_schedule_unlock_irq(lock, vc);
+    vcpu_schedule_unlock_irq(lock, vc);

-        sdom->nr_vcpus++;
+    sdom->nr_vcpus++;

-        SCHED_STAT_CRANK(vcpu_insert);
-    }
+    SCHED_STAT_CRANK(vcpu_insert);

      CSCHED2_VCPU_CHECK(vc);
  }
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 1086399..37a32a4 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -624,9 +624,7 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
      s_time_t now = NOW();
      spinlock_t *lock;

-    /* not addlocate idle vcpu to dom vcpu list */
-    if ( is_idle_vcpu(vc) )
-        return;
+    BUG_ON( is_idle_vcpu(vc) );

      lock = vcpu_schedule_lock_irq(vc);
      if ( now >= svc->cur_deadline )
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 9aa209d..55daf73 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -240,20 +240,22 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
      init_timer(&v->poll_timer, poll_timer_fn,
                 v, v->processor);

-    /* Idle VCPUs are scheduled immediately. */
+    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
+    if ( v->sched_priv == NULL )
+        return 1;
+
+    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
+
+    /* Idle vCPUs are scheduled immediately, and are never inserted in a runqueue. */
      if ( is_idle_domain(d) )
      {
          per_cpu(schedule_data, v->processor).curr = v;
          v->is_running = 1;
      }
-
-    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
-
-    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
-    if ( v->sched_priv == NULL )
-        return 1;
-
-    SCHED_OP(DOM2OP(d), insert_vcpu, v);
+    else
+    {
+        SCHED_OP(DOM2OP(d), insert_vcpu, v);
+    }

      return 0;
  }
@@ -1514,7 +1516,6 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
      ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
      per_cpu(schedule_data, cpu).sched_priv = ppriv;
      SCHED_OP(new_ops, tick_resume, cpu);
-    SCHED_OP(new_ops, insert_vcpu, idle);

      SCHED_OP(old_ops, free_vdata, vpriv_old);
      SCHED_OP(old_ops, free_pdata, ppriv_old, cpu);


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
