
[Xen-devel] [PATCH v5] xen: rtds: only tickle non-already tickled CPUs



When more than one idle VCPU that has the same PCPU as its previous
running core invokes runq_tickle(), they will all tickle the same
PCPU. The tickled PCPU will pick at most one VCPU, i.e., the
highest-priority one, to execute. The other VCPUs will not be
scheduled for a period, even when there is an idle core, making these
VCPUs starve unnecessarily for one period.

Therefore, always make sure that we only tickle PCPUs that have not
been tickled already.

Signed-off-by: Haoran Li <naroahlee@xxxxxxxxx>
Signed-off-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

---
The initial discussion of this patch can be found at
https://lists.xenproject.org/archives/html/xen-devel/2017-02/msg02857.html

Changes in v5:
Revised the comments as Dario suggested.

Changes in v4:
1) Took Dario's suggestions:
   Search new->cpu first for the CPU to tickle.
   This gets rid of the if statement in the previous versions
   (a simplified sketch of this loop is included after the list below).
2) Reworded the comments and commit message.
3) Rebased on the staging branch.
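
To make the v4 change concrete, here is a small, self-contained C sketch of
the new tickling loop: it starts the search at the waking vcpu's previous
CPU, kicks the first idle CPU it finds, and otherwise remembers the CPU
running the latest-deadline (lowest-priority) vcpu. It is only a toy model
of the logic, not Xen code: NR_CPUS, first_or_cycle() and cycle() are
stand-ins for the real cpumask API (cpumask_test_or_cycle() /
cpumask_cycle()), and the idle/deadline arrays are made-up per-CPU state.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

/*
 * Return cpu if it is set in mask, otherwise the next set CPU after it
 * (wrapping around); NR_CPUS means "none left".  Rough analogue of
 * cpumask_test_or_cycle().
 */
static int first_or_cycle(int cpu, const bool mask[NR_CPUS])
{
    if ( mask[cpu] )
        return cpu;
    for ( int i = 1; i <= NR_CPUS; i++ )
        if ( mask[(cpu + i) % NR_CPUS] )
            return (cpu + i) % NR_CPUS;
    return NR_CPUS;
}

/* Next set CPU after cpu, wrapping around; analogue of cpumask_cycle(). */
static int cycle(int cpu, const bool mask[NR_CPUS])
{
    for ( int i = 1; i <= NR_CPUS; i++ )
        if ( mask[(cpu + i) % NR_CPUS] )
            return (cpu + i) % NR_CPUS;
    return NR_CPUS;
}

int main(void)
{
    bool not_tickled[NR_CPUS]  = { true, true, true, true };
    bool idle[NR_CPUS]         = { false, false, true, false };
    uint64_t deadline[NR_CPUS] = { 30, 10, 0, 20 }; /* of each running vcpu */
    uint64_t new_deadline = 15;  /* deadline of the waking vcpu */
    int new_cpu = 1;             /* its previous CPU (new->vcpu->processor) */
    int latest = -1;             /* CPU running the latest-deadline vcpu */

    /*
     * 1) Look for an idle CPU, checking new_cpu first for cache benefit.
     *    The same loop remembers the lowest-priority (latest-deadline) CPU.
     */
    int cpu = first_or_cycle(new_cpu, not_tickled);
    while ( cpu != NR_CPUS )
    {
        if ( idle[cpu] )
        {
            printf("tickle idle cpu %d\n", cpu);
            return 0;
        }
        if ( latest == -1 || deadline[cpu] > deadline[latest] )
            latest = cpu;

        not_tickled[cpu] = false;
        cpu = cycle(cpu, not_tickled);
    }

    /* 2) All CPUs are busy: kick the latest-deadline one if new is more urgent. */
    if ( latest != -1 && new_deadline < deadline[latest] )
        printf("tickle busy cpu %d\n", latest);
    else
        printf("nothing to tickle\n");
    return 0;
}

With this shape, the "check new->cpu first" case falls out of the loop
itself, which is why the separate if statement could be dropped.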

Issues in v2 and v3:
They were not rebased on the latest staging branch and did not address
the comments/issues raised on v1.
Please ignore v2 and v3.
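
As the commit message describes, the key point is that every tickle excludes
CPUs that have already been tickled. The following toy C program (not Xen
code; tickle(), the idle array and the tickled array are invented for
illustration) shows the effect: two vcpus wake up with the same previous
CPU, and because the second search skips already-tickled CPUs it kicks a
different idle CPU instead of piling onto the first one.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 2

static bool tickled[NR_CPUS];                      /* analogue of prv->tickled */
static const bool idle[NR_CPUS] = { true, true };  /* both CPUs are idle */

/*
 * Tickle the first idle CPU that has not been tickled yet, preferring
 * prev_cpu; return the CPU picked, or -1 if there is none.
 */
static int tickle(int prev_cpu)
{
    for ( int i = 0; i < NR_CPUS; i++ )
    {
        int cpu = (prev_cpu + i) % NR_CPUS;
        if ( idle[cpu] && !tickled[cpu] )
        {
            tickled[cpu] = true;
            return cpu;
        }
    }
    return -1;
}

int main(void)
{
    /* Both waking vcpus last ran on CPU 0. */
    printf("vcpu A tickles cpu %d\n", tickle(0));  /* -> 0 */
    printf("vcpu B tickles cpu %d\n", tickle(0));  /* -> 1, not 0 again */
    return 0;
}

In sched_rt.c this exclusion is what the
cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled) in runq_tickle()
provides.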
---
 xen/common/sched_rt.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 39f6bee..0ac5816 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1147,9 +1147,9 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
  * Called by wake() and context_saved()
  * We have a running candidate here, the kick logic is:
  * Among all the cpus that are within the cpu affinity
- * 1) if the new->cpu is idle, kick it. This could benefit cache hit
- * 2) if there are any idle vcpu, kick it.
- * 3) now all pcpus are busy;
+ * 1) if there are any idle CPUs, kick one.
+ *    For cache benefit, we check new->cpu first.
+ * 2) now all pcpus are busy;
  *    among all the running vcpus, pick lowest priority one
  *    if snext has higher priority, kick it.
  *
@@ -1177,17 +1177,13 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
     cpumask_and(&not_tickled, online, new->vcpu->cpu_hard_affinity);
     cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
 
-    /* 1) if new's previous cpu is idle, kick it for cache benefit */
-    if ( is_idle_vcpu(curr_on_cpu(new->vcpu->processor)) )
-    {
-        SCHED_STAT_CRANK(tickled_idle_cpu);
-        cpu_to_tickle = new->vcpu->processor;
-        goto out;
-    }
-
-    /* 2) if there are any idle pcpu, kick it */
-    /* The same loop also find the one with lowest priority */
-    for_each_cpu(cpu, &not_tickled)
+    /*
+     * 1) If there are any idle CPUs, kick one.
+     *    For cache benefit, we first search new->cpu.
+     *    The same loop also finds the one with the lowest priority.
+     */
+    cpu = cpumask_test_or_cycle(new->vcpu->processor, &not_tickled);
+    while ( cpu != nr_cpu_ids )
     {
         iter_vc = curr_on_cpu(cpu);
         if ( is_idle_vcpu(iter_vc) )
@@ -1200,9 +1196,12 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         if ( latest_deadline_vcpu == NULL ||
              iter_svc->cur_deadline > latest_deadline_vcpu->cur_deadline )
             latest_deadline_vcpu = iter_svc;
+
+        cpumask_clear_cpu(cpu, &not_tickled);
+        cpu = cpumask_cycle(cpu, &not_tickled);
     }
 
-    /* 3) candicate has higher priority, kick out lowest priority vcpu */
+    /* 2) candidate has higher priority, kick out the lowest-priority vcpu */
     if ( latest_deadline_vcpu != NULL &&
          new->cur_deadline < latest_deadline_vcpu->cur_deadline )
     {
-- 
1.9.1

