[Xen-devel] [RTDS Patch v2 for Xen4.8] xen: rtds: only tickle non-already tickled CPUs
From: naroahlee <naroahlee@xxxxxxxxx>
Bug Analysis:
When multiple idle VCPUs whose previous running core is the same PCPU
invoke runq_tickle(), they will all tickle that same PCPU. The tickled
PCPU will pick at most one VCPU, i.e., the highest-priority one, to
execute. The other VCPUs will not be scheduled for a period, even when
there is an idle core, making these VCPUs starve unnecessarily for one
period.
Therefore, always make sure that we only tickle PCPUs that have not
been tickled already.
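
To make the scenario concrete, here is a small standalone sketch (plain
userspace C, not Xen code; the prv_tickled bitmask and the tickle_old()/
tickle_new() helpers are made up for illustration). With three idle VCPUs
that all have PCPU 0 as their previous core, the old logic tickles PCPU 0
three times, while consulting a not-already-tickled mask spreads the
wake-ups across idle PCPUs:

/*
 * Toy illustration of the bug (not Xen code): prv_tickled stands in for
 * prv->tickled, and the CPU/VCPU handling is simplified for the example.
 */
#include <stdio.h>

#define NR_CPUS 4

static unsigned long prv_tickled;   /* bitmask of already-tickled PCPUs */

/* Old behaviour: every waking VCPU kicks its previous PCPU if it is idle. */
static int tickle_old(int prev_cpu)
{
    return prev_cpu;                /* all wake-ups pile onto the same PCPU */
}

/* New behaviour: skip PCPUs that are already in the tickled mask. */
static int tickle_new(int prev_cpu)
{
    int cpu = prev_cpu;

    /* check prev_cpu first (cache benefit), then cycle through the rest */
    for ( int i = 0; i < NR_CPUS; i++, cpu = (cpu + 1) % NR_CPUS )
        if ( !(prv_tickled & (1UL << cpu)) )
        {
            prv_tickled |= 1UL << cpu;  /* mark it so later wake-ups skip it */
            return cpu;
        }
    return -1;                      /* every PCPU is already tickled */
}

int main(void)
{
    /* three idle VCPUs, all with PCPU 0 as their previous running core */
    for ( int v = 0; v < 3; v++ )
        printf("vcpu%d: old tickles cpu %d, new tickles cpu %d\n",
               v, tickle_old(0), tickle_new(0));
    return 0;
}

Running it prints cpu 0 three times for the old path, and cpus 0, 1 and 2
for the new one.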
---
xen/common/sched_rt.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 1b30014..012975c 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1144,9 +1144,10 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
* Called by wake() and context_saved()
* We have a running candidate here, the kick logic is:
* Among all the cpus that are within the cpu affinity
- * 1) if the new->cpu is idle, kick it. This could benefit cache hit
- * 2) if there are any idle vcpu, kick it.
- * 3) now all pcpus are busy;
+ * 1) if there are any idle vcpu, kick it.
+ * For cache benefit, we first search new->cpu.
+ *
+ * 2) now all pcpus are busy;
* among all the running vcpus, pick lowest priority one
* if snext has higher priority, kick it.
*
@@ -1174,17 +1175,11 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
cpumask_and(&not_tickled, online, new->vcpu->cpu_hard_affinity);
cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
- /* 1) if new's previous cpu is idle, kick it for cache benefit */
- if ( is_idle_vcpu(curr_on_cpu(new->vcpu->processor)) )
- {
- SCHED_STAT_CRANK(tickled_idle_cpu);
- cpu_to_tickle = new->vcpu->processor;
- goto out;
- }
-
- /* 2) if there are any idle pcpu, kick it */
+ /* 1) if there are any idle pcpu, kick it */
/* The same loop also find the one with lowest priority */
- for_each_cpu(cpu, &not_tickled)
+ /* For cache benefit, we search new->cpu first */
+ cpu = cpumask_test_or_cycle(new->vcpu->processor, &not_tickled);
+ while ( cpu != nr_cpu_ids )
{
iter_vc = curr_on_cpu(cpu);
if ( is_idle_vcpu(iter_vc) )
@@ -1197,9 +1192,12 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
if ( latest_deadline_vcpu == NULL ||
iter_svc->cur_deadline > latest_deadline_vcpu->cur_deadline )
latest_deadline_vcpu = iter_svc;
+
+ cpumask_clear_cpu(cpu, &not_tickled);
+ cpu = cpumask_cycle(cpu, &not_tickled);
}
- /* 3) candicate has higher priority, kick out lowest priority vcpu */
+ /* 2) candidate has higher priority, kick out lowest priority vcpu */
if ( latest_deadline_vcpu != NULL &&
new->cur_deadline < latest_deadline_vcpu->cur_deadline )
{
--
1.9.1
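
(Not part of the patch, just an illustration.) The rewritten loop leans on
cpumask_test_or_cycle() and cpumask_cycle(); as far as I understand them,
the former returns the given CPU when it is set in the mask and otherwise
the next set CPU (wrapping around), the latter returns the next set CPU
strictly after the given one, and both return nr_cpu_ids when the mask is
empty. Below is a minimal userspace model of the resulting visiting order;
next_or_self() and next_cycled() are simplified stand-ins for those helpers,
and NR_CPUS stands in for nr_cpu_ids:

/* Userspace model (not Xen code) of the iteration order of the new loop. */
#include <stdio.h>

#define NR_CPUS 8

/* return cpu if set in mask, else the next set cpu (wrapping), else NR_CPUS */
static int next_or_self(int cpu, unsigned long mask)
{
    for ( int i = 0; i < NR_CPUS; i++ )
    {
        int c = (cpu + i) % NR_CPUS;
        if ( mask & (1UL << c) )
            return c;
    }
    return NR_CPUS;
}

/* return the next set cpu strictly after cpu (wrapping), else NR_CPUS */
static int next_cycled(int cpu, unsigned long mask)
{
    return next_or_self((cpu + 1) % NR_CPUS, mask);
}

int main(void)
{
    unsigned long not_tickled = 0xB5;   /* CPUs 0,2,4,5,7 not yet tickled */
    int prev = 5;                       /* models new->vcpu->processor */

    /* mirrors: cpu = cpumask_test_or_cycle(prev, &not_tickled); while (...) */
    int cpu = next_or_self(prev, not_tickled);
    while ( cpu != NR_CPUS )
    {
        printf("visit cpu %d\n", cpu);
        not_tickled &= ~(1UL << cpu);   /* models cpumask_clear_cpu() */
        cpu = next_cycled(cpu, not_tickled);
    }
    return 0;
}

With not_tickled = {0,2,4,5,7} and the previous CPU being 5, this visits
5, 7, 0, 2, 4: the previous CPU first (for cache benefit), then the
remaining untickled CPUs.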
---
CC: <mengxu@xxxxxxxxxxxxx>
CC: <dario.faggioli@xxxxxxxxxx>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel