[Xen-changelog] [xen master] xen: sched: simplify (and speedup) checking soft-affinity
commit f5eff146efb72ee8724fd13073e6c796fc8d0701
Author: Dario Faggioli <dfaggioli@xxxxxxxx>
AuthorDate: Wed Mar 21 17:17:47 2018 +0000
Commit: George Dunlap <george.dunlap@xxxxxxxxxx>
CommitDate: Wed Mar 21 17:19:08 2018 +0000
xen: sched: simplify (and speedup) checking soft-affinity
Whether or not a vCPU has an effective soft-affinity,
i.e., one that can actually influence how the vCPU is
scheduled, changes very rarely, especially compared to
how often we need to check for it (basically, at every
scheduling decision!).

That can be improved by storing in a per-vCPU flag
(actually, a boolean field in struct vcpu) whether or
not, given what the hard-affinity and soft-affinity
masks look like, soft-affinity should be taken into
account during scheduling decisions.

This saves some cpumask manipulations, which is nice,
considering how frequently they were being done. Note
that we can't get rid of 100% of the cpumask operations
involved in the check, because whether soft-affinity is
effective depends not only on the relationship between
a vCPU's hard and soft-affinity masks, but also on
which pCPUs are online and/or part of the cpupool the
vCPU lives in, and that is rather impractical to cache
in a per-vCPU flag. Still, the overhead is reduced to
"just" one cpumask_subset() (and that only if the newly
introduced flag is true)!
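
To see how the cached flag behaves for a few mask combinations, here is a
minimal standalone sketch. It is not Xen code: plain bitmasks stand in for
cpumask_t, mask_subset()/mask_intersects() are simplified stand-ins for
cpumask_subset()/cpumask_intersects(), and toy_set_affinity() only mirrors
the flag computation that the patch adds to sched_set_affinity().

/*
 * Standalone sketch, not Xen code: plain bitmasks stand in for
 * cpumask_t, and the helpers below are simplified stand-ins for
 * cpumask_subset() and cpumask_intersects().
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
    unsigned long hard_affinity;  /* pCPUs the vCPU may run on */
    unsigned long soft_affinity;  /* pCPUs the vCPU prefers */
    bool soft_aff_effective;      /* cached; recomputed on affinity changes */
};

/* Every pCPU in a is also in b. */
static bool mask_subset(unsigned long a, unsigned long b)
{
    return (a & ~b) == 0;
}

/* a and b share at least one pCPU. */
static bool mask_intersects(unsigned long a, unsigned long b)
{
    return (a & b) != 0;
}

/*
 * Mirrors the flag computation the patch adds to sched_set_affinity():
 * done only when affinity changes, not at every scheduling decision.
 */
static void toy_set_affinity(struct toy_vcpu *v,
                             unsigned long hard, unsigned long soft)
{
    v->hard_affinity = hard;
    v->soft_affinity = soft;
    v->soft_aff_effective = !mask_subset(v->hard_affinity, v->soft_affinity) &&
                            mask_intersects(v->soft_affinity, v->hard_affinity);
}

int main(void)
{
    struct toy_vcpu v;

    /* Soft affinity narrower than hard affinity: it matters. */
    toy_set_affinity(&v, 0xf, 0x3);
    printf("narrower soft: effective = %d\n", v.soft_aff_effective);  /* 1 */

    /* Soft affinity covers all of hard affinity: no effect. */
    toy_set_affinity(&v, 0x3, 0xf);
    printf("covering soft: effective = %d\n", v.soft_aff_effective);  /* 0 */

    /* Disjoint masks: soft affinity cannot be honoured, so it is ignored. */
    toy_set_affinity(&v, 0xc, 0x3);
    printf("disjoint soft: effective = %d\n", v.soft_aff_effective);  /* 0 */

    return 0;
}
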
Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
---
xen/common/schedule.c | 5 +++++
xen/include/xen/sched-if.h | 7 +++----
xen/include/xen/sched.h | 3 +++
3 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 8bea9a203e..343ab6306e 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -869,6 +869,11 @@ void sched_set_affinity(
cpumask_copy(v->cpu_hard_affinity, hard);
if ( soft )
cpumask_copy(v->cpu_soft_affinity, soft);
+
+ v->soft_aff_effective = !cpumask_subset(v->cpu_hard_affinity,
+ v->cpu_soft_affinity) &&
+ cpumask_intersects(v->cpu_soft_affinity,
+ v->cpu_hard_affinity);
}
static int vcpu_set_affinity(
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 65b4538114..9596eae1e2 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -270,10 +270,9 @@ static inline cpumask_t* cpupool_domain_cpumask(struct domain *d)
*/
static inline int has_soft_affinity(const struct vcpu *v)
{
- return !cpumask_subset(cpupool_domain_cpumask(v->domain),
- v->cpu_soft_affinity) &&
- !cpumask_subset(v->cpu_hard_affinity, v->cpu_soft_affinity) &&
- cpumask_intersects(v->cpu_soft_affinity, v->cpu_hard_affinity);
+ return v->soft_aff_effective &&
+ !cpumask_subset(cpupool_domain_cpumask(v->domain),
+ v->cpu_soft_affinity);
}
/*
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cbd50e9867..3303fd9803 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -210,6 +210,9 @@ struct vcpu
bool hcall_compat;
#endif
+ /* Does soft affinity actually play a role (given hard affinity)? */
+ bool soft_aff_effective;
+
/* The CPU, if any, which is holding onto this VCPU's state. */
#define VCPU_CPU_CLEAN (~0u)
unsigned int dirty_cpu;
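
For a caller-side view of what the check now costs per scheduling decision,
here is a similarly hedged sketch, again on plain bitmasks rather than Xen's
cpumask API: toy_has_soft_affinity() mirrors the new has_soft_affinity()
above, while toy_balance_steps() and its names are purely hypothetical
illustrations of how a scheduler might use the result.

/*
 * Caller-side sketch, not Xen code: toy_has_soft_affinity() mirrors the
 * new has_soft_affinity() on plain bitmasks instead of cpumask_t;
 * toy_balance_steps() is a purely hypothetical illustration.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
    unsigned long soft_affinity;  /* preferred pCPUs */
    bool soft_aff_effective;      /* cached by the affinity setter */
};

/* Simplified stand-in for cpumask_subset(): every pCPU in a is in b. */
static bool mask_subset(unsigned long a, unsigned long b)
{
    return (a & ~b) == 0;
}

/*
 * Per-decision check: one boolean test, plus at most one subset test
 * against the cpupool's online pCPUs (and only if the flag is true).
 */
static bool toy_has_soft_affinity(const struct toy_vcpu *v,
                                  unsigned long pool_online_cpus)
{
    return v->soft_aff_effective &&
           !mask_subset(pool_online_cpus, v->soft_affinity);
}

/*
 * A scheduler might use the result to decide whether a separate
 * soft-affinity balancing pass is worth doing at all.
 */
static int toy_balance_steps(const struct toy_vcpu *v,
                             unsigned long pool_online_cpus)
{
    return toy_has_soft_affinity(v, pool_online_cpus) ? 2 : 1;
}

int main(void)
{
    /* vCPU prefers pCPUs 0-1, flag already cached as true. */
    struct toy_vcpu v = { .soft_affinity = 0x3, .soft_aff_effective = true };

    /* Pool has pCPUs 0-3 online: soft affinity restricts, do both passes. */
    printf("steps = %d\n", toy_balance_steps(&v, 0xf));  /* 2 */

    /* Pool only has the preferred pCPUs online: one pass is enough. */
    printf("steps = %d\n", toy_balance_steps(&v, 0x3));  /* 1 */

    return 0;
}
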
--
generated by git-patchbot for /home/xen/git/xen.git#master
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog