Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a runqueue
On 29.04.20 19:36, Dario Faggioli wrote:
> In Credit2 CPUs (can) share runqueues, depending on the topology. For
> instance, with per-socket runqueues (the default) all the CPUs that
> are part of the same socket share a runqueue.
>
> On platforms with a huge number of CPUs per socket, that could be a
> problem. An example is AMD EPYC2 servers, where we can have up to 128
> CPUs in a socket.
>
> It is of course possible to define other, still topology-based,
> runqueue arrangements (e.g., per-LLC, per-DIE, etc). But that may
> still result in runqueues with too many CPUs on other/future
> platforms.
>
> Therefore, let's set a limit to the max number of CPUs that can share
> a Credit2 runqueue. The actual value is configurable (at boot time),
> the default being 16. If, for instance, there are more than 16 CPUs
> in a socket, they'll be split among two (or more) runqueues.

Did you think about balancing the runqueues regarding the number of
cpus? E.g., in case of the max being 16 and having 20 cpus, put 10 in
each runqueue? I know this will need more logic, as cpus are added one
by one, but the result would be much better IMO (see the sketch at the
end of this mail for the arithmetic I have in mind).

> Note: with core scheduling enabled, this parameter sets the max
> number of *scheduling resources* that can share a runqueue.
> Therefore, with granularity set to core (and assuming 2 threads per
> core), we will have at most 16 cores per runqueue, which corresponds
> to 32 threads. But that is fine, considering how core scheduling
> works.
>
> Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
> ---
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Cc: George Dunlap <george.dunlap@xxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Juergen Gross <jgross@xxxxxxxx>
> ---
>  xen/common/sched/cpupool.c |    2 -
>  xen/common/sched/credit2.c |  104 ++++++++++++++++++++++++++++++++++++++++++--
>  xen/common/sched/private.h |    2 +
>  3 files changed, 103 insertions(+), 5 deletions(-)
>
> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
> index d40345b585..0227457285 100644
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -37,7 +37,7 @@ static cpumask_t cpupool_locked_cpus;
>  static DEFINE_SPINLOCK(cpupool_lock);
>
> -static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
> +enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;

Please don't use the global option value, but the per-cpupool one.

>  static unsigned int __read_mostly sched_granularity = 1;
>
>  #ifdef CONFIG_HAS_SCHED_GRANULARITY

Shouldn't you mask away siblings not in the cpupool?

Again, local cpupool only!


Juergen
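
Just to illustrate the arithmetic I mean above, here is a minimal
standalone sketch (the helper name and the numbers are made up for the
example; this is not code from the patch):

#include <stdio.h>

/*
 * Illustrative only: split nr_cpus evenly across the minimum number
 * of runqueues that respects the max_cpus_runqueue limit, instead of
 * filling runqueues up to the limit one by one.
 */
static unsigned int cpus_per_runqueue(unsigned int nr_cpus,
                                      unsigned int max_cpus_runqueue)
{
    /* Minimum number of runqueues needed to honour the limit. */
    unsigned int nr_runqs = (nr_cpus + max_cpus_runqueue - 1) /
                            max_cpus_runqueue;

    /* Spread the cpus evenly over those runqueues. */
    return (nr_cpus + nr_runqs - 1) / nr_runqs;
}

int main(void)
{
    printf("%u\n", cpus_per_runqueue(20, 16));  /* 2 runqueues of 10 */
    printf("%u\n", cpus_per_runqueue(128, 16)); /* 8 runqueues of 16 */
    printf("%u\n", cpus_per_runqueue(40, 16));  /* 3 runqueues: 14+14+12 */
    return 0;
}

With this, 20 cpus and a limit of 16 end up as 10+10 instead of 16+4.
As said, the real logic needs more than that, since cpus are added one
by one at boot, so the target size would have to be derived from the
expected number of cpus per socket (or recomputed as cpus come up).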