[Xen-devel] [PATCH v2] Xen sched: Fix multiple runqueues in credit2
This patch addresses the issue of the Xen credit2 scheduler creating only one
vCPU run queue on systems with multiple physical processors; it should create
one run queue per physical processor. CPU 0 does not get a STARTING callback,
so it is hard coded to run queue 0. At the time this happens, socket
information is not yet available for CPU 0. Socket information is available
for each other CPU when it gets its STARTING callback (by which time it is
also available for CPU 0), so each CPU is assigned to a run queue based on
its socket.
---
Changes from v1:
 * moved comments to the top of the section in one long comment block
 * collapsed code to improve readability
 * fixed else-if indentation style
 * updated comment about the runqueue plan

(For illustration, a standalone sketch of the run queue selection logic is
appended after the patch.)
---
 xen/common/sched_credit2.c | 41 +++++++++++++++++++++++++++--------------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 4e68375..3ff46a3 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -85,8 +85,8 @@
  * to a small value, and a fixed credit is added to everyone.
  *
  * The plan is for all cores that share an L2 will share the same
- * runqueue.  At the moment, there is one global runqueue for all
- * cores.
+ * runqueue.  At the moment, all cores that share a socket share the same
+ * runqueue.
  */
 
 /*
@@ -1945,6 +1945,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
 static void init_pcpu(const struct scheduler *ops, int cpu)
 {
     int rqi;
+    int cpu0_socket;
+    int cpu_socket;
     unsigned long flags;
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_runqueue_data *rqd;
@@ -1959,15 +1961,26 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
         return;
     }
 
-    /* Figure out which runqueue to put it in */
+    /*
+     * Choose which run queue to add cpu to based on its socket.
+     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
+     * callback and socket information is not yet available for it).
+     * If cpu is on the same socket as CPU 0, add it to run queue 0 with CPU 0.
+     * Else if cpu is on socket 0, add it to a run queue based on the socket
+     * CPU 0 is actually on.
+     * Else add it to a run queue based on its own socket.
+     */
+    rqi = 0;
+    cpu_socket = cpu_to_socket(cpu);
+    cpu0_socket = cpu_to_socket(0);
 
-    /* Figure out which runqueue to put it in */
-    /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
-    if ( cpu == 0 )
-        rqi = 0;
+    if ( cpu == 0 || cpu_socket == cpu0_socket )
+        rqi = 0;
+    else if ( cpu_socket == 0 )
+        rqi = cpu0_socket;
     else
-        rqi = cpu_to_socket(cpu);
+        rqi = cpu_socket;
 
     if ( rqi < 0 )
     {
@@ -2010,13 +2023,11 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
 static void *
 csched_alloc_pdata(const struct scheduler *ops, int cpu)
 {
-    /* Check to see if the cpu is online yet */
-    /* Note: cpu 0 doesn't get a STARTING callback */
-    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
+    /* This function is only for calling init_pcpu on CPU 0
+     * because it does not get a STARTING callback */
+
+    if ( cpu == 0 )
         init_pcpu(ops, cpu);
-    else
-        printk("%s: cpu %d not online yet, deferring initializatgion\n",
-               __func__, cpu);
 
     return (void *)1;
 }
@@ -2072,6 +2083,8 @@ csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 static int
 csched_cpu_starting(int cpu)
 {
+    /* This function is for calling init_pcpu on every CPU, except for CPU 0 */
+
     struct scheduler *ops;
 
     /* Hope this is safe from cpupools switching things around.  :-) */
-- 
1.7.10.4
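
For illustration only: below is a minimal, standalone sketch (plain C, not
Xen code) of the run queue selection rule this patch adds to init_pcpu().
cpu_to_socket() is stubbed here with a made-up topology in which CPU 0 sits
on socket 1 so that the "swap" branches are exercised, and pick_runqueue()
is a hypothetical helper that mirrors the patched logic.

#include <stdio.h>

/* Hypothetical topology stub (not Xen's cpu_to_socket): sockets hold four
 * consecutive CPUs, and the boot CPU (CPU 0) happens to live on socket 1. */
static int cpu_to_socket(int cpu)
{
    static const int socket_of[8] = { 1, 1, 1, 1, 0, 0, 0, 0 };
    return socket_of[cpu];
}

/* Mirror of the selection logic added to init_pcpu():
 *  - CPU 0, and any CPU sharing its socket, goes to run queue 0;
 *  - CPUs on socket 0 take the run queue index CPU 0's socket would have used;
 *  - every other CPU uses its own socket number. */
static int pick_runqueue(int cpu)
{
    int cpu_socket  = cpu_to_socket(cpu);
    int cpu0_socket = cpu_to_socket(0);

    if ( cpu == 0 || cpu_socket == cpu0_socket )
        return 0;
    else if ( cpu_socket == 0 )
        return cpu0_socket;
    else
        return cpu_socket;
}

int main(void)
{
    for ( int cpu = 0; cpu < 8; cpu++ )
        printf("cpu %d (socket %d) -> runqueue %d\n",
               cpu, cpu_to_socket(cpu), pick_runqueue(cpu));
    return 0;
}

Compiled and run, this prints that CPUs 0-3 (socket 1) land on run queue 0
while CPUs 4-7 (socket 0) land on run queue 1, i.e. one run queue per socket
for the assumed topology.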