
Re: [Xen-devel] [PATCH v1 2/3] sched_credit2.c : runqueue_per_core code



On 03/12/2015 02:57 PM, Uma Sharma wrote:
> This patch does the following:
> -Insertion of runqueue_per_core code
> -Boot parameter creation to select the runqueue
> 
> Signed-off-by : Uma Sharma <uma.sharma523@xxxxxxxxx>
> ---
>  xen/common/sched_credit2.c | 39 ++++++++++++++++++++++++++++++++-------
>  1 file changed, 32 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index ad0a5d4..c45df87 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -85,8 +85,8 @@
>   * to a small value, and a fixed credit is added to everyone.
>   *
>   * The plan is for all cores that share an L2 will share the same
> - * runqueue.  At the moment, there is one global runqueue for all
> - * cores.
> + * runqueue.  At the moment, the code lets the user choose which runqueue
> + * layout to use; the default is one runqueue per core.
>   */
>  
>  /*
> @@ -161,10 +161,16 @@
>   */
>  #define __CSFLAG_runq_migrate_request 3
>  #define CSFLAG_runq_migrate_request (1<<__CSFLAG_runq_migrate_request)
> -
> +/* CREDIT2_OPT_RUNQUEUE_*: selects which runqueue layout is used.
> + */
> +#define CREDIT2_OPT_RUNQUEUE_CORE 1
> +#define CREDIT2_OPT_RUNQUEUE_SOCKET 2
>  
>  int opt_migrate_resist=500;
>  integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
> +static char __initdata opt_credit2_runqueue_string[10] = "core";
> +string_param("credit2_runqueue", opt_credit2_runqueue_string);
> +int opt_credit2_runqueue=CREDIT2_OPT_RUNQUEUE_CORE;
>  
>  /*
>   * Useful macros
> @@ -1940,10 +1946,14 @@ static void init_pcpu(const struct scheduler *ops, 
> int cpu)
>  
>      /* Figure out which runqueue to put it in */
>      /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to 
> runqueue 0. */
> -    if ( cpu == 0 )
> -        rqi = 0;
> +    if ( opt_credit2_runqueue == CREDIT2_OPT_RUNQUEUE_SOCKET )
> +    {
> +        rqi = (cpu) ? cpu_to_socket(cpu) : boot_cpu_to_socket();
> +    }
>      else
> -        rqi = cpu_to_socket(cpu);
> +    {
> +        rqi = (cpu) ? cpu_to_core(cpu) : boot_cpu_to_core();
> +    }
>  
>      if ( rqi < 0 )
>      {
> @@ -1988,7 +1998,7 @@ csched2_alloc_pdata(const struct scheduler *ops, int 
> cpu)
>  {
>      /* Check to see if the cpu is online yet */
>      /* Note: cpu 0 doesn't get a STARTING callback */
> -    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
> +    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 || cpu_to_core(cpu) >= 0 )
>          init_pcpu(ops, cpu);
>      else
>          printk("%s: cpu %d not online yet, deferring initializatgion\n",
> @@ -2109,6 +2119,21 @@ csched2_init(struct scheduler *ops)
>          opt_load_window_shift = LOADAVG_WINDOW_SHIFT_MIN;
>      }
>  
> +    /* Defines the runqueue used. */
> +    if ( !strcmp(opt_credit2_runqueue_string, "socket") )
> +    {
> +        opt_credit2_runqueue=CREDIT2_OPT_RUNQUEUE_SOCKET;
> +        printk("Runqueue : runqueue_per_socket\n");
> +    }
> +    else if ( !strcmp(opt_credit2_runqueue_string, "core") )
> +    {
> +        opt_credit2_runqueue=CREDIT2_OPT_RUNQUEUE_CORE;
> +        printk("Runqueue : runqueue_per_core\n");
> +    }
> +    else {
> +        printk("Runqueue: credit2_runqueue entered incorrect Continuing with 
> core\n");
> +    }

I think I would do something like this here:

opt_credit2_runqueue=CREDIT2_OPT_RUNQUEUE_CORE;
if ( !strcmp(opt_credit2_runqueue_string, "socket") )
    opt_credit2_runqueue=CREDIT2_OPT_RUNQUEUE_SOCKET;
else if ( strcmp(opt_credit2_runqueue_string, "core") )
    printk("WARNING, unrecognized credit2_runqueue option %s, using core\n",
           opt_credit2_runqueue_string);

printk("Runqueue: using %s\n",
       opt_credit2_runqueue==CREDIT2_OPT_RUNQUEUE_CORE ? "core" : "socket");

Other than that, and with Jan's comments, looks good to me.  Thanks, Uma!

 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

