
Re: [Xen-devel] [PATCH 1/4] xen: sched: avoid dumping duplicate information



On Thu, Jun 25, 2015 at 1:15 PM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> When dumping scheduling information (debug key 'r'), what
> we print as 'Idle cpupool' is pretty much the same as what
> we print immediately after as 'Cpupool0'. In fact, if there
> are no pCPUs outside of any cpupool, it is exactly the
> same.
>
> If there are free pCPUs, there is some valuable information,
> but still a lot of duplication:
>
>  (XEN) Online Cpus: 0-15
>  (XEN) Free Cpus: 8
>  (XEN) Idle cpupool:
>  (XEN) Scheduler: SMP Credit Scheduler (credit)
>  (XEN) info:
>  (XEN)   ncpus              = 13
>  (XEN)   master             = 0
>  (XEN)   credit             = 3900
>  (XEN)   credit balance     = 45
>  (XEN)   weight             = 1280
>  (XEN)   runq_sort          = 11820
>  (XEN)   default-weight     = 256
>  (XEN)   tslice             = 30ms
>  (XEN)   ratelimit          = 1000us
>  (XEN)   credits per msec   = 10
>  (XEN)   ticks per tslice   = 3
>  (XEN)   migration delay    = 0us
>  (XEN) idlers: 00000000,00006d29
>  (XEN) active vcpus:
>  (XEN)     1: [1.7] pri=-1 flags=0 cpu=15 credit=-116 [w=256,cap=0] (84+300) 
> {a/i=22/21 m=18+5 (k=0)}
>  (XEN)     2: [1.3] pri=0 flags=0 cpu=1 credit=-113 [w=256,cap=0] (87+300) 
> {a/i=37/36 m=11+544 (k=0)}
>  (XEN)     3: [0.15] pri=-1 flags=0 cpu=4 credit=95 [w=256,cap=0] (210+300) 
> {a/i=127/126 m=108+9 (k=0)}
>  (XEN)     4: [0.10] pri=-2 flags=0 cpu=12 credit=-287 [w=256,cap=0] 
> (-84+300) {a/i=163/162 m=36+568 (k=0)}
>  (XEN)     5: [0.7] pri=-2 flags=0 cpu=2 credit=-242 [w=256,cap=0] (-42+300) 
> {a/i=129/128 m=16+50 (k=0)}
>  (XEN) CPU[08]  sort=5791, sibling=00000000,00000300, core=00000000,0000ff00
>  (XEN)   run: [32767.8] pri=-64 flags=0 cpu=8
>  (XEN) Cpupool 0:
>  (XEN) Cpus: 0-5,10-15
>  (XEN) Scheduler: SMP Credit Scheduler (credit)
>  (XEN) info:
>  (XEN)   ncpus              = 13
>  (XEN)   master             = 0
>  (XEN)   credit             = 3900
>  (XEN)   credit balance     = 45
>  (XEN)   weight             = 1280
>  (XEN)   runq_sort          = 11820
>  (XEN)   default-weight     = 256
>  (XEN)   tslice             = 30ms
>  (XEN)   ratelimit          = 1000us
>  (XEN)   credits per msec   = 10
>  (XEN)   ticks per tslice   = 3
>  (XEN)   migration delay    = 0us
>  (XEN) idlers: 00000000,00006d29
>  (XEN) active vcpus:
>  (XEN)     1: [1.7] pri=-1 flags=0 cpu=15 credit=-116 [w=256,cap=0] (84+300) 
> {a/i=22/21 m=18+5 (k=0)}
>  (XEN)     2: [1.3] pri=0 flags=0 cpu=1 credit=-113 [w=256,cap=0] (87+300) 
> {a/i=37/36 m=11+544 (k=0)}
>  (XEN)     3: [0.15] pri=-1 flags=0 cpu=4 credit=95 [w=256,cap=0] (210+300) 
> {a/i=127/126 m=108+9 (k=0)}
>  (XEN)     4: [0.10] pri=-2 flags=0 cpu=12 credit=-287 [w=256,cap=0] 
> (-84+300) {a/i=163/162 m=36+568 (k=0)}
>  (XEN)     5: [0.7] pri=-2 flags=0 cpu=2 credit=-242 [w=256,cap=0] (-42+300) 
> {a/i=129/128 m=16+50 (k=0)}
>  (XEN) CPU[00]  sort=11801, sibling=00000000,00000003, core=00000000,000000ff
>  (XEN)   run: [32767.0] pri=-64 flags=0 cpu=0
>  ... ... ...
>  (XEN) CPU[15]  sort=11820, sibling=00000000,0000c000, core=00000000,0000ff00
>  (XEN)   run: [1.7] pri=-1 flags=0 cpu=15 credit=-116 [w=256,cap=0] (84+300) 
> {a/i=22/21 m=18+5 (k=0)}
>  (XEN)     1: [32767.15] pri=-64 flags=0 cpu=15
>  (XEN) Cpupool 1:
>  (XEN) Cpus: 6-7,9
>  (XEN) Scheduler: SMP RTDS Scheduler (rtds)
>  (XEN) CPU[06]
>  (XEN) CPU[07]
>  (XEN) CPU[09]
>
> With this change, we get rid of the redundancy, and retain
> only the information about the free pCPUs.
>
> (While there, turn a loop index variable from `int' to
> `unsigned int' in schedule_dump().)
>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
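
FWIW, the reason the two blocks were identical is visible in the
schedule.c hunk below: with c == NULL, the pre-patch code did

    sched = (c == NULL) ? &ops : c->sched;
    cpus = cpupool_scheduler_cpumask(c);

and &ops, the default scheduler, is (if I'm reading cpupool_create()
right) also what Cpupool0 uses, so the "Scheduler:" / "info:" block
printed under "Idle cpupool:" was bound to match the one printed for
Cpupool 0 right after. The only genuinely new information was the
per-CPU state of the free pCPUs, which is exactly what this patch
keeps.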

> ---
> Cc: Juergen Gross <jgross@xxxxxxxx>
> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> ---
>  xen/common/cpupool.c  |    6 +++---
>  xen/common/schedule.c |   18 +++++++++++++-----
>  2 files changed, 16 insertions(+), 8 deletions(-)
>
> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
> index 563864d..5471f93 100644
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -728,10 +728,10 @@ void dump_runq(unsigned char key)
>
>      print_cpumap("Online Cpus", &cpu_online_map);
>      if ( !cpumask_empty(&cpupool_free_cpus) )
> +    {
>          print_cpumap("Free Cpus", &cpupool_free_cpus);
> -
> -    printk("Idle cpupool:\n");
> -    schedule_dump(NULL);
> +        schedule_dump(NULL);
> +    }
>
>      for_each_cpupool(c)
>      {
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index ecf1545..4ffcd98 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -1473,16 +1473,24 @@ void scheduler_free(struct scheduler *sched)
>
>  void schedule_dump(struct cpupool *c)
>  {
> -    int               i;
> +    unsigned int      i;
>      struct scheduler *sched;
>      cpumask_t        *cpus;
>
>      /* Locking, if necessary, must be handled within each scheduler */
>
> -    sched = (c == NULL) ? &ops : c->sched;
> -    cpus = cpupool_scheduler_cpumask(c);
> -    printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
> -    SCHED_OP(sched, dump_settings);
> +    if ( c != NULL )
> +    {
> +        sched = c->sched;
> +        cpus = c->cpu_valid;
> +        printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
> +        SCHED_OP(sched, dump_settings);
> +    }
> +    else
> +    {
> +        sched = &ops;
> +        cpus = &cpupool_free_cpus;
> +    }
>
>      for_each_cpu (i, cpus)
>      {
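
Note that in the c == NULL branch the "Scheduler:" line and the
dump_settings output are intentionally skipped: for the free pCPUs they
would just repeat what gets printed for Cpupool 0 right after, as the
example in the changelog shows. For reference, this is roughly how
schedule_dump() ends up looking with the hunk applied (just a sketch
reconstructed from the diff; the body of the per-CPU loop is not part
of the patch and is elided):

    void schedule_dump(struct cpupool *c)
    {
        unsigned int      i;
        struct scheduler *sched;
        cpumask_t        *cpus;

        /* Locking, if necessary, must be handled within each scheduler */

        if ( c != NULL )
        {
            /* An actual cpupool: dump its scheduler's name and settings. */
            sched = c->sched;
            cpus = c->cpu_valid;
            printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
            SCHED_OP(sched, dump_settings);
        }
        else
        {
            /*
             * Free pCPUs: dump_runq() has already printed the "Free Cpus:"
             * line, and the settings of &ops would duplicate Cpupool0's,
             * so only the per-CPU state below is dumped.
             */
            sched = &ops;
            cpus = &cpupool_free_cpus;
        }

        for_each_cpu (i, cpus)
        {
            /* ... per-CPU dumping, unchanged by this patch ... */
        }
    }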
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

