
Re: [Xen-devel] [PATCH 7/7] xen: sched_rt: print useful affinity info when dumping



On Mon, 2015-03-16 at 19:05 +0000, George Dunlap wrote:
> On 03/16/2015 05:05 PM, Dario Faggioli wrote:

> > @@ -218,7 +224,6 @@ __q_elem(struct list_head *elem)
> >  static void
> >  rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
> >  {
> > -    char cpustr[1024];
> >      cpumask_t *cpupool_mask;
> >  
> >      ASSERT(svc != NULL);
> > @@ -229,10 +234,22 @@ rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
> >          return;
> >      }
> >  
> > -    cpumask_scnprintf(cpustr, sizeof(cpustr), svc->vcpu->cpu_hard_affinity);
> > +    cpupool_mask = cpupool_scheduler_cpumask(svc->vcpu->domain->cpupool);
> > +    /*
> > +     * We can't just use 'cpumask_scratch' because the dumping can
> > +     * happen from a pCPU outside of this scheduler's cpupool, and
> > +     * hence it's not right to use the pCPU's scratch mask (which
> > +     * may even not exist!). On the other hand, it is safe to use
> > +     * svc->vcpu->processor's own scratch space, since we own the
> > +     * runqueue lock.
> 
> Since we *hold* the lock.
> 
Right, thanks.

> > +     */
> > +    cpumask_and(_cpumask_scratch[svc->vcpu->processor], cpupool_mask,
> > +                svc->vcpu->cpu_hard_affinity);
> > +    cpulist_scnprintf(keyhandler_scratch, sizeof(keyhandler_scratch),
> > +                      _cpumask_scratch[svc->vcpu->processor]);
> 
> Just a suggestion, would it be worth making a local variable to avoid
> typing this long thing twice?  
>
It probably would.

> Then you could also put the comment about
> using the svc->vcpu->processor's scratch space above the place where you
> set the local variable, while avoiding breaking up the logic of the
> cpumask operations.
> 
I like this, will do.
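
FWIW, a rough sketch of how I'd expect that bit to look after both changes
(just illustrating the suggestion, not the actual v2; the local variable
name 'cpus' is only a placeholder, the other names are the ones already
used in the patch above):

    /*
     * We can't just use 'cpumask_scratch' because the dumping can
     * happen from a pCPU outside of this scheduler's cpupool, and
     * hence its scratch mask may not even exist. On the other hand,
     * it is safe to use svc->vcpu->processor's own scratch space,
     * since we hold the runqueue lock.
     */
    cpumask_t *cpus = _cpumask_scratch[svc->vcpu->processor];

    cpupool_mask = cpupool_scheduler_cpumask(svc->vcpu->domain->cpupool);
    cpumask_and(cpus, cpupool_mask, svc->vcpu->cpu_hard_affinity);
    cpulist_scnprintf(keyhandler_scratch, sizeof(keyhandler_scratch), cpus);

That keeps the comment right above where the scratch space is picked up,
and the cpumask operations stay together below it.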

Regards,
Dario

