
Re: [Xen-devel] [PATCH v3 1/4] sched: credit2: respect per-vcpu hard affinity



On Tue, 2015-03-31 at 15:37 +0100, George Dunlap wrote:
> On 03/26/2015 09:48 AM, Justin T. Weaver wrote:
> > by making sure that vcpus only run on the pcpu(s) they are allowed to
> > run on based on their hard affinity cpu masks.
> > 
> > Signed-off-by: Justin T. Weaver <jtweaver@xxxxxxxxxx>
> 
> Hey Justin!  Getting close.  A couple of comments:
> 
Hi from here too...

I'll also provide my comments on this series shortly; I just need to
finish something else I'm busy with first. :-/

For now, just a few replies to direct questions...

> > diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> > index 7581731..af716e4 100644
> > --- a/xen/common/sched_credit2.c
> > +++ b/xen/common/sched_credit2.c

> > @@ -2024,6 +2096,13 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
> >          printk("%s: cpu %d not online yet, deferring initializatgion\n",
> >                 __func__, cpu);
> >  
> > +    /*
> > +     * For each new pcpu, allocate a cpumask_t for use throughout the
> > +     * scheduler to avoid putting any cpumask_t structs on the stack.
> > +     */
> > +    if ( !zalloc_cpumask_var(&scratch_mask[cpu]) )
> 
> Any reason not to use "scratch_mask + cpu" here rather than
> "&scratch_mask[cpu]"?
> 
With the renaming you suggested, that would be "_scratch_mask + cpu",
wouldn't it? I mean, it has to be the actual variable, not the #define,
since this is (IIRC) called from another CPU, and hence the macro, which
does smp_processor_id(), would give us the wrong element of the per-cpu
array.

That being said, I personally find the array syntax easier to read and
more consistent, especially if we add this:

> It might not be a bad idea to add ASSERT(scratch_mask[cpu] == NULL)
> before this, just to be paranoid...
> 
But, no big deal, I'm fine with the '+'.
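
Just to put the two things together, what I have in mind is something
like this (only a sketch, using the names from the renaming under
discussion; the actual patch may of course end up looking different):

    /* the real per-CPU array of scratch masks */
    static cpumask_t **_scratch_mask = NULL;
    /* convenience macro: the current CPU's scratch mask */
    #define scratch_mask _scratch_mask[smp_processor_id()]

    /*
     * csched2_alloc_pdata() can be called on a CPU other than 'cpu',
     * so it has to index the array explicitly; the macro would pick
     * smp_processor_id()'s element, i.e., the wrong one.
     */
    ASSERT(_scratch_mask[cpu] == NULL);
    if ( !zalloc_cpumask_var(&_scratch_mask[cpu]) )
        return NULL;  /* or whatever error handling the patch settles on */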

> > @@ -2159,6 +2240,10 @@ csched2_init(struct scheduler *ops)
> >  
> >      prv->load_window_shift = opt_load_window_shift;
> >  
> > +    scratch_mask = xmalloc_array(cpumask_t *, nr_cpu_ids);
> 
> I realize Dario recommended using xmalloc_array() instead of
> xzalloc_array(), but I don't understand why he thinks that's OK.  
>
Well, I didn't go as far as recommending it, but yes, I'd do it that
way, and I think it is both safe and fine.

> His
> mail says "(see below about why I actually don't
> think we need)", but I don't actually see that addressed in that e-mail.
> 
Right. In that email (message-id:
<CA+o8iRVZzU++oPxeugNaSEZ8u-pyh7wk7cvaw7OWYkfB-pxNUw@xxxxxxxxxxxxxx> )
I was focusing on why the call to free() in a loop was not necessary,
and on why we should instead free only what has actually been
allocated, rather than always freeing everything and relying on the
fact that, at "worst", the pointer will be NULL anyway.

IOW, if we always free exactly what we allocate, there is no need for
the pointers to be NULL, and that is how I addressed the matter in the
message. I agree it probably doesn't look super clear; this other
email, describing a similar scenario, may contain a better explanation
of my take on this:

<1426601529.32500.94.camel@xxxxxxxxxx>

> I think it's just dangerous to leave uninitialized pointers around.  The
> invariant should be that if the array entry is invalid it's NULL, and if
> it's non-null then it's valid.
> 
I see. I guess this makes things more "defensive programming" oriented,
which is a good thing.

I don't know how well this is enforced around the scheduler code (or in
general), but I'm certainly OK with making a step in that direction.
This is not a hot path, so zeroing the memory is no big deal... Not
zeroing forces one to think harder about what is being allocated and
freed, which is why I like it, but I'm, of course, more than OK with
zalloc_*, so go for it. :-)
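
Just to spell out the two styles (purely illustrative, with the names
from the patch; the teardown side is my assumption of how it would
look, not something taken from the series):

    /*
     * Style A: zeroed array. Freeing every slot on teardown is always
     * safe, since freeing a NULL entry is a no-op.
     */
    scratch_mask = xzalloc_array(cpumask_t *, nr_cpu_ids);
    /* ... and later, e.g., in csched2_deinit(): */
    for ( i = 0; i < nr_cpu_ids; i++ )
        free_cpumask_var(scratch_mask[i]);

    /*
     * Style B: non-zeroed array. Only the entries actually allocated
     * in alloc_pdata() may ever be freed, e.g., from free_pdata().
     */
    scratch_mask = xmalloc_array(cpumask_t *, nr_cpu_ids);
    /* ... */
    free_cpumask_var(scratch_mask[cpu]);
    scratch_mask[cpu] = NULL;  /* keeps the NULL == invalid invariant */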

> Also -- I think this allocation wants to happen in global_init(), right?
>  Otherwise if you make a second cpupool with the credit2 scheduler this
> will be clobbered.  (I think nr_cpu_ids should be defined at that point.)
> 
Good point, actually. This just made me realize I've made the same
mistake somewhere else... Thanks!! :-P
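
FWIW, whether the allocation moves to a global init hook, as you
suggest, or stays in csched2_init(), the important bit is that the
array is only set up once. If it were to stay where it is, a guard
along these lines (just a sketch) would also avoid the clobbering:

    /*
     * csched2_init() runs once per credit2 cpupool: don't let a second
     * pool overwrite (and leak) the already allocated array.
     */
    if ( scratch_mask == NULL )
    {
        scratch_mask = xzalloc_array(cpumask_t *, nr_cpu_ids);
        if ( scratch_mask == NULL )
            return -ENOMEM;
    }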

Regards,
Dario



 

