
Re: [Xen-devel] [PATCH v3 1/4] sched: credit2: respect per-vcpu hard affinity



On 03/31/2015 06:14 PM, Dario Faggioli wrote:
>>> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
>>> index 7581731..af716e4 100644
>>> --- a/xen/common/sched_credit2.c
>>> +++ b/xen/common/sched_credit2.c
> 
>>> @@ -2024,6 +2096,13 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
>>>          printk("%s: cpu %d not online yet, deferring initialization\n",
>>>                 __func__, cpu);
>>>  
>>> +    /*
>>> +     * For each new pcpu, allocate a cpumask_t for use throughout the
>>> +     * scheduler to avoid putting any cpumask_t structs on the stack.
>>> +     */
>>> +    if ( !zalloc_cpumask_var(&scratch_mask[cpu]) )
>>
>> Any reason not to use "scratch_mask + cpu" here rather than
>> "&scratch_mask[cpu]"?
>>
> With the renaming you suggested, that would be "_scratch_mask + cpu",
> wouldn't it? I mean, it has to be the actual variable, not the #define,
> since this is (IIRC) called from another CPU, and hence the macro, which
> does smp_processor_id(), would give us the wrong element of the per-cpu
> array.
> 
> That being said, I personally find the array syntax easier to read and
> more consistent, especially if we add this:

Yes, _scratch_mask.  I think I probably wrote this before suggesting the
rename. :-)

I actually find the other syntax easier to read in general; but it's not
too big a deal, and if we add the ASSERT, it certainly makes sense to
keep it an array for consistency.
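
Concretely, the sort of thing I have in mind is sketched below.  This is
not the actual patch: the names (_scratch_mask, the scratch_mask macro)
are just the ones we've been using in this thread, and the exact shape of
csched2_alloc_pdata is assumed for illustration.  The point is that the
macro is only valid on the CPU that smp_processor_id() names, while
alloc_pdata runs on a different CPU and so has to index the array
explicitly:

/*
 * One scratch cpumask per pCPU; the macro covers the common "current
 * CPU" case.  (Declarations assume the usual sched_credit2.c headers.)
 */
static cpumask_var_t *_scratch_mask;
#define scratch_mask _scratch_mask[smp_processor_id()]

static void *
csched2_alloc_pdata(const struct scheduler *ops, int cpu)
{
    /* ... existing per-pCPU setup ... */

    /*
     * 'cpu' is the pCPU being brought up, which in general is *not*
     * smp_processor_id() here, so use an explicit index, not the macro.
     * The array itself must already have been allocated (see below).
     */
    ASSERT(_scratch_mask != NULL);
    if ( !zalloc_cpumask_var(&_scratch_mask[cpu]) )
        return NULL;

    /* ... */
}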

> IOW, if we always free what we allocate, there is no need for the
> pointers to be NULL, and this is how I addressed the matter in the
> message. I agree it probably doesn't look super clear; this other email,
> describing a similar scenario, may contain a better explanation of my
> take on this:
> 
> <1426601529.32500.94.camel@xxxxxxxxxx>
> 
>> I think it's just dangerous to leave uninitialized pointers around.  The
>> invariant should be that if the array entry is invalid it's NULL, and if
>> it's non-null then it's valid.
>>
> I see. I guess this makes things more "defensive programming" oriented,
> which is a good thing.
> 
> I don't know how well this is enforced around the scheduler code (or in
> general), but I'm certainly ok making a step in that direction. This is
> no hot-path, so no big deal zeroing the memory... Not zeroing forces one
> to think harder about what is being allocated and freed, which is why I
> like it, but I'm, of course, more than ok with zalloc_*, so go for
> it. :-)

I'm not sure how it forces you to think harder.  Having a null pointer
dereference is bad enough that I already try hard to think carefully about
what's being allocated and freed.  Having a double free or
use-after-free bug is certainly worse than a null pointer dereference,
but at some point worse consequences don't actually lead to better behavior.

It's like those politicians who say they want to double the punishment
for some crime; say, assaulting hospital staff.  Well, assaulting
hospital staff is already a crime that can lead to jail time; if the
original punishment didn't deter the guy, doubling it isn't really going
to do much.  :-)
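
Anyway, to be concrete about the invariant I'm asking for: on the free
side it's only a couple of lines.  Again, just a sketch rather than the
actual patch, and it assumes the configuration in which cpumask_var_t
really is a pointer, so an entry can be set back to NULL:

static void
csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
{
    /* ... existing per-pCPU teardown ... */

    /* Free the scratch mask and restore the "invalid => NULL" invariant. */
    free_cpumask_var(_scratch_mask[cpu]);
    _scratch_mask[cpu] = NULL;
}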

>> Also -- I think this allocation wants to happen in global_init(), right?
>>  Otherwise if you make a second cpupool with the credit2 scheduler this
>> will be clobbered.  (I think nr_cpu_ids should be defined at that point.)
>>
> Good point, actually. This just made me realize I've made the same
> mistake somewhere else... Thanks!! :-P

That one slipped under my radar too.

...and it's also a good example of why "just think harder about it"
doesn't really work.  I should have made you add an ASSERT, when setting
that global variable, that it actually was NULL. :-D
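
In sketch form, something like this is what I mean -- assuming the array
gets allocated from the scheduler's global init path, that the global_init
hook returns an error code, and using xzalloc_array(); not the actual
code:

static int
csched2_global_init(void)
{
    /*
     * Must only ever be set once, even if a second credit2 cpupool is
     * created later; running this again would clobber (and leak) the
     * array.
     */
    ASSERT(_scratch_mask == NULL);

    _scratch_mask = xzalloc_array(cpumask_var_t, nr_cpu_ids);
    if ( _scratch_mask == NULL )
        return -ENOMEM;

    /* xzalloc_array() zeroes the memory, so every entry starts out NULL. */
    return 0;
}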

 -George
