
Re: [Xen-devel] [PATCH 09/16] xen: sched: close potential races when switching scheduler to CPUs



On Thu, 2016-03-24 at 12:14 +0000, George Dunlap wrote:
> On 18/03/16 19:05, Dario Faggioli wrote:
> > 
> > by using the sched_switch hook that we have introduced in
> > the various schedulers.
> > 
> > The key is to let the actual switch of scheduler and the
> > remapping of the scheduler lock for the CPU (if necessary)
> > happen together (in the same critical section) protected
> > (at least) by the old scheduler lock for the CPU.
> Thanks for trying to sort this out -- I've been looking at this since
> yesterday afternoon and it certainly makes my head hurt. :-)
> 
> It looks like you want to do the locking inside the sched_switch()
> callback, rather than outside of it, so that you can get the locking
> order right (global private before per-cpu scheduler lock).  
>
Yes, especially considering that the locking order varies with the
scheduler. In fact, in Credit1 it's per-cpu first, global second; in
Credit2 it's the other way round: global first, per-runq second. In
RTDS there is only one global lock, and in ARINC653 there are only the
per-cpu locks.
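
Schematically, I mean something like this (percpu_runq_lock, prv and
rqd are just placeholders here, only meant to show the nesting):

    /* Credit1: per-cpu/runq lock first, then the global private lock */
    spin_lock(percpu_runq_lock);
    spin_lock(&prv->lock);

    /* Credit2: global private lock first, then the per-runqueue lock */
    spin_lock(&prv->lock);
    spin_lock(&rqd->lock);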

> Otherwise
> you could just have schedule_cpu_switch grab and release the lock,
> and
> let the sched_switch() callback set the lock as needed (knowing that
> the
> correct lock is already held and will be released).
> 
Yeah, that's basically what is being done right now. Of course, right
now it's buggy; and even though it can probably be made to work the way
you suggest, I thought that bringing anything that has to do with the
locking order required by a specific scheduler _inside_ that
scheduler's code would be a good thing.

I actually still think that, but I recognize the ugliness of accessing
ppriv_old and vpriv_old outside of the scheduler locks (although, as I
said in the other email, it should be safe, since switching scheduler
is serialized by cpupool_lock anyway... perhaps I should put that in a
comment), not to mention how ugly the interface provided by
sched_switch() is (I'm talking about the last two parameters :-/).

> But the ordering between prv->lock and the scheduler lock only needs
> to
> be between the prv lock and scheduler lock *of a specific instance*
> of
> the credit2 scheduler -- i.e., between prv->lock and prv->rqd[].lock.
> 
True...

> And, critically, if we're calling sched_switch, then we already know
> that the current pcpu lock is *not* one of the prv->rqd[].lock's
> because
> we check that at the top of schedule_cpu_switch().
> 
... True again...

> So I think there should be no problem with:
> 1. Grabbing the pcpu schedule lock in schedule_cpu_switch()
> 2. Grabbing prv->lock in csched2_switch_sched()
> 3. Setting the per_cpu schedule lock as the very last thing in
> csched2_switch_sched()
> 4. Releasing the (old) pcpu schedule lock in schedule_cpu_switch().
> 
> What do you think?
> 
I think it should work. We'd be doing the scheduler lock manipulation
while holding the old, "wrong" per-cpu/runq lock (wrong in the sense
that it belongs to another scheduler), plus the correct global private
lock. It would look as if the lock ordering is violated, at least in
Credit2, but it actually isn't, precisely because the per-runq lock we
hold is the other scheduler's one, and hence has no particular ordering
relationship with our private lock.

Tricky, but everything is in here! :-/

I'm not sure whether it's better or worse than what I have. I'm trying
to code it up, to see how it looks (and whether it lets me get rid of
the ugliness in sched_switch()).
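
Just to be sure we mean the same thing, I picture it more or less like
this (very rough sketch; names, the hook signature and the runqueue
selection are simplified/made up, and there's no error handling):

    /* schedule_cpu_switch(), i.e., steps 1 and 4: */
    old_lock = pcpu_schedule_lock_irq(cpu);          /* 1) take old pcpu lock  */

    vpriv_old = idle->sched_priv;
    ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
    new_ops->sched_switch(new_ops, cpu, ppriv, vpriv);

    /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
    spin_unlock_irq(old_lock);                       /* 4) drop old pcpu lock  */

    /* csched2_switch_sched(), i.e., Credit2's sched_switch hook, steps 2, 3: */
    spin_lock(&prv->lock);                           /* 2) global private lock */

    idle_vcpu[cpu]->sched_priv = vdata;
    /* ... pick/init the runqueue (rqi) for this cpu, set up pdata, etc. ... */
    per_cpu(scheduler, cpu) = new_ops;
    per_cpu(schedule_data, cpu).sched_priv = pdata;

    per_cpu(schedule_data, cpu).schedule_lock =
        &prv->rqd[rqi].lock;                         /* 3) re-point, last thing */

    spin_unlock(&prv->lock);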

> That would allow us to read ppriv_old and vpriv_old with the
> schedule_lock held.
> 
Also good. :-)

> Unfortunately I can't off the top of my head think of a good
> assertion
> to put in at #2 to assert that the per-pcpu lock is *not* one of the
> runqueue locks in prv, because we don't yet know which runqueue this
> cpu
> will be assigned to.  But we could check when we actually do the lock
> assignment to make sure that it's not already equal.  That way we'll
> either deadlock or ASSERT (which is not as good as always ASSERTing,
> but
> is better than either deadlocking or working fine).
> 
Yep, I don't think we can do much better than that.
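
I.e., at the point where the lock is actually re-assigned, something
like this (again, just a sketch):

    /* The lock being replaced must not already be this scheduler's runq lock */
    ASSERT(per_cpu(schedule_data, cpu).schedule_lock != &prv->rqd[rqi].lock);
    per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;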

> As an aside -- it seems to me that as soon as we change the scheduler
> lock, there's a risk that something else may come along and try to
> grab
> it / access the data.  Does that mean we really ought to use memory
> barriers to make sure that the lock is written only after all changes
> to
> the scheduler data have been appropriately made?
> 
Yes, if it were only for this code, I think you're right: barriers
would be necessary. Once again, I think this is actually safe, because
it's serialized elsewhere but, thinking more about it, I might as well
add both barriers (and a comment).
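
To be clear, I mean something like this (sketch), in the sched_switch
hook:

    /* ... all the updates to this cpu's scheduler data happen above ... */

    /*
     * Make sure the updates above are visible before the new lock
     * pointer is, as whoever manages to take the (new) lock may look
     * at that data right away.
     */
    smp_wmb();
    per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;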

> > 
> > This also means that, in Credit2 and RTDS, we can get rid
> > of the code that was doing the scheduler lock remapping
> > in csched2_free_pdata() and rt_free_pdata(), and of their
> > triggering ASSERT-s.
> Right -- so to put it a different way, *all* schedulers must now set
> the
> locking scheme they wish to use, even if they want to use the default
> per-cpu locks.  
>
Exactly.

> I think that means we have to do that for arinc653 too,
> right?
> 
Mmm... right, I'll have a look at that.
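
For ARINC653 it should really just be a matter of (re)pointing the lock
at the default per-cpu one, i.e., a hook (name made up, and this is
only a sketch) along the lines of:

    /* a653sched_switch_sched(new_ops, cpu, pdata, vdata), hypothetical: */
    struct schedule_data *sd = &per_cpu(schedule_data, cpu);

    idle_vcpu[cpu]->sched_priv = vdata;
    per_cpu(scheduler, cpu) = new_ops;
    sd->sched_priv = pdata;

    /* ARINC653 is fine with the default per-cpu lock */
    sd->schedule_lock = &sd->_lock;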

Thanks for looking at this, and sorry for the late reply.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
