
Re: [Xen-devel] [PATCH 3/7] xen: rework locking for dump of scheduler info (debug-key r)



On Tue, 2015-03-17 at 10:54 +0000, Jan Beulich wrote:
> >>> On 16.03.15 at 18:05, <dario.faggioli@xxxxxxxxxx> wrote:
> > such as it is taken care of by the various schedulers, rather
> > than happening in schedule.c. In fact, it is the schedulers
> > that know better which locks are necessary for the specific
> > dumping operations.
> > 
> > While there, fix a few style issues (indentation, trailing
> > whitespace, parentheses and blank line after var declarations)
> > 
> > Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> > Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> > Cc: Meng Xu <xumengpanda@xxxxxxxxx>
> > Cc: Jan Beulich <JBeulich@xxxxxxxx>
> > Cc: Keir Fraser <keir@xxxxxxx>
> > ---
> >  xen/common/sched_credit.c  |   42 ++++++++++++++++++++++++++++++++++++++++--
> >  xen/common/sched_credit2.c |   40 ++++++++++++++++++++++++++++++++--------
> >  xen/common/sched_rt.c      |    7 +++++--
> >  xen/common/schedule.c      |    5 ++---
> >  4 files changed, 79 insertions(+), 15 deletions(-)
> 
> Is it really correct that sched_sedf.c doesn't need any change here?
> 
Good point.

I should mention, though, that SEDF is broken in many ways, and I (and,
I suspect, George as well) really don't have the bandwidth to fix it. I
therefore think we should start a discussion on how to deprecate and
eventually get rid of it.

That being said, this case is simple enough that I can certainly have a
quick look.

> > --- a/xen/common/sched_credit.c
> > +++ b/xen/common/sched_credit.c
> > @@ -26,6 +26,23 @@
> >  
> >  
> >  /*
> > + * Locking:
> > + * - Scheduler-lock (a.k.a. runqueue lock):
> > + *  + is per-runqueue, and there is one runqueue per-cpu;
> > + *  + serializes all runqueue manipulation operations;
> > + * - Private data lock (a.k.a. private scheduler lock):
> > + *  + serializes accesses to the scheduler global state (weight,
> > + *    credit, balance_credit, etc);
> > + *  + serializes updates to the domains' scheduling parameters.
> > + *
> > + * Ordering is "private lock always comes first":
> > + *  + if we need both locks, we must acquire the private
> > + *    scheduler lock first;
> > + *  + if we already own a runqueue lock, we must never acquire
> > + *    the private scheduler lock.
> > + */
> 
> And this is Credit1 specific?
> 
It is similar (although with some differences) in Credit2, but a
similar comment is already there, stating it. RTDS, as of now, only
uses _one_ lock, so there is not much to say about ordering. :-D

> Regardless of that, even if that's just reflecting current state, isn't
> acquiring a private lock around a generic lock backwards?
> 
I agree that it sounds backwards when described in English. Looking at
the code, though, the ordering makes sense, IMO.
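
For illustration, here is a minimal sketch of the pattern, written
against Xen's spinlock API (the lock names and the helper below are
made up for the example; the real Credit1 code has its own structures
and accessors):

    #include <xen/spinlock.h>

    /* Hypothetical locks, mirroring the comment above: one private
     * (scheduler-wide) lock, plus one runqueue lock per CPU (the
     * per-CPU locks would be spin_lock_init()'d at boot). */
    static DEFINE_SPINLOCK(priv_lock);
    static spinlock_t runq_lock[NR_CPUS];

    /* Updating a domain's scheduling parameters touches both the
     * global state and a runqueue, so both locks are needed, and
     * the private lock always comes first. */
    static void update_dom_params(unsigned int cpu)
    {
        spin_lock(&priv_lock);       /* private scheduler lock first... */
        spin_lock(&runq_lock[cpu]);  /* ...then the runqueue lock */

        /* ... update weights, credits and the runqueue here ... */

        spin_unlock(&runq_lock[cpu]);
        spin_unlock(&priv_lock);
    }

Taking the two locks in the opposite order on some other path would set
up an ABBA deadlock, which is exactly why the rule also forbids
acquiring the private lock while already holding a runqueue lock.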

> Finally, as said in different contexts earlier, I think unconditionally
> acquiring locks in dumping routines isn't the best practice. At least
> in non-debug builds I think these should be try-locks only, skipping
> the dumping when a lock is busy.
> 
That makes sense. It also looks like something that, if not completely
orthogonal, can be done in a follow-up patch to this series. I.e., we
first move the locking logic where it belongs, and then we rework it to
use _trylock(). Does that sound reasonable?
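
Concretely, I would imagine something along these lines for the dump
paths (a sketch only, reusing the hypothetical runq_lock from the
sketch above; the real change would of course live in each scheduler's
dump routine):

    /* Dumping is best-effort diagnostics: if the lock is contended,
     * skip this runqueue rather than disturb the scheduler. */
    static void dump_runq(unsigned int cpu)
    {
        if ( !spin_trylock(&runq_lock[cpu]) )
        {
            printk("CPU%u: runqueue lock busy, skipping dump\n", cpu);
            return;
        }

        /* ... print the runqueue contents here ... */

        spin_unlock(&runq_lock[cpu]);
    }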

Regards,
Dario

