
Re: [Xen-devel] [PATCH v2 3/6] xen: sched: clarify use cases of schedule_cpu_switch()



On Thu, 2015-10-15 at 10:25 +0200, Juergen Gross wrote:
> On 10/14/2015 05:54 PM, Dario Faggioli wrote:
> > schedule_cpu_switch() is meant to be only used for moving
> > pCPUs from a cpupool to no cpupool, and from there back
> > to a cpupool, *not* to move them directly from one cpupool
> > to another.
> > 
> > This is something that is reflected in the way it is
> > implemented, and should be kept in mind when looking at
> > it. However, that is not that clear, by just the look of
> > it.
> > 
> > Make it more evident by adding commentary and ASSERT()s.
> > 
> > Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> > ---
> > Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> > Cc: Juergen Gross <jgross@xxxxxxxx>
> > Cc: Jan Beulich <JBeulich@xxxxxxxx>
> > ---
> > Changes from v1:
> >   * new patch, was not there in v1.
> > ---
> >   xen/common/schedule.c |   28 ++++++++++++++++++++++++++++
> >   1 file changed, 28 insertions(+)
> > 
> > diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> > index 9aa209d..5ebfa33 100644
> > --- a/xen/common/schedule.c
> > +++ b/xen/common/schedule.c
> > @@ -1486,12 +1486,40 @@ void __init scheduler_init(void)
> >           BUG();
> >   }
> > 
> > +/*
> > + * Move a pCPU outside of the influence of the scheduler of its current
> > + * cpupool, or subject it to the scheduler of a new cpupool.
> > + *
> > + * For the pCPUs that are removed from their cpupool, their scheduler becomes
> > + * &ops (the default scheduler, selected at boot, which also services the
> > + * default cpupool). However, as these pCPUs are not really part of any pool,
> > + * there won't be any scheduling event on them, not even from the default
> > + * scheduler. Basically, they will just sit idle until they are explicitly
> > + * added back to a cpupool.
> > + */
> >   int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
> >   {
> >       struct vcpu *idle;
> >       void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
> >       struct scheduler *old_ops = per_cpu(scheduler, cpu);
> >       struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
> > +    struct cpupool *pool = per_cpu(cpupool, cpu);
> > +
> > +    /*
> > +     * pCPUs only move from a valid cpupool to free (i.e., out of any
> > +     * pool), or from free to a valid cpupool. In the former case (which
> > +     * happens when c is NULL), we want the CPU to have been marked as free
> > +     * already, as well as to not be valid for the source pool any longer,
> > +     * when we get to here. In the latter case (which happens when c is a
> > +     * valid cpupool), we want the CPU to still be marked as free, as well
> > +     * as to not yet be valid for the destination pool.
> > +     * This all is because we do not want any scheduling activity on the
> > +     * CPU while, in here, we switch things over.
> 
> While this is correct I think you should mention that in the "adding to
> a pool" case per_cpu(cpupool, cpu) already contains the new pool
> pointer. So c == pool then and both are not NULL.
> 
Eheh, I did have an ASSERT() for this too, but I eventually dropped it,
as it looked a bit too verbose. :-)
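
Something along these lines (just a sketch from memory, not necessarily the
exact hunk I dropped; pool is the local variable the patch introduces):

    /*
     * When attaching the pCPU to a pool, cpupool.c has already made
     * per_cpu(cpupool, cpu) point at the destination pool, so c and
     * pool must be the same (and hence both non-NULL) by this point.
     */
    ASSERT(c == NULL || c == pool);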

I can add it, but I like this suggestion of yours here below, so I
guess I'll try it first...

> Maybe it would be a good idea to move setting of per_cpu(cpupool, cpu)
> into schedule_cpu_switch(). Originally I didn't do that to avoid
> spreading too much cpupool related actions outside of cpupool.c. But
> with those ASSERT()s added, hiding that action will cause more
> confusion than having the modification of per_cpu(cpupool, cpu) here.
> 
Yep, as said, in principle I like the idea. I'll try it and see how the
code ends up looking.

> When doing the code movement the current behaviour regarding sequence
> of changes must be kept, of course. So when adding the cpu to a pool
> the cpupool information must be set _before_ taking the scheduler lock,
> while when removing this must happen after release of the lock.
> 
I'm not entirely sure what you mean when you refer to "taking the
scheduler lock", but yes, I'll keep the ordering.

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

