
Re: [Xen-devel] [PATCH 3/9] xen: sched: make locking for {insert, remove}_vcpu consistent



On 29/09/15 17:55, Dario Faggioli wrote:
> The insert_vcpu() scheduler hook is called with an
> inconsistent locking strategy. In fact, it is sometimes
> invoked while holding the runqueue lock and sometimes
> when that is not the case.
>
> In other words, some call sites seem to imply that
> locking should be handled in the callers, in schedule.c
> --e.g., in schedule_cpu_switch(), which acquires the
> runqueue lock before calling the hook; others imply that
> the specific schedulers should be responsible for locking
> themselves --e.g., in sched_move_domain(), which does
> not acquire any lock before calling the hook.
>
> The right thing to do seems to be to always defer locking
> to the specific schedulers, as they are the ones that know
> what, how and when it is best to lock (as in: runqueue
> locks, vs. private scheduler locks, vs. both, etc.)
>
> This patch, therefore:
>  - removes any locking around insert_vcpu() from
>    generic code (schedule.c);
>  - adds the _proper_ locking in the hook implementations,
>    depending on the scheduler (for instance, credit2
>    does that already, credit1 and RTDS need to grab
>    the runqueue lock while manipulating runqueues).
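
[ A minimal sketch of the pattern being proposed -- the hypothetical
  sched_foo_vcpu_insert() stands in for any one scheduler's hook, and
  SCHED_OP() is the dispatch macro in schedule.c: ]

    /* schedule.c: generic code calls the hook with no lock held. */
    SCHED_OP(ops, insert_vcpu, v);

    /* sched_foo.c: the hook takes whatever lock(s) it needs itself. */
    static void
    sched_foo_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
    {
        /* Per-vCPU runqueue lock, as credit1 and RTDS need. */
        spinlock_t *lock = vcpu_schedule_lock_irq(vc);

        /* ... manipulate runqueues here ... */

        vcpu_schedule_unlock_irq(lock, vc);
    }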
>
> In case of credit1, remove_vcpu() handling needs some
> fixing too, i.e.:
>  - it manipulates runqueues, so the runqueue lock must
>    be acquired;
>  - *_lock_irq() is enough, there is no need to do
>    _irqsave()

Nothing in any of the generic scheduling code should need interrupts
disabled at all.

One of the problem areas identified by Jenny during the ticketlock
performance work was that the SCHEDULE_SOFTIRQ was a large consumer of
time with interrupts disabled.  (The other large one being the time
calibration rendezvous, but that is a wildly different can of worms to fix.)

Is _lock_irq() used here to cover some other code which expects
interrupts to be disabled, or could it be replaced with a plain spin_lock()?
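
For reference, a minimal sketch of the three locking idioms involved
(using the vcpu_schedule_lock*() wrappers from sched-if.h; whether the
plain variant is actually safe here is exactly the question above --
it assumes the lock is never taken from interrupt context):

    unsigned long flags;
    spinlock_t *lock;

    /* Caller's interrupt state unknown: save, disable, restore. */
    lock = vcpu_schedule_lock_irqsave(vc, &flags);
    vcpu_schedule_unlock_irqrestore(lock, flags, vc);

    /* Interrupts known to be enabled on entry: disable, re-enable. */
    lock = vcpu_schedule_lock_irq(vc);
    vcpu_schedule_unlock_irq(lock, vc);

    /* Lock never taken from interrupt context: leave interrupts alone. */
    lock = vcpu_schedule_lock(vc);
    vcpu_schedule_unlock(lock, vc);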

Also, a style nit...

>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> ---
> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> Cc: Meng Xu <mengxu@xxxxxxxxxxxxx>
> ---
>  xen/common/sched_credit.c |   24 +++++++++++++++++++-----
>  xen/common/sched_rt.c     |   10 +++++++++-
>  xen/common/schedule.c     |    6 ------
>  3 files changed, 28 insertions(+), 12 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index a1945ac..557efaa 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -905,8 +905,19 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>  {
>      struct csched_vcpu *svc = vc->sched_priv;
>  
> -    if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
> -        __runq_insert(vc->processor, svc);
> +    /*
> +     * For the idle domain, this is called, during boot, before
> +     * alloc_pdata() has been called for the pCPU.
> +     */
> +    if ( !is_idle_vcpu(vc) )
> +    {
> +        spinlock_t *lock = vcpu_schedule_lock_irq(vc);
> +
> +        if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
> +            __runq_insert(vc->processor, svc);
> +
> +        vcpu_schedule_unlock_irq(lock, vc);
> +    }
>  }
>  
>  static void
> @@ -925,7 +936,7 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
>      struct csched_private *prv = CSCHED_PRIV(ops);
>      struct csched_vcpu * const svc = CSCHED_VCPU(vc);
>      struct csched_dom * const sdom = svc->sdom;
> -    unsigned long flags;
> +    spinlock_t *lock;
>  
>      SCHED_STAT_CRANK(vcpu_destroy);
>  
> @@ -935,15 +946,18 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
>          vcpu_unpause(svc->vcpu);
>      }
>  
> +    lock = vcpu_schedule_lock_irq(vc);
> +
>      if ( __vcpu_on_runq(svc) )
>          __runq_remove(svc);
>  
> -    spin_lock_irqsave(&(prv->lock), flags);
> +    spin_lock(&(prv->lock));

Please drop the superfluous brackets as you are already changing the line.
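
That is:

    spin_lock(&prv->lock);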

~Andrew
