Re: [Xen-devel] [PATCH 1/3] xen: sched: introduce the 'null' semi-static scheduler
On Mon, 2017-03-20 at 16:21 -0700, Stefano Stabellini wrote:
> On Fri, 17 Mar 2017, Dario Faggioli wrote:
> >
> > --- /dev/null
> > +++ b/xen/common/sched_null.c
> > +/*
> > + * Virtual CPU
> > + */
> > +struct null_vcpu {
> > + struct list_head waitq_elem;
> > + struct null_dom *ndom;
>
> This field is redundant, given that from struct vcpu you can get
> struct
> domain and from struct domain you can get struct null_dom. I would
> remove it.
>
It's kind of a paradigm, followed by pretty much all schedulers. So
it's good to stay uniform with that, and it's often quite useful (or it
may be in future)... I'll have a look, though, at whether it is really
important to have it in this simple scheduler too, and remove it if
it's not worth it.
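If I do drop it, getting at the per-domain data would be something
like the following (hypothetical helper, untested; it assumes the
usual null_dom() accessor over d->sched_priv, as the other schedulers'
equivalents do):

    /* Hypothetical: recover the per-domain data instead of caching it */
    static inline struct null_dom *null_vcpu_dom(const struct null_vcpu *nvc)
    {
        /* null_dom(d) is assumed to just return d->sched_priv */
        return null_dom(nvc->vcpu->domain);
    }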
> > + struct vcpu *vcpu;
> > + int pcpu; /* To what pCPU the vCPU is assigned (-1 if
> > none) */
>
> Isn't this always the same as struct vcpu->processor?
> If it's only different when the vcpu is waiting in the waitq, then
> you
> can just remove this field and replace the pcpu == -1 test with
> list_empty(waitq_elem).
>
I'll think about it. Right now, it's useful for ASSERTing and
consistency checking, which is something I want, at least in this
phase. It is also useful for reporting to which pCPU a vCPU is
currently assigned, for which you can't trust v->processor. That only
happens in `xl debug-key r' for now, but we may want a less tricky way
of exporting such information in future.
Anyway, as I said, I'll ponder whether I can get rid of it.
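For reference, the replacement you suggest for the pcpu == -1 test
would look something like this (hypothetical helper name, untested):

    /* Hypothetical: is the vCPU parked in the waitqueue? */
    static inline bool_t vcpu_on_waitq(const struct null_vcpu *nvc)
    {
        /* waitq_elem is always list_del_init()-ed on removal, so this is safe */
        return !list_empty(&nvc->waitq_elem);
    }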
> > +static void null_vcpu_remove(const struct scheduler *ops, struct
> > vcpu *v)
> > +{
> > + struct null_private *prv = null_priv(ops);
> > + struct null_vcpu *wvc, *nvc = null_vcpu(v);
> > + unsigned int cpu;
> > + spinlock_t *lock;
> > +
> > + ASSERT(!is_idle_vcpu(v));
> > +
> > + lock = vcpu_schedule_lock_irq(v);
> > +
> > + cpu = v->processor;
> > +
> > + /* If v is in waitqueue, just get it out of there and bail */
> > + if ( unlikely(nvc->pcpu == -1) )
> > + {
> > + spin_lock(&prv->waitq_lock);
> > +
> > + ASSERT(!list_empty(&null_vcpu(v)->waitq_elem));
> > + list_del_init(&nvc->waitq_elem);
> > +
> > + spin_unlock(&prv->waitq_lock);
> > +
> > + goto out;
> > + }
> > +
> > + /*
> > + * If v is assigned to a pCPU, let's see if there is someone
> > waiting.
> > + * If yes, we assign it to cpu, in spite of v. If no, we just
> > set
> > + * cpu free.
> > + */
> > +
> > + ASSERT(per_cpu(npc, cpu).vcpu == v);
> > + ASSERT(!cpumask_test_cpu(cpu, &prv->cpus_free));
> > +
> > + spin_lock(&prv->waitq_lock);
> > + wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu,
> > waitq_elem);
> > + if ( wvc )
> > + {
> > + vcpu_assign(prv, wvc->vcpu, cpu);
>
> The vcpu_assign in null_vcpu_insert is protected by the pcpu runq
> lock,
> while this call is protected by the waitq_lock lock. Is that safe?
>
vCPU assignment is always protected by the runqueue lock. Both in
null_vcpu_insert() and here, we take it with:
lock = vcpu_schedule_lock_irq(v);
at the beginning of the function (left in context, see above).
Taking the waitq_lock here serializes access to the waitqueue
(prv->waitq), i.e., the list_first_entry_or_null() call above and
the list_del_init() call below.
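Schematically, the nesting is:

    lock = vcpu_schedule_lock_irq(v);     /* outer: the runqueue lock */
    ...
    spin_lock(&prv->waitq_lock);          /* inner: protects prv->waitq only */
    wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
    ...
    spin_unlock(&prv->waitq_lock);
    ...
    vcpu_schedule_unlock_irq(lock, v);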
> > + list_del_init(&wvc->waitq_elem);
> > + cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
> > + }
> > + else
> > + {
> > + vcpu_deassign(prv, v, cpu);
> > + }
> > + spin_unlock(&prv->waitq_lock);
> > +
> > + out:
> > + vcpu_schedule_unlock_irq(lock, v);
> > +
> > + SCHED_STAT_CRANK(vcpu_remove);
> > +}
> > +
> > +static void null_vcpu_wake(const struct scheduler *ops, struct
> > vcpu *v)
> > +{
> > + ASSERT(!is_idle_vcpu(v));
> > +
> > + if ( unlikely(curr_on_cpu(v->processor) == v) )
> > + {
> > + SCHED_STAT_CRANK(vcpu_wake_running);
> > + return;
> > + }
> > +
> > + if ( null_vcpu(v)->pcpu == -1 )
> > + {
> > + /* Not exactly "on runq", but close enough for reusing the
> > counter */
> > + SCHED_STAT_CRANK(vcpu_wake_onrunq);
> > + return;
>
> coding style
>
Yeah... I need to double check the style of the whole file (or of the
whole patch series?). I mostly wrote this while travelling, with an
editor set up differently from what I use when at home. I thought I
had done that, but I clearly missed a couple of spots. Sorry.
> > +static void null_vcpu_migrate(const struct scheduler *ops, struct
> > vcpu *v,
> > + unsigned int new_cpu)
> > +{
> > + struct null_private *prv = null_priv(ops);
> > + struct null_vcpu *nvc = null_vcpu(v);
> > + unsigned int cpu = v->processor;
> > +
> > + ASSERT(!is_idle_vcpu(v));
> > +
> > + if ( v->processor == new_cpu )
> > + return;
> > +
> > + /*
> > + * v is either in the waitqueue, or assigned to a pCPU.
> > + *
> > + * In the former case, there is nothing to do.
> > + *
> > + * In the latter, the pCPU to which it was assigned would
> > become free,
> > + * and we, therefore, should check whether there is anyone in
> > the
> > + * waitqueue that can be assigned to it.
> > + */
> > + if ( likely(nvc->pcpu != -1) )
> > + {
> > + struct null_vcpu *wvc;
> > +
> > + spin_lock(&prv->waitq_lock);
> > + wvc = list_first_entry_or_null(&prv->waitq, struct
> > null_vcpu, waitq_elem);
> > + if ( wvc && cpumask_test_cpu(cpu,
> > cpupool_domain_cpumask(v->domain)) )
> > + {
> > + vcpu_assign(prv, wvc->vcpu, cpu);
> > + list_del_init(&wvc->waitq_elem);
> > + cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
> > + }
> > + else
> > + {
> > + vcpu_deassign(prv, v, cpu);
> > + }
> > + spin_unlock(&prv->waitq_lock);
>
> This looks very similar to null_vcpu_remove, maybe you want to
> refactor
> the code and call a single shared service function.
>
Yes, maybe I can.
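Something along these lines, perhaps (hypothetical name, untested
sketch; the cpumask check that null_vcpu_migrate does would need
folding in too):

    /*
     * Hypothetical shared helper: cpu is being vacated by v. Hand it to
     * the first vCPU in the waitqueue, if any, or mark it as free. The
     * caller must hold the runqueue lock.
     */
    static void vcpu_reassign_or_free(struct null_private *prv, struct vcpu *v,
                                      unsigned int cpu)
    {
        struct null_vcpu *wvc;

        spin_lock(&prv->waitq_lock);
        wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
        if ( wvc )
        {
            vcpu_assign(prv, wvc->vcpu, cpu);
            list_del_init(&wvc->waitq_elem);
            cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
        }
        else
            vcpu_deassign(prv, v, cpu);
        spin_unlock(&prv->waitq_lock);
    }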
> > + SCHED_STAT_CRANK(migrate_running);
> > + }
> > + else
> > + SCHED_STAT_CRANK(migrate_on_runq);
> > +
> > + SCHED_STAT_CRANK(migrated);
> > +
> > + /*
> > + * Let's now consider new_cpu, which is where v is being sent.
> > It can be
> > + * either free, or have a vCPU already assigned to it.
> > + *
> > + * In the former case, we should assign v to it, and try to
> > get it to run.
> > + *
> > + * In latter, all we can do is to park v in the waitqueue.
> > + */
> > + if ( per_cpu(npc, new_cpu).vcpu == NULL )
> > + {
> > + /* We don't know whether v was in the waitqueue. If yes,
> > remove it */
> > + spin_lock(&prv->waitq_lock);
> > + list_del_init(&nvc->waitq_elem);
> > + spin_unlock(&prv->waitq_lock);
> > +
> > + vcpu_assign(prv, v, new_cpu);
>
> This vcpu_assign call seems to be unprotected. Should it be within a
> spin_lock'ed area?
>
The lock is taken by the caller. Check the calls to SCHED_OP(...,
migrate) in xen/common/schedule.c.
That's by design, and it's like that for most functions (with the sole
exceptions of the debug dump and insert/remove, IIRC), for all
schedulers.
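Schematically, the caller-side convention is (simplified, not the
literal schedule.c code):

    /* Simplified sketch: the caller takes the lock around the callback */
    lock = vcpu_schedule_lock_irq(v);
    SCHED_OP(vcpu_scheduler(v), migrate, v, new_cpu);
    vcpu_schedule_unlock_irq(lock, v);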
> > diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> > index 223a120..b482037 100644
> > --- a/xen/common/schedule.c
> > +++ b/xen/common/schedule.c
> > @@ -1785,6 +1785,8 @@ int schedule_cpu_switch(unsigned int cpu,
> > struct cpupool *c)
> >
> > out:
> > per_cpu(cpupool, cpu) = c;
> > + /* Trigger a reschedule so the CPU can pick up some work ASAP.
> > */
> > + cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
> >
> > return 0;
> > }
>
> Why? This change is not explained in the commit message.
>
You're right. This may well go into its own patch, actually. I'll see
which way I like it better.
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)