Re: [Xen-devel] [PATCH 2/3] xen: sched_null: support for hard affinity
On Mon, 2017-03-20 at 16:46 -0700, Stefano Stabellini wrote:
> On Fri, 17 Mar 2017, Dario Faggioli wrote:
> >
> > --- a/xen/common/sched_null.c
> > +++ b/xen/common/sched_null.c
> > @@ -117,6 +117,14 @@ static inline struct null_dom *null_dom(const struct domain *d)
> >      return d->sched_priv;
> >  }
> >
> > +static inline bool check_nvc_affinity(struct null_vcpu *nvc, unsigned int cpu)
> > +{
> > +    cpumask_and(cpumask_scratch_cpu(cpu), nvc->vcpu->cpu_hard_affinity,
> > +                cpupool_domain_cpumask(nvc->vcpu->domain));
> > +
> > +    return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
> > +}
>
> If you make it take a struct vcpu* as first argument, it will be more
> generally usable
>
Yes, that's probably a good idea. Thanks.
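Something like this, I guess (just a sketch, and the name
check_vcpu_affinity() is only a placeholder):

  static inline bool check_vcpu_affinity(struct vcpu *v, unsigned int cpu)
  {
      /* scratch mask = hard affinity & cpus of the domain's pool */
      cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
                  cpupool_domain_cpumask(v->domain));

      return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
  }

with the existing callers just passing nvc->vcpu.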
> >          return cpu;
> >
> > -    /* If not, just go for a valid free pCPU, if any */
> > +    /* If not, just go for a free pCPU, within our affinity, if any */
> >      cpumask_and(cpumask_scratch_cpu(cpu), &prv->cpus_free, cpus);
> > +    cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
> > +                v->cpu_hard_affinity);
>
> You can do this with one cpumask_and (in addition to the one above):
>
>     cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
>                 &prv->cpus_free);
>
Mmm... right. Wow... quite an oversight on my side! :-P
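So the hunk would end up something like this (a sketch; it assumes
cpumask_scratch_cpu(cpu) already holds the intersection of the pool's
cpus and v->cpu_hard_affinity, computed earlier in pick_cpu()):

      /* If not, just go for a free pCPU, within our affinity, if any */
      cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
                  &prv->cpus_free);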
> > @@ -308,7 +320,10 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
> >       * only if the pCPU is free.
> >       */
> >      if ( unlikely(cpu == nr_cpu_ids) )
> > -        cpu = cpumask_any(cpus);
> > +    {
> > +        cpumask_and(cpumask_scratch_cpu(cpu), cpus, v->cpu_hard_affinity);
>
> Could the intersection be 0?
>
Not really, because vcpu_set_hard_affinity() fails when trying to set
the affinity to something that has a zero intersection with the set of
cpus of the pool the domain is in.
Other schedulers rely on this too.
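FWIW, the check I'm talking about is, roughly, this (a sketch from
memory, not a verbatim quote of schedule.c):

  /* in vcpu_set_hard_affinity(): */
  cpumask_and(&online_affinity, affinity, VCPU2ONLINE(v));
  if ( cpumask_empty(&online_affinity) )
      return -EINVAL;

i.e., an affinity that does not intersect the online cpus of the
domain's pool is rejected before the scheduler can ever see it.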
However, I need to re-check what happens if everything is fine when the
affinity is changed, but then all the cpus in the affinity itself are
removed from the pool... I will look into that because, as I said, this
is something general (and this area is really a can of worms... and I
keep finding weird corner cases! :-O)
> > @@ -408,8 +426,7 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
> >           */
> >          vcpu_assign(prv, v, cpu);
> >      }
> > -    else if ( cpumask_intersects(&prv->cpus_free,
> > -                                 cpupool_domain_cpumask(v->domain)) )
> > +    else if ( cpumask_intersects(&prv->cpus_free, cpumask_scratch_cpu(cpu)) )
> >      {
> >          spin_unlock(lock);
> >          goto retry;
> > @@ -462,7 +479,7 @@ static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
> >
> >      spin_lock(&prv->waitq_lock);
> >      wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
> > -    if ( wvc )
> > +    if ( wvc && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu)) )
>
> shouldn't this be
>
> check_nvc_affinity(wvc, cpu)
>
> ?
>
Mmm... well, considering that I never touch cpumask_scratch_cpu() in
this function, this is clearly a bug. Will fix. :-/
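I.e., the hunk should become something like this (sketch):

      wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
      if ( wvc && check_nvc_affinity(wvc, cpu) )

so that the helper computes the scratch mask itself, rather than
testing a mask this function never set up.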
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)