Re: [Xen-devel] [PATCH] introduce and use relaxed cpumask operations
On 01/19/2015 03:58 PM, Jan Beulich wrote:
> Using atomic (LOCKed on x86) bitops for certain of the operations on
> cpumask_t is overkill when the variables aren't concurrently accessible
> (e.g. local function variables, or due to explicit locking). Introduce
> alternatives using non-atomic bitops and use them where appropriate.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
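(For anyone skimming: the new double-underscore variants are the existing
helpers with the underlying atomic bitop swapped for its non-atomic
counterpart. A rough sketch of the idea -- not the exact code from the
patch:

    /* Existing helper: atomic RMW, LOCK-prefixed on x86. */
    static inline void cpumask_set_cpu(int cpu, cpumask_t *dstp)
    {
        set_bit(cpu, dstp->bits);
    }

    /* Relaxed variant: plain RMW, no LOCK prefix. */
    static inline void __cpumask_set_cpu(int cpu, cpumask_t *dstp)
    {
        __set_bit(cpu, dstp->bits);
    }

The latter is only safe when no other CPU can access dstp concurrently.)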
I'm wondering if it might be sensible to have more informative names for
these, ones that would make it easier for coders / reviewers to see what
aspect makes the cpumask suitable for the relaxed access; for instance,
"local_cpumask_set_cpu()" for local variables, and
"locked_cpumask_set_cpu()" for cpumasks which we know are locked. (Or
perhaps cpumask_set_cpu_local and cpumask_set_cpu_locked.)
At some point some clever person might even be able to write Coverity
rules checking that cpumask_set_cpu_local only accesses local
variables, &c.
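Concretely, something like this (sketch only -- these names are just my
suggestion above; nothing in the patch defines them):

    /* Hypothetical aliases that put the justification in the name. */
    #define cpumask_set_cpu_local(cpu, dstp)  __cpumask_set_cpu(cpu, dstp)
    #define cpumask_set_cpu_locked(cpu, dstp) __cpumask_set_cpu(cpu, dstp)

Both expand to the same relaxed operation; the distinction would be
purely for readers and static checkers.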
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -372,7 +372,7 @@ __runq_tickle(unsigned int cpu, struct c
> {
> if ( cur->pri != CSCHED_PRI_IDLE )
> SCHED_STAT_CRANK(tickle_idlers_none);
> - cpumask_set_cpu(cpu, &mask);
> + __cpumask_set_cpu(cpu, &mask);
> }
> else if ( !idlers_empty )
> {
> @@ -422,7 +422,7 @@ __runq_tickle(unsigned int cpu, struct c
> SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> SCHED_STAT_CRANK(migrate_kicked_away);
> set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> - cpumask_set_cpu(cpu, &mask);
> + __cpumask_set_cpu(cpu, &mask);
> }
> else if ( !new_idlers_empty )
> {
> @@ -432,7 +432,7 @@ __runq_tickle(unsigned int cpu, struct c
> {
> this_cpu(last_tickle_cpu) =
> cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> - cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> + __cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> }
> else
> cpumask_or(&mask, &mask, &idle_mask);
> @@ -675,7 +675,7 @@ _csched_cpu_pick(const struct scheduler
> */
> cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> - cpumask_set_cpu(cpu, &idlers);
> + __cpumask_set_cpu(cpu, &idlers);
> cpumask_and(&cpus, &cpus, &idlers);
>
> /*
> @@ -692,7 +692,7 @@ _csched_cpu_pick(const struct scheduler
> */
> if ( !cpumask_test_cpu(cpu, &cpus) && !cpumask_empty(&cpus) )
> cpu = cpumask_cycle(cpu, &cpus);
> - cpumask_clear_cpu(cpu, &cpus);
> + __cpumask_clear_cpu(cpu, &cpus);
>
> while ( !cpumask_empty(&cpus) )
> {
> @@ -1536,7 +1536,7 @@ csched_load_balance(struct csched_privat
> /* Find out what the !idle are in this node */
> cpumask_andnot(&workers, online, prv->idlers);
> cpumask_and(&workers, &workers, &node_to_cpumask(peer_node));
> - cpumask_clear_cpu(cpu, &workers);
> + __cpumask_clear_cpu(cpu, &workers);
>
> peer_cpu = cpumask_first(&workers);
> if ( peer_cpu >= nr_cpu_ids )
All these look good, but...
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -663,7 +663,7 @@ burn_budget(const struct scheduler *ops,
> * lock is grabbed before calling this function
> */
> static struct rt_vcpu *
> -__runq_pick(const struct scheduler *ops, cpumask_t *mask)
> +__runq_pick(const struct scheduler *ops, const cpumask_t *mask)
> {
> struct list_head *runq = rt_runq(ops);
> struct list_head *iter;
> @@ -780,10 +780,7 @@ rt_schedule(const struct scheduler *ops,
> }
> else
> {
> - cpumask_t cur_cpu;
> - cpumask_clear(&cur_cpu);
> - cpumask_set_cpu(cpu, &cur_cpu);
> - snext = __runq_pick(ops, &cur_cpu);
> + snext = __runq_pick(ops, cpumask_of(cpu));
> if ( snext == NULL )
> snext = rt_vcpu(idle_vcpu[cpu]);
>
This bit really needs explicit mention in the changelog.
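For context, the reason this is more than a mechanical substitution:
cpumask_of(cpu) hands back a pointer to a pre-built, read-only mask
containing just 'cpu', so the on-stack cpumask_t and the clear/set pair
go away entirely -- which is also why __runq_pick()'s mask parameter
becomes const. A sketch of the semantics (not the implementation):

    const cpumask_t *mask = cpumask_of(cpu);  /* read-only; must not be written */
    ASSERT(cpumask_test_cpu(cpu, mask));      /* the bit for 'cpu' is set... */
    ASSERT(cpumask_weight(mask) == 1);        /* ...and no other bit is */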
-George
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel