Re: [Xen-devel] [PATCH v2] xen: remove on-stack cpumask from stop_machine_run()
>>> On 31.05.19 at 13:53, <jgross@xxxxxxxx> wrote:
> The "allbutself" cpumask in stop_machine_run() is not needed. Instead
> of allocating it on the stack it can easily be avoided.
>
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with one further remark:
> --- a/xen/common/stop_machine.c
> +++ b/xen/common/stop_machine.c
> @@ -69,8 +69,8 @@ static void stopmachine_wait_state(void)
>
> int stop_machine_run(int (*fn)(void *), void *data, unsigned int cpu)
> {
> - cpumask_t allbutself;
> unsigned int i, nr_cpus;
> + unsigned int this = smp_processor_id();
> int ret;
>
> BUG_ON(!local_irq_is_enabled());
> @@ -79,9 +79,9 @@ int stop_machine_run(int (*fn)(void *), void *data, unsigned int cpu)
> if ( !get_cpu_maps() )
> return -EBUSY;
>
> - cpumask_andnot(&allbutself, &cpu_online_map,
> - cpumask_of(smp_processor_id()));
> - nr_cpus = cpumask_weight(&allbutself);
> + nr_cpus = num_online_cpus();
> + if ( cpu_online(this) )
> + nr_cpus--;
>
> /* Must not spin here as the holder will expect us to be descheduled. */
> if ( !spin_trylock(&stopmachine_lock) )
> @@ -100,8 +100,9 @@ int stop_machine_run(int (*fn)(void *), void *data, unsigned int cpu)
>
> smp_wmb();
>
> - for_each_cpu ( i, &allbutself )
> - tasklet_schedule_on_cpu(&per_cpu(stopmachine_tasklet, i), i);
> + for_each_online_cpu ( i )
> + if ( i != this )
> + tasklet_schedule_on_cpu(&per_cpu(stopmachine_tasklet, i), i);
>
> stopmachine_set_state(STOPMACHINE_PREPARE);
> stopmachine_wait_state();
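For readers following along outside the Xen tree, the pattern the patch
description relies on can be illustrated with a small, self-contained
userspace sketch. Everything in it is a stand-in, not Xen code: a 64-bit
bitmap plays the role of cpumask_t, and the online map and current CPU id
are hard-coded. It merely shows that the on-stack "allbutself" mask carries
no information beyond the online map plus the current CPU number:

    /*
     * Minimal userspace sketch (not Xen code): a 64-bit bitmap stands in
     * for cpumask_t, "this_cpu" for smp_processor_id().
     */
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for cpu_online_map and the current CPU id. */
    static uint64_t online_map = 0x0000000000003ffbULL; /* CPUs 0,1,3..13 online */
    static unsigned int this_cpu = 3;

    static unsigned int num_online_cpus(void)
    {
        return (unsigned int)__builtin_popcountll(online_map);
    }

    static int cpu_online(unsigned int cpu)
    {
        return (online_map >> cpu) & 1;
    }

    int main(void)
    {
        /* Old approach: materialise "all online CPUs but myself" as a mask. */
        uint64_t allbutself = online_map & ~(1ULL << this_cpu);
        unsigned int nr_old = (unsigned int)__builtin_popcountll(allbutself);

        /* New approach: derive the same count without the temporary mask. */
        unsigned int nr_new = num_online_cpus();
        if ( cpu_online(this_cpu) )
            nr_new--;

        printf("old count %u, new count %u\n", nr_old, nr_new);

        /* Old loop: for_each_cpu(i, &allbutself) ... */
        for ( unsigned int i = 0; i < 64; i++ )
            if ( (allbutself >> i) & 1 )
                printf("old: would kick CPU %u\n", i);

        /* New loop: for_each_online_cpu(i) with an explicit self check. */
        for ( unsigned int i = 0; i < 64; i++ )
            if ( cpu_online(i) && i != this_cpu )
                printf("new: would kick CPU %u\n", i);

        return 0;
    }

Both counts agree and both loops visit the same set of CPUs, which is
exactly why the temporary mask can be dropped.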
A few lines down from here there's another use of smp_processor_id().
If I end up committing this, I may take the liberty of also changing that
to "this", unless you object.
Jan