Re: [Xen-devel] [PATCH 2/6] xen/sched: Use %*pb[l] instead of cpumask_scn{,list}printf()
>>> On 06.09.18 at 14:08, <andrew.cooper3@xxxxxxxxxx> wrote:
> @@ -2059,11 +2058,10 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
>      spc = CSCHED_PCPU(cpu);
>      runq = &spc->runq;
>
> -    cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
> -    printk("CPU[%02d] nr_run=%d, sort=%d, sibling=%s, ",
> -           cpu, spc->nr_runnable, spc->runq_sort_last, cpustr);
> -    cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
> -    printk("core=%s\n", cpustr);
> +    printk("CPU[%02d] nr_run=%d, sort=%d, sibling=%*pb, core=%*pb\n",
> +           cpu, spc->nr_runnable, spc->runq_sort_last,
> +           nr_cpu_ids, per_cpu(cpu_sibling_mask, cpu),
> +           nr_cpu_ids, per_cpu(cpu_core_mask, cpu));
Strictly speaking here and elsewhere you should wrap the CPU mask
accesses in cpumask_bits(). Then again I wonder whether a special
case for CPU masks wouldn't be warranted, making it unnecessary for
callers to pass in nr_cpu_ids explicitly.
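Roughly along these lines, that is (CPUMASK_PR() being a purely
illustrative name here for such a helper, not an existing one):

    /* %*pb consumes a bit count plus an unsigned long array, so
     * strictly speaking the cpumask wants unwrapping explicitly: */
    printk("sibling=%*pb\n",
           nr_cpu_ids, cpumask_bits(per_cpu(cpu_sibling_mask, cpu)));

    /* Alternatively a helper could hide both nr_cpu_ids and
     * cpumask_bits() from callers: */
    #define CPUMASK_PR(mask) nr_cpu_ids, cpumask_bits(mask)

    printk("sibling=%*pb\n", CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)));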
Jan