Re: [Xen-devel] [PATCH v3] x86: correct socket_cpumask allocation
On Fri, 2015-07-10 at 16:25 +0100, Jan Beulich wrote:
> >>> On 10.07.15 at 17:13, <JBeulich@xxxxxxxx> wrote:
> > cpu_down()
> >   stop_machine_run(take_cpu_down, ...)
> >     notifier_call_chain(&cpu_chain, CPU_DYING, ...)
> >     __cpu_disable()
> >       remove_siblinginfo()
> >   __cpu_die()
> >   notifier_call_chain(&cpu_chain, CPU_DEAD, ...)
> >     cpu_smpboot_free()
> >
> > I.e. a clear use-after-invalidate.
>
> And I can't see a reason why we shouldn't be able to defer invoking
> remove_siblinginfo() until cpu_smpboot_free(): Other than its
> counterpart (set_cpu_sibling_map()) it doesn't require to be run on
> the subject CPU (that function itself doesn't appear to depend on
> that either, but it depends on identify_cpu() having run). Which
> would at once allow reducing code: The clearing of
> cpu_{core,sibling}_mask then becomes redundant with the freeing
> of these masks.
>
> Of course there may be hidden dependencies, so maybe a safer
> approach would be to just move the zapping of the three IDs
> (and maybe the clearing of the CPU's cpu_sibling_setup_map bit)
> into cpu_smpboot_free().
>
FWIW, I've tried the patch below (call remove_siblinginfo() from
cpu_smpboot_free()), and it works for me. I've tested both shutdown
and ACPI S3 suspend.

I've got to go now, so I guess I'm leaving it to Chao whether to pick
it up, or go with the other approach Jan suggested (or something else).

Dario
---
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 0f03364..9fb027c 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -663,6 +663,30 @@ void cpu_exit_clear(unsigned int cpu)
     set_cpu_state(CPU_STATE_DEAD);
 }
 
+static void
+remove_siblinginfo(int cpu)
+{
+    int sibling;
+    struct cpuinfo_x86 *c = cpu_data;
+
+    cpumask_clear_cpu(cpu, socket_cpumask[cpu_to_socket(cpu)]);
+
+    for_each_cpu ( sibling, per_cpu(cpu_core_mask, cpu) )
+    {
+        cpumask_clear_cpu(cpu, per_cpu(cpu_core_mask, sibling));
+        /* Last thread sibling in this cpu core going down. */
+        if ( cpumask_weight(per_cpu(cpu_sibling_mask, cpu)) == 1 )
+            c[sibling].booted_cores--;
+    }
+
+    for_each_cpu(sibling, per_cpu(cpu_sibling_mask, cpu))
+        cpumask_clear_cpu(cpu, per_cpu(cpu_sibling_mask, sibling));
+    c[cpu].phys_proc_id = XEN_INVALID_SOCKET_ID;
+    c[cpu].cpu_core_id = XEN_INVALID_CORE_ID;
+    c[cpu].compute_unit_id = INVALID_CUID;
+    cpumask_clear_cpu(cpu, &cpu_sibling_setup_map);
+}
+
 static void cpu_smpboot_free(unsigned int cpu)
 {
     unsigned int order, socket = cpu_to_socket(cpu);
@@ -673,6 +697,8 @@ static void cpu_smpboot_free(unsigned int cpu)
         socket_cpumask[socket] = NULL;
     }
 
+    remove_siblinginfo(cpu);
+
     free_cpumask_var(per_cpu(cpu_sibling_mask, cpu));
     free_cpumask_var(per_cpu(cpu_core_mask, cpu));
 
@@ -878,32 +904,6 @@ void __init smp_prepare_boot_cpu(void)
     cpumask_set_cpu(smp_processor_id(), &cpu_present_map);
 }
 
-static void
-remove_siblinginfo(int cpu)
-{
-    int sibling;
-    struct cpuinfo_x86 *c = cpu_data;
-
-    cpumask_clear_cpu(cpu, socket_cpumask[cpu_to_socket(cpu)]);
-
-    for_each_cpu ( sibling, per_cpu(cpu_core_mask, cpu) )
-    {
-        cpumask_clear_cpu(cpu, per_cpu(cpu_core_mask, sibling));
-        /* Last thread sibling in this cpu core going down. */
-        if ( cpumask_weight(per_cpu(cpu_sibling_mask, cpu)) == 1 )
-            c[sibling].booted_cores--;
-    }
-
-    for_each_cpu(sibling, per_cpu(cpu_sibling_mask, cpu))
-        cpumask_clear_cpu(cpu, per_cpu(cpu_sibling_mask, sibling));
-    cpumask_clear(per_cpu(cpu_sibling_mask, cpu));
-    cpumask_clear(per_cpu(cpu_core_mask, cpu));
-    c[cpu].phys_proc_id = XEN_INVALID_SOCKET_ID;
-    c[cpu].cpu_core_id = XEN_INVALID_CORE_ID;
-    c[cpu].compute_unit_id = INVALID_CUID;
-    cpumask_clear_cpu(cpu, &cpu_sibling_setup_map);
-}
-
 void __cpu_disable(void)
 {
     int cpu = smp_processor_id();
@@ -919,8 +919,6 @@ void __cpu_disable(void)
 
     time_suspend();
 
-    remove_siblinginfo(cpu);
-
     /* It's now safe to remove this processor from the online map */
     cpumask_clear_cpu(cpu, &cpu_online_map);
     fixup_irqs();
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
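[Editorial note: the ordering hazard Jan describes can be hard to see from the
diff alone, so here is a minimal, self-contained C sketch of the same pattern.
It is not Xen code; every identifier in it (dying_teardown, dead_teardown,
fake_phys_proc_id, fake_socket_cpumask, fake_cpu_to_socket, INVALID_SOCKET_ID)
is invented for illustration, standing in roughly for remove_siblinginfo(),
cpu_smpboot_free(), c[cpu].phys_proc_id, socket_cpumask[], cpu_to_socket() and
XEN_INVALID_SOCKET_ID. The point it models: a step run at CPU_DYING time
invalidates a per-CPU socket ID that a step run later, at CPU_DEAD time, still
reads, which is the use-after-invalidate above and the reason the patch defers
remove_siblinginfo() into cpu_smpboot_free().]

/*
 * Editorial sketch, not Xen code: all identifiers below are invented for
 * illustration (see the note above).  It models the ordering hazard only.
 */
#include <stdio.h>

#define NR_SOCKETS        4
#define NR_CPUS           8
#define INVALID_SOCKET_ID 0xffffu   /* stands in for XEN_INVALID_SOCKET_ID */

static unsigned int  fake_phys_proc_id[NR_CPUS];      /* like c[cpu].phys_proc_id */
static unsigned long fake_socket_cpumask[NR_SOCKETS]; /* like socket_cpumask[]    */

static unsigned int fake_cpu_to_socket(unsigned int cpu)
{
    return fake_phys_proc_id[cpu];
}

/* CPU_DYING-time step (plays the role of remove_siblinginfo()). */
static void dying_teardown(unsigned int cpu)
{
    fake_socket_cpumask[fake_cpu_to_socket(cpu)] &= ~(1UL << cpu);
    fake_phys_proc_id[cpu] = INVALID_SOCKET_ID;        /* ID zapped here ...          */
}

/* CPU_DEAD-time step (plays the role of cpu_smpboot_free()). */
static void dead_teardown(unsigned int cpu)
{
    unsigned int socket = fake_cpu_to_socket(cpu);     /* ... but consumed again here */

    if ( socket >= NR_SOCKETS )
    {
        printf("CPU%u: socket ID 0x%x already invalidated - use-after-invalidate\n",
               cpu, socket);
        return;
    }
    /* Would otherwise tear down fake_socket_cpumask[socket] here. */
}

int main(void)
{
    unsigned int cpu = 3;

    fake_phys_proc_id[cpu] = 1;                 /* CPU 3 sits on socket 1 */
    fake_socket_cpumask[1] = 1UL << cpu;

    dying_teardown(cpu);                        /* runs first, from the CPU_DYING path */
    dead_teardown(cpu);                         /* runs later, from the CPU_DEAD path  */
    return 0;
}

Run as-is, the sketch takes the "already invalidated" branch, mirroring how
cpu_smpboot_free() would index socket_cpumask[] with an already-zapped socket
ID if the IDs were cleared earlier, in the CPU_DYING path; deferring the
invalidation into the CPU_DEAD-time teardown removes that window.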