Re: [Xen-devel] [PATCH v3 08/11] xen: arm: rewrite start of day page table and cpu bring up
On Fri, 2013-09-27 at 14:30 +0100, Julien Grall wrote:
> > @@ -581,6 +589,12 @@ static void __init init_cpus_maps(void)
> >          }
> >      }
> >
> > +        if ( (rc = arch_cpu_init(hwid, cpu)) < 0 )
>
> As I understand your patch #6, arch_cpu_init take a logical cpu id (on
> ARM64 it's used as an index in an array).

Yes, I thought I wanted to pass the hwid here, but it looks like I've
got my wires crossed.

> So you should used j here.

You mean cpuidx I think, after having moved the call to after the
cpuidx++.

I wanted to handle the case where the function failed by having
possible_map not contain failed cpus. I think I'll handle this by
making tmp_map[i] == MPIDR_INVALID in that case and checking that in
the loop which sets bits in cpu_possible_map (rough sketch at the end
of this mail).

> Also, do we really need to call arch_cpu_init on the boot CPU?

Good question. I suppose not. On the other hand it gives the platform
the option of doing per-cpu init not related to bring up. I think I'll
keep this hook in place.

> >  };
> >
> >  /* Shared state for coordinating CPU bringup */
> > -unsigned long smp_up_cpu = 0;
> > +unsigned long smp_up_cpu = ~0UL;
>
> MPIDR_INVALID?

Yes, for all of those.

> > @@ -176,6 +147,7 @@ void __cpuinit start_secondary(unsigned long
> > boot_phys_offset,
> >      wmb();
> >
> >      /* Now report this CPU is up */
> > +    smp_up_cpu = ~0UL;
>
> smp_up_cpu = MPIDR_INVALID?
>
> Also, perhaps a dsb is needed here to ensure to update smp_up_cpu before
> cpumask_set_cpu is updated.

Does anything rely on the order of these two writes? There is a wmb
right after the writes, so they will become visible together. I think
it's probably OK to see one slightly before the other, in either order
(see the second sketch at the end of this mail).

> >      cpumask_set_cpu(cpuid, &cpu_online_map);
> >      wmb();
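Concretely, something along these lines for the failed-CPU handling (a
completely untested sketch; cpu_logical_map() here is just a placeholder
for however the logical id to MPIDR mapping ends up being stored):

    /* In the cpus node parsing loop, once cpuidx has been allocated
     * for this node and tmp_map[cpuidx] = hwid recorded: */
    if ( (rc = arch_cpu_init(cpuidx, cpu)) < 0 )
    {
        printk("cpu%d: arch_cpu_init failed: %d, not marking possible\n",
               cpuidx, rc);
        tmp_map[cpuidx] = MPIDR_INVALID;
    }

    /* ... later, when populating cpu_possible_map from tmp_map, skip
     * any cpu whose arch_cpu_init failed: */
    for ( i = 0; i < cpuidx; i++ )
    {
        if ( tmp_map[i] == MPIDR_INVALID )
            continue;
        cpumask_set_cpu(i, &cpu_possible_map);
        cpu_logical_map(i) = tmp_map[i];
    }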
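And for reference, the tail of start_secondary() with the MPIDR_INVALID
rename applied would look roughly like this (sketch):

    wmb();

    /* Now report this CPU is up */
    smp_up_cpu = MPIDR_INVALID;
    cpumask_set_cpu(cpuid, &cpu_online_map);
    /* The wmb() below orders both of the stores above before anything
     * which follows; it does not fix their order relative to each other
     * as seen from other CPUs, which I think is OK here. */
    wmb();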