Re: [Xen-devel] [PATCH RFC v2 1/2] linux/vnuma: vNUMA for PV domu guest
On Wed, Sep 18, 2013 at 3:33 AM, Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
> On mer, 2013-09-18 at 02:30 -0400, Elena Ufimtseva wrote:
>> On Tue, Sep 17, 2013 at 10:21 AM, Boris Ostrovsky
>>
>> >> +int __init xen_numa_init(void)
>> >> +{
> [snip]
>> >> +       setup_nr_node_ids();
>> >> +       /* Setting the cpu, apicid to node */
>> >> +       for_each_cpu(cpu, cpu_possible_mask) {
>> >> +               set_apicid_to_node(cpu, cpu_to_node[cpu]);
>> >> +               numa_set_node(cpu, cpu_to_node[cpu]);
>> >> +               __apicid_to_node[cpu] = cpu_to_node[cpu];
>> >
>> > Isn't this what set_apicid_to_node() above will do?
>>
>> Yes, exactly the same ) will fix.
>>
> I seem to recall that something strange was happening if we did not do
> it this way (i.e., calling the same stuff twice, or something like
> that), isn't that so, Elena?

I think that is in the past: the proper ordering made sense, and that second call was removed :)

> If it is, please explain that in a comment. However, it may well be
> possible that my recollection is wrong... In which case, sorry for the
> noise.
>
> Anyway, in general, and both for this series and for the Xen one, I
> think the code could use a little bit more commenting (along with
> breaking it up into paragraphs, as many have already pointed out).
>
> I know, I know, too many comments are also bad... Actually, finding the
> right balance between too few and too many is as important as it is
> difficult! :-P

I see that ) I will learn.

> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

--
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel