[PATCH][4.17] x86/NUMA: correct off-by-1 in node map population
As it turns out, populate_memnodemap() so far "relied" on
extract_lsb_from_nodes() setting memnodemapsize one too high in edge
cases. Correct the issue there as well, by changing "epdx" to be an
inclusive PDX and adjusting the respective relational operators. While
there, also limit the scope of both related variables.

Fixes: b1f4b45d02ca ("x86/NUMA: correct off-by-1 in node map size calculation")
Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
For symmetry it might be helpful to change "epdx" to be inclusive in
extract_lsb_from_nodes() as well. I actually had it that way first in
the earlier change, but then thought the smaller diff would be better.

--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -65,15 +65,15 @@ int srat_disabled(void)
 static int __init populate_memnodemap(const struct node *nodes,
                                       int numnodes, int shift, nodeid_t *nodeids)
 {
-    unsigned long spdx, epdx;
     int i, res = -1;
 
     memset(memnodemap, NUMA_NO_NODE, memnodemapsize * sizeof(*memnodemap));
     for ( i = 0; i < numnodes; i++ )
     {
-        spdx = paddr_to_pdx(nodes[i].start);
-        epdx = paddr_to_pdx(nodes[i].end - 1) + 1;
-        if ( spdx >= epdx )
+        unsigned long spdx = paddr_to_pdx(nodes[i].start);
+        unsigned long epdx = paddr_to_pdx(nodes[i].end - 1);
+
+        if ( spdx > epdx )
             continue;
         if ( (epdx >> shift) >= memnodemapsize )
             return 0;
@@ -88,7 +88,7 @@ static int __init populate_memnodemap(co
             memnodemap[spdx >> shift] = nodeids[i];
             spdx += (1UL << shift);
-        } while ( spdx < epdx );
+        } while ( spdx <= epdx );
         res = 1;
     }