Re: [PATCH 1/2] x86/mm: avoid phys_to_nid() calls for invalid addresses
Hi Jan,

On 2022/12/13 19:36, Jan Beulich wrote:
> With phys_to_nid() now actively checking that a valid node ID is on
> record, the two uses in paging_init() can actually trigger at least
> the 2nd of the assertions there. They're used to calculate allocation
> flags, but the calculated flags wouldn't be used when dealing with an
> invalid (unpopulated) address range. Defer the calculations such that
> they can be done with a validated MFN in hand. This also does away
> with the artificial calculations of an address to pass to
> phys_to_nid().
>
> Note that while the variable is provably written before use, at least
> some compiler versions can't actually verify that. Hence the variable
> also needs to gain a (dead) initializer.
>
> Fixes: e9c72d524fbd ("xen/x86: Use ASSERT instead of VIRTUAL_BUG_ON for phys_to_nid")
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> RFC: With a small enough NUMA hash shift it would still be possible to
> hit an SRAT hole, despite mfn_valid() passing. Hence, as was the
> original plan, it may still be necessary to relax the checking in
> phys_to_nid() (or its designated replacements). At which point the
> value of this change here would shrink to merely reducing the chance
> of unintentionally doing NUMA_NO_NODE allocations.

I think it's better to move the last sentence, or the whole RFC note,
into the commit log. Without the RFC content, when I look at this
commit again after a while, I will be confused about what problem it
solved: just looking at the changes, as you said in the RFC, it
doesn't completely solve the problem.

Cheers,
Wei Chen

> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -498,7 +498,7 @@ error:
>  void __init paging_init(void)
>  {
>      unsigned long i, mpt_size, va;
> -    unsigned int n, memflags;
> +    unsigned int n, memflags = 0;
>      l3_pgentry_t *l3_ro_mpt;
>      l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
>      struct page_info *l1_pg;
> @@ -547,8 +547,6 @@ void __init paging_init(void)
>      {
>          BUILD_BUG_ON(RO_MPT_VIRT_START & ((1UL << L3_PAGETABLE_SHIFT) - 1));
>          va = RO_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT);
> -        memflags = MEMF_node(phys_to_nid(i <<
> -                                         (L2_PAGETABLE_SHIFT - 3 + PAGE_SHIFT)));
>
>          if ( cpu_has_page1gb &&
>               !((unsigned long)pl2e & ~PAGE_MASK) &&
> @@ -559,10 +557,15 @@ void __init paging_init(void)
>              for ( holes = k = 0; k < 1 << PAGETABLE_ORDER; ++k)
>              {
>                  for ( n = 0; n < CNT; ++n)
> -                    if ( mfn_valid(_mfn(MFN(i + k) + n * PDX_GROUP_COUNT)) )
> +                {
> +                    mfn = _mfn(MFN(i + k) + n * PDX_GROUP_COUNT);
> +                    if ( mfn_valid(mfn) )
>                          break;
> +                }
>                  if ( n == CNT )
>                      ++holes;
> +                else if ( k == holes )
> +                    memflags = MEMF_node(phys_to_nid(mfn_to_maddr(mfn)));
>              }
>              if ( k == holes )
>              {
> @@ -593,8 +596,14 @@ void __init paging_init(void)
>          }
>
>          for ( n = 0; n < CNT; ++n)
> -            if ( mfn_valid(_mfn(MFN(i) + n * PDX_GROUP_COUNT)) )
> +        {
> +            mfn = _mfn(MFN(i) + n * PDX_GROUP_COUNT);
> +            if ( mfn_valid(mfn) )
> +            {
> +                memflags = MEMF_node(phys_to_nid(mfn_to_maddr(mfn)));
>                  break;
> +            }
> +        }
>          if ( n == CNT )
>              l1_pg = NULL;
>          else if ( (l1_pg = alloc_domheap_pages(NULL, PAGETABLE_ORDER,
> @@ -663,15 +672,19 @@ void __init paging_init(void)
>             sizeof(*compat_machine_to_phys_mapping));
>      for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, pl2e++ )
>      {
> -        memflags = MEMF_node(phys_to_nid(i <<
> -                                         (L2_PAGETABLE_SHIFT - 2 + PAGE_SHIFT)));
>          for ( n = 0; n < CNT; ++n)
> -            if ( mfn_valid(_mfn(MFN(i) + n * PDX_GROUP_COUNT)) )
> +        {
> +            mfn = _mfn(MFN(i) + n * PDX_GROUP_COUNT);
> +            if ( mfn_valid(mfn) )
> +            {
> +                memflags = MEMF_node(phys_to_nid(mfn_to_maddr(mfn)));
>                  break;
> +            }
> +        }
>          if ( n == CNT )
>              continue;
>          if ( (l1_pg = alloc_domheap_pages(NULL, PAGETABLE_ORDER,
> -                                          memflags)) == NULL )
> +                                          memflags)) == NULL )
>              goto nomem;
>          map_pages_to_xen(
>              RDWR_COMPAT_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT),
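
[Editor's note: for readers without the tree at hand, the assertions
the commit message refers to live in phys_to_nid(). Below is a
simplified sketch of that helper as it stands after e9c72d524fbd,
assuming the usual memnodemap-based implementation; the names match
mainline Xen, but treat the exact checks as approximate.]

    /*
     * Simplified sketch of phys_to_nid() after commit e9c72d524fbd;
     * the precise assertions in the tree this patch targets may
     * differ slightly.
     */
    static inline nodeid_t phys_to_nid(paddr_t addr)
    {
        nodeid_t nid;

        /* 1st assertion: the address must fall inside the node hash map. */
        ASSERT((paddr_to_pdx(addr) >> memnode_shift) < memnodemapsize);
        nid = memnodemap[paddr_to_pdx(addr) >> memnode_shift];
        /* 2nd assertion: a valid node ID must be on record for it. */
        ASSERT(nid < MAX_NUMNODES);

        return nid;
    }

Each memnodemap[] slot covers 1UL << memnode_shift PDXes, so an
unpopulated range can have NUMA_NO_NODE on record and trip the 2nd
ASSERT. Conversely, with a small memnode_shift a single slot can
straddle an SRAT hole even when mfn_valid() passes for the MFN
actually probed, which is the residual problem the RFC remark
describes.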
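[Editor's note: the "artificial calculations" being dropped derive a
machine address purely from the loop index; the removed shifts compute
the base address of the region each M2P L2 entry covers. The
self-contained check below illustrates that arithmetic, with the usual
x86-64 constants assumed (PAGE_SHIFT and L2_PAGETABLE_SHIFT are
redefined here only for illustration).]

    #include <assert.h>

    /* Stand-ins for the Xen constants; usual x86-64 values assumed. */
    #define PAGE_SHIFT         12  /* 4KiB pages */
    #define L2_PAGETABLE_SHIFT 21  /* one L2 entry maps 2MiB of table */

    int main(void)
    {
        /*
         * r/o M2P: 8-byte (1 << 3) entries, so the 2MiB of table mapped
         * by one L2 entry holds 2^18 entries covering 1GiB of machine
         * address space -- hence "i << 30" in the removed expression.
         */
        assert((1UL << (L2_PAGETABLE_SHIFT - 3 + PAGE_SHIFT)) == (1UL << 30));

        /*
         * compat M2P: 4-byte (1 << 2) entries, so one L2 entry covers
         * 2GiB of machine address space -- hence "i << 31".
         */
        assert((1UL << (L2_PAGETABLE_SHIFT - 2 + PAGE_SHIFT)) == (1UL << 31));

        return 0;
    }

Nothing ties such an index-derived address to populated memory, which
is why the reworked code instead feeds phys_to_nid() an address derived
from an MFN that mfn_valid() has just accepted.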