Re: [Xen-devel] [PATCH v1] mm/page_alloc: fix MEMF_no_dma allocations for single NUMA
>>> On 07.01.19 at 12:27, <sergey.dyasli@xxxxxxxxxx> wrote:
> Currently dma_bitsize is zero by default on single NUMA node machines.
> This makes all alloc_domheap_pages() calls with MEMF_no_dma return NULL.
>
> There is only one user of MEMF_no_dma: dom0_memflags, which is used
> during memory allocation for Dom0. A failing allocation with the
> default dom0_memflags is especially severe in the PV Dom0 case: it
> makes alloc_chunk() fall back to a suboptimal 2MB allocation algorithm
> that searches for higher memory addresses.
>
> This can lead to an NMI watchdog timeout during PV Dom0 construction
> on some machines, which can be worked around by manually specifying
> "dma_bits" on Xen's command line.
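(That is, booting with something like "dma_bits=30" appended to the
hypervisor command line; the exact value is machine-dependent, and the
number here is purely illustrative.)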
>
> Fix the issue by initialising dma_bitsize even on single NUMA node
> machines.
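For context, the failing path is the zone selection in
alloc_domheap_pages() (xen/common/page_alloc.c), which - simplified,
and quoted from memory rather than verbatim - currently reads:

    /* First try to satisfy the request from above the DMA zone. */
    if ( dma_bitsize && ((dma_zone = bits_to_zone(dma_bitsize)) < zone_hi) )
        pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);

    /* Fall back to any zone, unless the caller forbade DMA memory. */
    if ( (pg == NULL) &&
         ((memflags & MEMF_no_dma) ||
          ((pg = alloc_heap_pages(MEMZONE_XEN + 1, zone_hi, order,
                                  memflags, d)) == NULL)) )
        return NULL;

With dma_bitsize still zero the first attempt is skipped altogether, pg
remains NULL, and MEMF_no_dma then suppresses the fallback attempt, so
the function returns NULL straight away.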
I've not yet looked at why exactly this was done for multi-node
systems only, but in any event this change renders the comment
next to the dma_bitsize definition somewhat stale.
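If memory serves, the comment in question reads along these lines
(approximate, not a verbatim quote):

    /*
     * Bit width of the DMA heap -- used to override NUMA-node-first
     * allocation strategy, which can otherwise exhaust low memory.
     */
    static unsigned int dma_bitsize;

i.e. it presents dma_bitsize purely as a way to override the
NUMA-node-first allocation strategy, which no longer matches a world
where it also gets set on single-node systems.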
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1863,7 +1863,7 @@ void __init end_boot_allocator(void)
> nr_bootmem_regions = 0;
> init_heap_pages(virt_to_page(bootmem_region_list), 1);
>
> - if ( !dma_bitsize && (num_online_nodes() > 1) )
> + if ( !dma_bitsize )
> dma_bitsize = arch_get_dma_bitsize();
Did you consider the alternative of leaving this code alone and
instead doing
    if ( !dma_bitsize )
        memflags &= ~MEMF_no_dma;
    else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
        pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
in alloc_domheap_pages(), which would also address the same
issue in the case of arch_get_dma_bitsize() returning zero?
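Not compile-tested, but the net effect in context would be: whenever
dma_bitsize is zero (whether because it was never set, or because
arch_get_dma_bitsize() returned zero), MEMF_no_dma gets cleared, so the
subsequent fallback

    pg = alloc_heap_pages(MEMZONE_XEN + 1, zone_hi, order, memflags, d);

is no longer suppressed and the allocation can succeed from whatever
zones are available.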
Jan