Re: [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two
On 15.07.2022 19:03, Julien Grall wrote:
> From: Julien Grall <jgrall@xxxxxxxxxx>
>
> At the moment, init_heap_pages() will call free_heap_pages() page
> by page. To reduce the time to initialize the heap, we will want
> to provide multiple pages at the same time.
>
> init_heap_pages() is now split in two parts:
>     - init_heap_pages(): will break down the range in multiple set
>       of contiguous pages. For now, the criteria is the pages should
>       belong to the same NUMA node.
>     - _init_heap_pages(): will initialize a set of pages belonging to
>       the same NUMA node. In a follow-up patch, new requirements will
>       be added (e.g. pages should belong to the same zone). For now the
>       pages are still passed one by one to free_heap_pages().
>
> Note that the comment before init_heap_pages() is heavily outdated and
> does not reflect the current code. So update it.
>
> This patch is a merge/rework of patches from David Woodhouse and
> Hongyan Xia.
>
> Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Albeit maybe with ...

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1778,16 +1778,44 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>  }
>  
>  /*
> - * Hand the specified arbitrary page range to the specified heap zone
> - * checking the node_id of the previous page. If they differ and the
> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
> - * not freeing it to the buddy allocator.
> + * This function should only be called with valid pages from the same NUMA
> + * node.
>   */
> +static void _init_heap_pages(const struct page_info *pg,
> +                             unsigned long nr_pages,
> +                             bool need_scrub)
> +{
> +    unsigned long s, e;
> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
> +
> +    s = mfn_x(page_to_mfn(pg));
> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
> +    if ( unlikely(!avail[nid]) )
> +    {
> +        bool use_tail = IS_ALIGNED(s, 1UL << MAX_ORDER) &&
> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
> +        unsigned long n;
> +
> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
> +        BUG_ON(n > nr_pages);
> +        if ( use_tail )
> +            e -= n;
> +        else
> +            s += n;
> +    }
> +
> +    while ( s < e )
> +    {
> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
> +        s += 1UL;

... the more conventional s++ or ++s used here?

Jan
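
[For context, a minimal sketch of what the outer init_heap_pages() loop
described in the commit message could look like. This is an illustration
based on the description above, not code quoted from the patch:
phys_to_nid() and page_to_maddr() are existing Xen helpers, while the loop
structure and the need_scrub handling here are assumptions.]

static void init_heap_pages(struct page_info *pg, unsigned long nr_pages)
{
    unsigned long i;
    /* Assumption: scrub policy taken from the existing scrub_debug knob. */
    bool need_scrub = scrub_debug;

    for ( i = 0; i < nr_pages; )
    {
        /* NUMA node of the first page of the current run. */
        unsigned int nid = phys_to_nid(page_to_maddr(pg + i));
        unsigned long left = nr_pages - i;
        unsigned long contig_pages;

        /* Count how many of the remaining pages live on the same node. */
        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
            if ( phys_to_nid(page_to_maddr(pg + i + contig_pages)) != nid )
                break;

        /* Hand the whole same-node run to _init_heap_pages(). */
        _init_heap_pages(pg + i, contig_pages, need_scrub);

        i += contig_pages;
    }
}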