
Re: [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two


  • To: Julien Grall <julien@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 18 Jul 2022 11:31:36 +0200
  • Cc: Julien Grall <jgrall@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 18 Jul 2022 09:31:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15.07.2022 19:03, Julien Grall wrote:
> From: Julien Grall <jgrall@xxxxxxxxxx>
> 
> At the moment, init_heap_pages() will call free_heap_pages() page
> by page. To reduce the time to initialize the heap, we will want
> to provide multiple pages at the same time.
> 
> init_heap_pages() is now split into two parts:
>     - init_heap_pages(): will break down the range into multiple sets
>       of contiguous pages. For now, the criterion is that the pages
>       should belong to the same NUMA node.
>     - _init_heap_pages(): will initialize a set of pages belonging to
>       the same NUMA node. In a follow-up patch, new requirements will
>       be added (e.g. pages should belong to the same zone). For now the
>       pages are still passed one by one to free_heap_pages().
> 
> Note that the comment before init_heap_pages() is heavily outdated and
> does not reflect the current code. So update it.
> 
> This patch is a merge/rework of patches from David Woodhouse and
> Hongyan Xia.
> 
> Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
Albeit maybe with ...

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1778,16 +1778,44 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>  }
>  
>  /*
> - * Hand the specified arbitrary page range to the specified heap zone
> - * checking the node_id of the previous page.  If they differ and the
> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
> - * not freeing it to the buddy allocator.
> + * This function should only be called with valid pages from the same NUMA
> + * node.
>   */
> +static void _init_heap_pages(const struct page_info *pg,
> +                             unsigned long nr_pages,
> +                             bool need_scrub)
> +{
> +    unsigned long s, e;
> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
> +
> +    s = mfn_x(page_to_mfn(pg));
> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
> +    if ( unlikely(!avail[nid]) )
> +    {
> +        bool use_tail = IS_ALIGNED(s, 1UL << MAX_ORDER) &&
> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
> +        unsigned long n;
> +
> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
> +        BUG_ON(n > nr_pages);
> +        if ( use_tail )
> +            e -= n;
> +        else
> +            s += n;
> +    }
> +
> +    while ( s < e )
> +    {
> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
> +        s += 1UL;

... the more conventional s++ or ++s used here?

Jan
