
Re: [PATCH v9 2/8] xen: do not free reserved memory into heap


  • To: Penny Zheng <Penny.Zheng@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 25 Jul 2022 17:29:31 +0200
  • Cc: wei.chen@xxxxxxx, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 25 Jul 2022 15:29:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 20.07.2022 07:46, Penny Zheng wrote:
> Pages used as guest RAM for a static domain shall be reserved to that
> domain only. So in case reserved pages are used for another purpose,
> users shall not free them back to the heap, even when the last ref
> gets dropped.
> 
> This commit introduces a new helper, free_domstatic_page, to free a
> static page at runtime; free_staticmem_pages will be called by it at
> runtime, so drop its __init flag.
> 
> Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>

Technically
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Nevertheless two remarks:

> +void free_domstatic_page(struct page_info *page)
> +{
> +    struct domain *d = page_get_owner(page);
> +    bool drop_dom_ref;
> +
> +    ASSERT(d);

I wonder whether

    if ( unlikely(!d) )
    {
        ASSERT_UNREACHABLE();
        return;
    }

wouldn't be more robust looking forward.

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -85,13 +85,12 @@ bool scrub_free_pages(void);
>  } while ( false )
>  #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>  
> -#ifdef CONFIG_STATIC_MEMORY
>  /* These functions are for static memory */
>  void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
>                            bool need_scrub);
> +void free_domstatic_page(struct page_info *page);
>  int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
>                              unsigned int memflags);
> -#endif
>  
>  /* Map machine page range in Xen virtual address space. */
>  int map_pages_to_xen(
> @@ -212,6 +211,10 @@ extern struct domain *dom_cow;
>  
>  #include <asm/mm.h>
>  
> +#ifndef PGC_static
> +#define PGC_static 0
> +#endif

This disconnect from all other PGC_* values isn't very nice. I wonder
how bad it would be seen if Arm kept its #define-to-0 private, with
the generic fallback remaining in page_alloc.c.

Jan
