
Re: [PATCH 2/8] x86/paging: fold most HAP and shadow final teardown


  • To: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Wed, 21 Dec 2022 17:16:39 +0000
  • Accept-language: en-GB, en-US
  • Cc: Wei Liu <wl@xxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxx>, "Tim (Xen.org)" <tim@xxxxxxx>
  • Delivery-date: Wed, 21 Dec 2022 17:17:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 2/8] x86/paging: fold most HAP and shadow final teardown

On 21/12/2022 1:25 pm, Jan Beulich wrote:
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -842,10 +842,46 @@ int paging_teardown(struct domain *d)
>  /* Call once all of the references to the domain have gone away */
>  void paging_final_teardown(struct domain *d)
>  {
> -    if ( hap_enabled(d) )
> +    bool hap = hap_enabled(d);
> +
> +    PAGING_PRINTK("%pd final teardown starts.  Pages total = %u, free = %u, p2m = %u\n",

PAGING_PRINTK() already includes __func__, so just "%pd start: total %u,
free %u, p2m %u\n" which is shorter.
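
Something like this, keeping the arguments from the hunk above (untested
sketch of the suggestion, not a tested change):

    PAGING_PRINTK("%pd start: total %u, free %u, p2m %u\n",
                  d, d->arch.paging.total_pages,
                  d->arch.paging.free_pages, d->arch.paging.p2m_pages);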

> +                  d, d->arch.paging.total_pages,
> +                  d->arch.paging.free_pages, d->arch.paging.p2m_pages);
> +
> +    if ( hap )
>          hap_final_teardown(d);
> +
> +    /*
> +     * Double-check that the domain didn't have any paging memory.
> +     * It is possible for a domain that never got domain_kill()ed
> +     * to get here with its paging allocation intact.

I know you're mostly just moving this comment, but it's misleading.

This path is used for the domain_create() error path, and there will be
a nonzero allocation for HVM guests.

I think we do want to rework this eventually - we will simplify things
massively by splitting out the things that can only happen for a domain
which has run into relinquish_resources.

At a minimum, I'd suggest dropping the first sentence.  "double check"
implies it's an extraordinary case, which isn't warranted here IMO.
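
I.e. just keep something like:

    /*
     * It is possible for a domain that never got domain_kill()ed
     * to get here with its paging allocation intact.
     */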

> +     */
> +    if ( d->arch.paging.total_pages )
> +    {
> +        if ( hap )
> +            hap_teardown(d, NULL);
> +        else
> +            shadow_teardown(d, NULL);
> +    }
> +
> +    /* It is now safe to pull down the p2m map. */
> +    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
> +
> +    /* Free any paging memory that the p2m teardown released. */

I don't think this is true any more.  410 also made HAP/shadow free
their pages fully for a dying domain, so p2m_teardown() at this point
won't add to the free memory pool.

I think the subsequent *_set_allocation() can be dropped, and the
assertions left.
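
I.e. the tail of the function would reduce to something like this (rough
sketch; I'm assuming the assertions in question check the three counters
printed at the top):

    /* It is now safe to pull down the p2m map. */
    p2m_teardown(p2m_get_hostp2m(d), true, NULL);

    /* Assumed shape of the existing assertions. */
    ASSERT(!d->arch.paging.total_pages);
    ASSERT(!d->arch.paging.free_pages);
    ASSERT(!d->arch.paging.p2m_pages);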

~Andrew

 

