
Re: [PATCH v3] x86/domain: adjust limitation on shared_info allocation below 4G


  • To: Roger Pau Monne <roger.pau@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 5 Feb 2026 18:31:04 +0000
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Thu, 05 Feb 2026 18:31:41 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 05/02/2026 8:03 am, Roger Pau Monne wrote:
> diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
> index 01499582d2d6..e3273b49269d 100644
> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -247,6 +247,34 @@ int switch_compat(struct domain *d)
>      d->arch.has_32bit_shinfo = 1;
>      d->arch.pv.is_32bit = true;
>  
> +    /*
> +     * For 32bit PV guests the shared_info machine address must fit in a 32-bit
> +     * field within the guest's start_info structure.  We might need to free
> +     * the current page and allocate a new one that fulfills this requirement.
> +     */
> +    if ( virt_to_maddr(d->shared_info) >> 32 )
> +    {
> +        shared_info_t *prev = d->shared_info;
> +
> +        d->shared_info = alloc_xenheap_pages(0, MEMF_bits(32));
> +        if ( !d->shared_info )
> +        {
> +            d->shared_info = prev;
> +            rc = -ENOMEM;
> +            goto undo_and_fail;
> +        }
> +        put_page(virt_to_page(prev));
> +        clear_page(d->shared_info);
> +        share_xen_page_with_guest(virt_to_page(d->shared_info), d, SHARE_rw);
> +        /*
> +         * Ensure all pointers to the old shared_info page are replaced.  vCPUs
> +         * below XEN_LEGACY_MAX_VCPUS will stash a pointer to
> +         * shared_info->vcpu_info[id].
> +         */
> +        for_each_vcpu ( d, v )
> +            vcpu_info_reset(v);

Sorry, I missed something.  Reading this in full, there's an obvious
(risk of) UAF.

put_page(virt_to_page(prev)) needs to happen no earlier than here, or
we've freed a page that we still hold pointers to.

In practice, I expect that the global domctl lock protects us from
anything actually going wrong.

Nevertheless, it's only a matter of reordering the actions in this
block, so let's just fix it.
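
Something like the following would do (an untested sketch; the new_si
naming is mine, and it assumes vcpu_info_reset() is enough to repoint
every stale reference at the new page):

    if ( virt_to_maddr(d->shared_info) >> 32 )
    {
        shared_info_t *prev = d->shared_info;
        shared_info_t *new_si = alloc_xenheap_pages(0, MEMF_bits(32));

        if ( !new_si )
        {
            rc = -ENOMEM;
            goto undo_and_fail;
        }

        clear_page(new_si);
        share_xen_page_with_guest(virt_to_page(new_si), d, SHARE_rw);
        d->shared_info = new_si;

        /*
         * Repoint all stale pointers first.  vCPUs below
         * XEN_LEGACY_MAX_VCPUS stash a pointer to
         * shared_info->vcpu_info[id].
         */
        for_each_vcpu ( d, v )
            vcpu_info_reset(v);

        /* Only now is it safe to drop the old page. */
        put_page(virt_to_page(prev));
    }

As a side effect, the error path no longer needs to restore
d->shared_info, because it is only updated once the allocation has
succeeded.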

~Andrew

> +    }
> +
>      for_each_vcpu( d, v )
>      {
>          if ( (rc = setup_compat_arg_xlat(v)) ||
>
