
[PATCH v3] x86/domain: adjust limitation on shared_info allocation below 4G


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Date: Thu, 5 Feb 2026 09:03:08 +0100
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Thu, 05 Feb 2026 08:03:31 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

The limitation that shared_info must be allocated below 4G, so that its
machine address fits in the start_info field, only applies to 32-bit PV
guests.  On 64-bit PV guests the start_info field is 64 bits wide, and HVM
guests don't use start_info at all.
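
For illustration only (a paraphrase of the guest-visible layout, not the
verbatim public header; the struct names below are made up for the sketch):
start_info stores the shared_info machine address in an unsigned long field,
so its width follows the guest's word size.

#include <stdint.h>

/* Roughly what a 32-bit PV guest sees: a 32-bit machine address. */
struct start_info_sketch_compat {
    /* ... */
    uint32_t shared_info;   /* MACHINE address; must lie below 4G */
    /* ... */
};

/* Roughly what a 64-bit PV guest sees: a 64-bit machine address. */
struct start_info_sketch_64 {
    /* ... */
    uint64_t shared_info;   /* MACHINE address; no 4G restriction */
    /* ... */
};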

Drop the restriction in arch_domain_create() and instead free and
re-allocate the page from memory below 4G in switch_compat(), when the guest
is set to run in 32-bit PV mode and the existing page doesn't already
satisfy the requirement.
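
As a quick illustration of the check used in the hunk below (made-up
addresses, standalone C rather than Xen code): any machine address at or
above 4G has bits set above bit 31, so shifting it right by 32 yields a
non-zero value, while addresses below 4G shift down to zero.

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint64_t below_4g = 0xfedc0000ULL;    /* < 4G:  (addr >> 32) == 0 */
    uint64_t above_4g = 0x123456000ULL;   /* >= 4G: (addr >> 32) == 1 */

    assert((below_4g >> 32) == 0);
    assert((above_4g >> 32) != 0);

    return 0;
}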

Fixes: 3cadc0469d5c ("x86_64: shared_info must be allocated below 4GB as it is advertised to 32-bit guests via a 32-bit machine address field in start_info.")
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
---
Changes since v2:
 - Move/reword some comments.
 - Add spaces in for_each_vcpu call.
---
 xen/arch/x86/domain.c    |  6 +-----
 xen/arch/x86/pv/domain.c | 28 ++++++++++++++++++++++++++++
 xen/common/domain.c      |  2 +-
 xen/include/xen/domain.h |  2 ++
 4 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index edb76366b596..8b2f33f1a06c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -881,11 +881,7 @@ int arch_domain_create(struct domain *d,
     if ( d->arch.ioport_caps == NULL )
         goto fail;
 
-    /*
-     * The shared_info machine address must fit in a 32-bit field within a
-     * 32-bit guest's start_info structure. Hence we specify MEMF_bits(32).
-     */
-    if ( (d->shared_info = alloc_xenheap_pages(0, MEMF_bits(32))) == NULL )
+    if ( (d->shared_info = alloc_xenheap_page()) == NULL )
         goto fail;
 
     clear_page(d->shared_info);
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 01499582d2d6..e3273b49269d 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -247,6 +247,34 @@ int switch_compat(struct domain *d)
     d->arch.has_32bit_shinfo = 1;
     d->arch.pv.is_32bit = true;
 
+    /*
+     * For 32bit PV guests the shared_info machine address must fit in a 32-bit
+     * field within the guest's start_info structure.  We might need to free
+     * the current page and allocate a new one that fulfills this requirement.
+     */
+    if ( virt_to_maddr(d->shared_info) >> 32 )
+    {
+        shared_info_t *prev = d->shared_info;
+
+        d->shared_info = alloc_xenheap_pages(0, MEMF_bits(32));
+        if ( !d->shared_info )
+        {
+            d->shared_info = prev;
+            rc = -ENOMEM;
+            goto undo_and_fail;
+        }
+        put_page(virt_to_page(prev));
+        clear_page(d->shared_info);
+        share_xen_page_with_guest(virt_to_page(d->shared_info), d, SHARE_rw);
+        /*
+         * Ensure all pointers to the old shared_info page are replaced.  vCPUs
+         * below XEN_LEGACY_MAX_VCPUS will stash a pointer to
+         * shared_info->vcpu_info[id].
+         */
+        for_each_vcpu ( d, v )
+            vcpu_info_reset(v);
+    }
+
     for_each_vcpu( d, v )
     {
         if ( (rc = setup_compat_arg_xlat(v)) ||
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 163f7fc96658..4f6977c67aa7 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -305,7 +305,7 @@ static void vcpu_check_shutdown(struct vcpu *v)
     spin_unlock(&d->shutdown_lock);
 }
 
-static void vcpu_info_reset(struct vcpu *v)
+void vcpu_info_reset(struct vcpu *v)
 {
     struct domain *d = v->domain;
 
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 273717c31b3f..52cdf012c491 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -83,6 +83,8 @@ void cf_check free_pirq_struct(void *ptr);
 int  arch_vcpu_create(struct vcpu *v);
 void arch_vcpu_destroy(struct vcpu *v);
 
+void vcpu_info_reset(struct vcpu *v);
+
 int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
                    struct guest_area *area,
                    void (*populate)(void *dst, struct vcpu *v));
-- 
2.51.0