[RFC PATCH 07/19] xen/domain: Alloc enough pages for VCPU struct


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>
  • Date: Mon, 2 Feb 2026 16:14:39 +0000
  • Cc: "xakep.amatop@xxxxxxxxx" <xakep.amatop@xxxxxxxxx>, Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Mon, 02 Feb 2026 16:14:48 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

With the introduction of GICv4, the size of struct vcpu can again
exceed one page. Modify the struct vcpu allocation to request enough
pages.

Don't reintroduce the MAX_PAGES_PER_VCPU check.
As per commit b77d774d8274183c2252f5fbc9fa3b3b7022ba06:
> It turns out that beyond efficiency, maybe, there is no real technical
> reason this struct has to fit in one page

Since there is no technical reason to limit the size of struct vcpu to
one page, there also seems to be little reason to fiddle with one- or
two-page limits.

Signed-off-by: Mykyta Poturai <mykyta_poturai@xxxxxxxx>
---
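Note (not part of the commit message): below is a minimal standalone
sketch of the sizing math this patch relies on, assuming a 4 KiB
PAGE_SIZE and a made-up struct size; order_from_bytes() here merely
mirrors the behaviour of Xen's get_order_from_bytes().

    /* Standalone illustration, not Xen code. */
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* Smallest order such that (PAGE_SIZE << order) >= size. */
    static unsigned int order_from_bytes(unsigned long size)
    {
        unsigned int order = 0;

        while ( (PAGE_SIZE << order) < size )
            order++;

        return order;
    }

    int main(void)
    {
        unsigned long size = 5000; /* hypothetical sizeof(struct vcpu) */
        unsigned int order = order_from_bytes(size);

        /* order 1 -> 2 pages allocated; the struct spans 2 pages, so
           the clearing loop in the patch touches both of them. */
        printf("order %u: %lu pages allocated, %lu pages cleared\n",
               order, 1UL << order,
               (size + PAGE_SIZE - 1) / PAGE_SIZE);

        return 0;
    }
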
 xen/common/domain.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 376351b528..c791fb5033 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -342,18 +342,24 @@ static struct vcpu *alloc_vcpu_struct(const struct domain *d)
 # define arch_vcpu_struct_memflags(d) ((void)(d), 0)
 #endif
     struct vcpu *v;
+    unsigned int order = get_order_from_bytes(sizeof(*v));
 
-    BUILD_BUG_ON(sizeof(*v) > PAGE_SIZE);
-    v = alloc_xenheap_pages(0, arch_vcpu_struct_memflags(d));
+    v = alloc_xenheap_pages(order, arch_vcpu_struct_memflags(d));
     if ( v )
-        clear_page(v);
+    {
+        unsigned int i;
+
+        for ( i = 0; i < DIV_ROUND_UP(sizeof(*v), PAGE_SIZE); i++ )
+            clear_page((void *)v + i * PAGE_SIZE);
+    }
 
     return v;
 }
 
 static void free_vcpu_struct(struct vcpu *v)
 {
-    free_xenheap_page(v);
+    unsigned int order = get_order_from_bytes(sizeof(*v));
+    free_xenheap_pages(v, order);
 }
 
 static void vmtrace_free_buffer(struct vcpu *v)
-- 
2.51.2
