
[PATCH v2 14/15] x86/hyperlaunch: add max vcpu parsing of hyperlaunch device tree


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  • Date: Thu, 26 Dec 2024 11:57:39 -0500
  • Cc: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, jason.andryuk@xxxxxxx, christopher.w.clark@xxxxxxxxx, stefano.stabellini@xxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 26 Dec 2024 17:10:36 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Introduce the `cpus` property, named as such for dom0less compatibility, which
represents the maximum number of vcpus to allocate for a domain. In the device
tree it is encoded as a u32 value.
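
For illustration, a sketch of how a domain node might carry this property.
The node name, the "compatible" string, and the surrounding layout follow the
dom0less binding and are assumptions for the example only; the patch itself
defines nothing beyond the `cpus` property:

    chosen {
        domain@0 {
            compatible = "xen,domain";

            /* Maximum number of vcpus for this domain, a single u32 cell. */
            cpus = <4>;
        };
    };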

Signed-off-by: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
Reviewed-by: Jason Andryuk <jason.andryuk@xxxxxxx>
---
Changes since v1:
- switched from match_fdt to strncmp
- switched to nested else if
- dropped ternary for name selection
---
 xen/arch/x86/dom0_build.c             |  3 +++
 xen/arch/x86/domain-builder/fdt.c     | 11 +++++++++++
 xen/arch/x86/include/asm/bootdomain.h |  2 ++
 3 files changed, 16 insertions(+)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 1c3b7ff0e658..7ff052016bfd 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -617,6 +617,9 @@ int __init construct_dom0(struct boot_domain *bd)
     if ( !get_memsize(&dom0_max_size, LONG_MAX) && bd->max_pages )
         dom0_size.nr_pages = bd->max_pages;
 
+    if ( opt_dom0_max_vcpus_max == UINT_MAX && bd->max_vcpus )
+        opt_dom0_max_vcpus_max = bd->max_vcpus;
+
     if ( is_hvm_domain(d) )
         rc = dom0_construct_pvh(bd);
     else if ( is_pv_domain(d) )
diff --git a/xen/arch/x86/domain-builder/fdt.c b/xen/arch/x86/domain-builder/fdt.c
index aff1b8c3235d..70a793db199b 100644
--- a/xen/arch/x86/domain-builder/fdt.c
+++ b/xen/arch/x86/domain-builder/fdt.c
@@ -147,6 +147,17 @@ static int __init process_domain_node(
             bd->max_pages = PFN_DOWN(kb * SZ_1K);
             printk("  max memory: %ld kb\n", kb);
         }
+        else if ( strncmp(prop_name, "cpus", name_len) == 0 )
+        {
+            uint32_t val = UINT_MAX;
+            if ( fdt_prop_as_u32(prop, &val) != 0 )
+            {
+                printk("  failed processing max_vcpus for domain %s\n", name);
+                return -EINVAL;
+            }
+            bd->max_vcpus = val;
+            printk("  max vcpus: %u\n", bd->max_vcpus);
+        }
     }
 
     fdt_for_each_subnode(node, fdt, dom_node)
diff --git a/xen/arch/x86/include/asm/bootdomain.h b/xen/arch/x86/include/asm/bootdomain.h
index d7092bc32ad7..1a15273043f5 100644
--- a/xen/arch/x86/include/asm/bootdomain.h
+++ b/xen/arch/x86/include/asm/bootdomain.h
@@ -24,6 +24,8 @@ struct boot_domain {
     unsigned long min_pages;
     unsigned long max_pages;
 
+    unsigned int max_vcpus;
+
     struct boot_module *kernel;
     struct boot_module *ramdisk;
 
-- 
2.30.2




 

