
Re: [PATCH 2/2] x86: generalise vcpu0 creation for a domain


  • To: Jan Beulich <jbeulich@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Jason Andryuk <jason.andryuk@xxxxxxx>
  • From: Alejandro Vallejo <alejandro.garciavallejo@xxxxxxx>
  • Date: Fri, 18 Jul 2025 11:46:35 +0200
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, "Michal Orzel" <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, "Daniel P . Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 18 Jul 2025 09:46:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri Jul 18, 2025 at 8:42 AM CEST, Jan Beulich wrote:
> On 18.07.2025 02:04, Stefano Stabellini wrote:
>> On Thu, 17 Jul 2025, Jason Andryuk wrote:
>>> On 2025-07-17 13:51, Alejandro Vallejo wrote:
>>>> Make alloc_dom0_vcpu0() viable as a general vcpu0 allocator. Keep
>>>> behaviour on any hwdom/ctldom identical to that dom0 used to have, and
>>>> make non-dom0 have auto node affinity.
>>>>
>>>> Rename the function to alloc_dom_vcpu0() to reflect this change in
>>>> scope, and move the prototype to asm/domain.h from xen/domain.h as it's
>>>> only used in x86.
>>>>
>>>> Signed-off-by: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
>>>> Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@xxxxxxx>
>>>> ---
>>>>   xen/arch/x86/dom0_build.c             | 12 ++++++++----
>>>>   xen/arch/x86/include/asm/dom0_build.h |  5 +++++
>>>>   xen/arch/x86/setup.c                  |  6 ++++--
>>>>   xen/include/xen/domain.h              |  1 -
>>>>   4 files changed, 17 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
>>>> index 0b467fd4a4..dfae7f888f 100644
>>>> --- a/xen/arch/x86/dom0_build.c
>>>> +++ b/xen/arch/x86/dom0_build.c
>>>> @@ -254,12 +254,16 @@ unsigned int __init dom0_max_vcpus(void)
>>>>       return max_vcpus;
>>>>   }
>>>>   -struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
>>>> +struct vcpu *__init alloc_dom_vcpu0(struct domain *d)
>>>>   {
>>>> -    dom0->node_affinity = dom0_nodes;
>>>> -    dom0->auto_node_affinity = !dom0_nr_pxms;
>>>> +    d->auto_node_affinity = true;
>>>> +    if ( is_hardware_domain(d) || is_control_domain(d) )
>>>
>>> Do we want dom0 options to apply to:
>>> hardware or control
>>> just hardware
>>> just dom0 (hardware && control && xenstore)
>>>
>>> ?
>>>
>>> I think "just dom0" may make the most sense.  My next preference is just
>>> hardware.  Control I think should be mostly a domU except for having
>>> is_privileged = true;

I could get behind just hardware, but I don't think the xenstore capability
carries much weight here.

>> 
>> Great question. Certainly dom0 options, such as dom0_mem, should not
>> apply to Control, and certainly they should apply to regular Dom0.

But what is a regular dom0, when it isn't booted regularly? What is it that
makes dom0 quack like a dom0 and not like a domU? It may very well be that
it's precisely that regularity which makes a dom0 a dom0, and that the notion
can't be sensibly described in the presence of split ctl/hw domains.

>> 
>> The interesting question is whether they should apply to the Hardware
>> Domain. Some of the Dom0 options make sense for the Hardware Domain and
>> there isn't an equivalent config option available via Dom0less bindings.

Well, bindings can be easily created. nmi= in particular can be trivially
directed anywhere via a boolean binding attached to any domain.

The bindings are only less granular than the dom0 command line because they
haven't been needed until now.
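
To make that concrete, here's a minimal sketch of the parsing side, assuming a
hypothetical boolean property (the "xen,report-nmi" name and the
nmi_report_domain variable are made up for illustration, not an existing
binding or existing code):

    /* Hypothetical: while constructing a domain from its DT node, route
     * NMI reports to it if the (made-up) boolean property is present. */
    if ( dt_property_read_bool(node, "xen,report-nmi") )
        nmi_report_domain = d;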

>> I am not thinking about the dom0_* options but things like nmi=dom0. For
>> simplicity and ease of use I would say they should apply to the Hardware
>> Domain.

That's a fun case. So with nmi=dom0 the report goes to the hwdom even if it's
not dom0 (i.e. a late hwdom).
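
Roughly, that reading makes the check follow the hardware domain rather than
domain ID 0. A sketch only (hardware_domain is the real global, the rest is
placeholder):

    /* Sketch: with nmi=dom0 read as "the hardware domain", the report goes
     * to hardware_domain, so a late hwdom still receives it even though its
     * domain_id isn't 0. */
    struct domain *target = hardware_domain;
    /* ... deliver the NMI report to 'target' ... */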

>
> Interesting indeed. So far we more or less aliased hwdom == dom0.
>
> Jan

Some arguments do want to be adjusted one way or the other. spec_ctrl.c makes
heavy assumptions about there not being any hwdom/ctldom separation, and about
both having domain_id == 0. There are other cases.
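
For context, the kind of pattern I mean (illustrative, not a quote from
spec_ctrl.c) is gating behaviour on domain ID 0 where the code really means
the hardware and/or control domain:

    /* Illustrative only: keying on domid 0 conflates dom0, the hardware
     * domain and the control domain as soon as they can differ. */
    static bool wants_hwdom_mitigations(const struct domain *d)
    {
        return d->domain_id == 0;  /* breaks once hwdom/ctldom are split;
                                    * is_hardware_domain()/is_control_domain()
                                    * would be the intended question. */
    }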

Yet another option is to single out the Hyperlaunch/dom0less case and never
apply dom0 command line overrides there (dom0_*=). It'd be a flag in
boot_domain. It might even allow us to compile them out altogether for
dom0less-only configurations (e.g. CONFIG_DOM0LESS_BOOT && !CONFIG_DOM0_BOOT,
or something like that).
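
As a sketch of that direction (all names below are invented, nothing of this
exists yet): a per-domain flag set by the Hyperlaunch/dom0less parser, checked
where the dom0_*= overrides are applied today, with the override path compiled
out when only dom0less boot is configured:

    /* Hypothetical flag in boot_domain and made-up CONFIG_DOM0_BOOT symbol. */
    struct boot_domain {
        /* ... existing fields ... */
        bool apply_dom0_overrides;        /* honour dom0_*= for this domain */
    };

    /* ... then, in something like alloc_dom_vcpu0(), given bd and d ... */
    #ifdef CONFIG_DOM0_BOOT
        if ( bd->apply_dom0_overrides )
        {
            d->node_affinity = dom0_nodes;
            d->auto_node_affinity = !dom0_nr_pxms;
        }
        else
    #endif
            d->auto_node_affinity = true;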

Thoughts?

Cheers,
Alejandro



 

