
Re: [PATCH v5] x86: make Viridian support optional


  • To: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 14 Oct 2025 16:38:46 +0200
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Alejandro Vallejo <alejandro.garciavallejo@xxxxxxx>
  • Delivery-date: Tue, 14 Oct 2025 14:39:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Oct 14, 2025 at 04:24:53PM +0300, Grygorii Strashko wrote:
> On 13.10.25 15:17, Roger Pau Monné wrote:
> > On Tue, Sep 30, 2025 at 12:52:16PM +0000, Grygorii Strashko wrote:
> > > From: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
> > > +
> > > +   If unsure, say Y.
> > > +
> > >   config MEM_PAGING
> > >           bool "Xen memory paging support (UNSUPPORTED)" if UNSUPPORTED
> > >           depends on VM_EVENT
> > > diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
> > > index 6ec2c8f2db56..736eb3f966e9 100644
> > > --- a/xen/arch/x86/hvm/Makefile
> > > +++ b/xen/arch/x86/hvm/Makefile
> > > @@ -1,6 +1,6 @@
> > >   obj-$(CONFIG_AMD_SVM) += svm/
> > >   obj-$(CONFIG_INTEL_VMX) += vmx/
> > > -obj-y += viridian/
> > > +obj-$(CONFIG_VIRIDIAN) += viridian/
> > >   obj-y += asid.o
> > >   obj-y += dm.o
> > > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > > index 23bd7f078a1d..95a80369b9b8 100644
> > > --- a/xen/arch/x86/hvm/hvm.c
> > > +++ b/xen/arch/x86/hvm/hvm.c
> > > @@ -701,9 +701,12 @@ int hvm_domain_initialise(struct domain *d,
> > >       if ( hvm_tsc_scaling_supported )
> > >           d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
> > > -    rc = viridian_domain_init(d);
> > > -    if ( rc )
> > > -        goto fail2;
> > > +    if ( is_viridian_domain(d) )
> > > +    {
> > > +        rc = viridian_domain_init(d);
> > > +        if ( rc )
> > > +            goto fail2;
> > > +    }
> > 
> > Are you sure this works as expected?
> > 
> > The viridian_feature_mask() check is implemented using an HVM param,
> > and hence can only be possibly set after the domain object is created.
> > AFAICT is_viridian_domain(d) will unconditionally return false when
> > called from domain_create() context, because the HVM params cannot
> > possibly be set ahead of the domain being created.
> 
> You are right. Thanks for this catch.
> 
> Taking the above into account, it seems Jan's proposal to convert the below
> viridian APIs into wrappers for the VIRIDIAN=n case is the right way to move forward:
> 
> int viridian_vcpu_init(struct vcpu *v);
> int viridian_domain_init(struct domain *d);
> void viridian_vcpu_deinit(struct vcpu *v);
> void viridian_domain_deinit(struct domain *d);
> 
> Right?

Possibly.  If you don't want to introduce a XEN_DOMCTL_createdomain
flag, you need to exclusively use the Kconfig option to decide whether
the Viridian related structs must be allocated.  IOW: you could also
solve it by using IS_ENABLED(CONFIG_VIRIDIAN) instead of
is_viridian_domain() for most of the calls here.

The wrapper option might be better IMO, rather than sprinkling
IS_ENABLED(CONFIG_VIRIDIAN) checks all around.
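For the VIRIDIAN=n side, something along these lines (untested sketch;
the placement in viridian.h is my assumption, and struct vcpu/struct
domain are reduced to forward declarations here for illustration):

```c
/* Untested sketch of VIRIDIAN=n wrappers, e.g. in viridian.h.
 * struct vcpu/struct domain are only forward-declared for illustration. */
struct vcpu;
struct domain;

#ifdef CONFIG_VIRIDIAN
int viridian_vcpu_init(struct vcpu *v);
int viridian_domain_init(struct domain *d);
void viridian_vcpu_deinit(struct vcpu *v);
void viridian_domain_deinit(struct domain *d);
#else
/* Stubs: report success and do nothing when the code is compiled out. */
static inline int viridian_vcpu_init(struct vcpu *v) { return 0; }
static inline int viridian_domain_init(struct domain *d) { return 0; }
static inline void viridian_vcpu_deinit(struct vcpu *v) {}
static inline void viridian_domain_deinit(struct domain *d) {}
#endif
```

That way hvm_domain_initialise() and friends can keep calling the
functions unconditionally, like other optional subsystems do.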

> [1] https://patchwork.kernel.org/comment/26595213/
> 
> > 
> > If you want to do anything like this you will possibly need to
> > introduce a new flag to XEN_DOMCTL_createdomain to signal whether the
> > domain has Viridian extensions enabled or not, so that it's known
> > in the context where domain_create() gets called.
> 
> In my opinion, it might be good not to go that far within this submission:
> - it's not intended to change the existing behavior of either Xen or the
>   toolstack for VIRIDIAN=y (the default);
> - it just opts out of Viridian support when not needed.

OK, that's fine.

One further request though: if Viridian is disabled at build time in
Kconfig, setting or fetching HVM_PARAM_VIRIDIAN should return -ENODEV
or a similar error.  I don't think this is done as part of this patch.
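As a sketch of what I mean (hypothetical helper, not the real set/get
param code; the IS_ENABLED stand-in and the param value are
illustrative, the real value lives in public/hvm/params.h):

```c
#include <errno.h>

/* Illustrative stand-ins: the real check would use
 * IS_ENABLED(CONFIG_VIRIDIAN) against the Kconfig option. */
#define HVM_PARAM_VIRIDIAN 9   /* see public/hvm/params.h */
#define VIRIDIAN_ENABLED   0   /* pretend this is a VIRIDIAN=n build */

/* Hypothetical guard to be called from the set/get param paths:
 * reject the Viridian param when the code is compiled out. */
static int hvm_viridian_param_check(unsigned int index)
{
    if ( index == HVM_PARAM_VIRIDIAN && !VIRIDIAN_ENABLED )
        return -ENODEV;
    return 0;
}
```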

> > 
> > Given that Hyper-V is also available on arm64, it should be a global
> > flag, as opposed to a per-arch one in xen_arch_domainconfig, IMO.
> > >       rc = alternative_call(hvm_funcs.domain_initialise, d);
> > >       if ( rc != 0 )
> > > @@ -739,7 +742,8 @@ void hvm_domain_relinquish_resources(struct domain *d)
> > >       if ( hvm_funcs.nhvm_domain_relinquish_resources )
> > >           alternative_vcall(hvm_funcs.nhvm_domain_relinquish_resources, d);
> > > -    viridian_domain_deinit(d);
> > > +    if ( is_viridian_domain(d) )
> > > +        viridian_domain_deinit(d);
> > >       ioreq_server_destroy_all(d);
> > > @@ -1643,9 +1647,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
> > >            && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
> > >           goto fail5;
> > > -    rc = viridian_vcpu_init(v);
> > > -    if ( rc )
> > > -        goto fail6;
> > > +    if ( is_viridian_domain(d) )
> > > +    {
> > > +        rc = viridian_vcpu_init(v);
> > > +        if ( rc )
> > > +            goto fail6;
> > > +    }
> > >       rc = ioreq_server_add_vcpu_all(d, v);
> > >       if ( rc != 0 )
> > > @@ -1675,13 +1682,15 @@ int hvm_vcpu_initialise(struct vcpu *v)
> > >    fail2:
> > >       hvm_vcpu_cacheattr_destroy(v);
> > >    fail1:
> > > -    viridian_vcpu_deinit(v);
> > > +    if ( is_viridian_domain(d) )
> > > +        viridian_vcpu_deinit(v);
> > >       return rc;
> > >   }
> > >   void hvm_vcpu_destroy(struct vcpu *v)
> > >   {
> > > -    viridian_vcpu_deinit(v);
> > > +    if ( is_viridian_domain(v->domain) )
> > > +        viridian_vcpu_deinit(v);
> > >       ioreq_server_remove_vcpu_all(v->domain, v);
> > > diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> > > index c0be24bd2210..1212cc418728 100644
> > > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > > @@ -1116,14 +1116,14 @@ static int cf_check viridian_save_domain_ctxt(
> > >   {
> > >       const struct domain *d = v->domain;
> > >       const struct viridian_domain *vd = d->arch.hvm.viridian;
> > > -    struct hvm_viridian_domain_context ctxt = {
> > > -        .hypercall_gpa = vd->hypercall_gpa.raw,
> > > -        .guest_os_id = vd->guest_os_id.raw,
> > > -    };
> > > +    struct hvm_viridian_domain_context ctxt = {};
> > >       if ( !is_viridian_domain(d) )
> > >           return 0;
> > > +    ctxt.hypercall_gpa = vd->hypercall_gpa.raw;
> > > +    ctxt.guest_os_id = vd->guest_os_id.raw;
> > > +
> > >       viridian_time_save_domain_ctxt(d, &ctxt);
> > >       viridian_synic_save_domain_ctxt(d, &ctxt);
> > > @@ -1136,6 +1136,9 @@ static int cf_check viridian_load_domain_ctxt(
> > >       struct viridian_domain *vd = d->arch.hvm.viridian;
> > >       struct hvm_viridian_domain_context ctxt;
> > > +    if ( !is_viridian_domain(d) )
> > > +        return -EILSEQ;
> > > +
> > >       if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
> > >           return -EINVAL;
> > > @@ -1172,6 +1175,9 @@ static int cf_check viridian_load_vcpu_ctxt(
> > >       struct vcpu *v;
> > >       struct hvm_viridian_vcpu_context ctxt;
> > > +    if ( !is_viridian_domain(d) )
> > > +        return -EILSEQ;
> > 
> > Nit: we usually use EILSEQ for unreachable conditions, but here it's a
> > toolstack controlled input.  Maybe we could instead use ENODEV here?
> > 
> > As it's not really an illegal restore stream, but the selected guest
> > configuration doesn't match what's then loaded.
> 
> I'm a "worker bee" here and can change it once again to -ENODEV here.
> But it would be really good if this were agreed at the maintainers' level somehow.
> 
> EILSEQ was used as per [2]

Didn't know it was explicitly requested, then leave it like that and
ignore this comment.

Thanks, Roger.



 

