
Re: [PATCH v5] x86: make Viridian support optional


  • To: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 13 Oct 2025 14:17:54 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Alejandro Vallejo <alejandro.garciavallejo@xxxxxxx>
  • Delivery-date: Mon, 13 Oct 2025 12:18:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Sep 30, 2025 at 12:52:16PM +0000, Grygorii Strashko wrote:
> From: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
> 
> Add config option VIRIDIAN that covers viridian code within HVM.
> Calls to viridian functions are guarded by is_viridian_domain() and
> related macros.
> Having this option may be beneficial by reducing code footprint for systems
> that are not using Hyper-V.
> 
> [grygorii_strashko@xxxxxxxx: fixed NULL pointer deref in
> viridian_save_domain_ctxt()]
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
> Signed-off-by: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
> ---
> changes in v5:
> - drop "depends on AMD_SVM || INTEL_VMX"
> - return -EILSEQ from viridian_load_x() if !VIRIDIAN
> 
> changes in v4:
> - s/HVM_VIRIDIAN/VIRIDIAN
> - add "depends on AMD_SVM || INTEL_VMX"
> - add guard !is_viridian_vcpu() checks in 
> viridian_load_vcpu_ctxt/viridian_load_domain_ctxt
> 
> changes in v3:
> - fixed NULL pointer deref in viridian_save_domain_ctxt() reported for v2,
>   which caused v2 revert by commit 1fffcf10cd71 ("Revert "x86: make Viridian
>   support optional"")
> 
> v4: 
> https://patchwork.kernel.org/project/xen-devel/patch/20250919163139.2821531-1-grygorii_strashko@xxxxxxxx/
> v3: 
> https://patchwork.kernel.org/project/xen-devel/patch/20250916134114.2214104-1-grygorii_strashko@xxxxxxxx/
> v2: 
> https://patchwork.kernel.org/project/xen-devel/patch/20250321092633.3982645-1-Sergiy_Kibrik@xxxxxxxx/
> 
>  xen/arch/x86/hvm/Kconfig              | 10 ++++++++++
>  xen/arch/x86/hvm/Makefile             |  2 +-
>  xen/arch/x86/hvm/hvm.c                | 27 ++++++++++++++++++---------
>  xen/arch/x86/hvm/viridian/viridian.c  | 14 ++++++++++----
>  xen/arch/x86/hvm/vlapic.c             | 11 +++++++----
>  xen/arch/x86/include/asm/hvm/domain.h |  2 ++
>  xen/arch/x86/include/asm/hvm/hvm.h    |  3 ++-
>  xen/arch/x86/include/asm/hvm/vcpu.h   |  2 ++
>  8 files changed, 52 insertions(+), 19 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/Kconfig b/xen/arch/x86/hvm/Kconfig
> index 5cb9f2904255..928bb5662b89 100644
> --- a/xen/arch/x86/hvm/Kconfig
> +++ b/xen/arch/x86/hvm/Kconfig
> @@ -62,6 +62,16 @@ config ALTP2M
>  
>         If unsure, stay with defaults.
>  
> +config VIRIDIAN
> +     bool "Hyper-V enlightenments for guests" if EXPERT
> +     default y
> +     help
> +       Support optimizations for Hyper-V guests such as faster hypercalls,
> +       efficient timer and interrupt handling, and enhanced paravirtualized
> +       I/O. This is to improve performance and compatibility of Windows VMs.

I would leave "paravirtualized I/O" out of the text, as the hypervisor
Viridian extensions don't provide anything related to I/O.  AFAICT
that would be the vmbus stuff, which I'm not sure is supported when
running as a Xen guest, and would require QEMU to emulate such
interfaces?  IOW: the paravirtualized I/O part is out of scope for a
hypervisor-only config option:

          Support optimizations for Hyper-V guests such as hypercalls,
          efficient timers and interrupt handling. This is to improve
          performance and compatibility of Windows VMs.

Nit: I would also drop the "faster" prefix for hypercalls.  Without
this option enabled there are no Hyper-V hypercalls available,
neither fast nor slow.


> +
> +       If unsure, say Y.
> +
>  config MEM_PAGING
>       bool "Xen memory paging support (UNSUPPORTED)" if UNSUPPORTED
>       depends on VM_EVENT
> diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
> index 6ec2c8f2db56..736eb3f966e9 100644
> --- a/xen/arch/x86/hvm/Makefile
> +++ b/xen/arch/x86/hvm/Makefile
> @@ -1,6 +1,6 @@
>  obj-$(CONFIG_AMD_SVM) += svm/
>  obj-$(CONFIG_INTEL_VMX) += vmx/
> -obj-y += viridian/
> +obj-$(CONFIG_VIRIDIAN) += viridian/
>  
>  obj-y += asid.o
>  obj-y += dm.o
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 23bd7f078a1d..95a80369b9b8 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -701,9 +701,12 @@ int hvm_domain_initialise(struct domain *d,
>      if ( hvm_tsc_scaling_supported )
>          d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
>  
> -    rc = viridian_domain_init(d);
> -    if ( rc )
> -        goto fail2;
> +    if ( is_viridian_domain(d) )
> +    {
> +        rc = viridian_domain_init(d);
> +        if ( rc )
> +            goto fail2;
> +    }

Are you sure this works as expected?

The viridian_feature_mask() check is implemented using an HVM param,
which can only be set after the domain object is created.  AFAICT
is_viridian_domain(d) will unconditionally return false when called
from domain_create() context, because the HVM params cannot possibly
be set ahead of the domain being created.

If you want to do anything like this you will possibly need to
introduce a new flag to XEN_DOMCTL_createdomain to signal whether the
domain has Viridian extensions enabled, so that it's known in the
context where domain_create() gets called.

Given that Hyper-V is also available on arm64, it should be a global
flag, as opposed to a per-arch one in xen_arch_domainconfig, IMO.

>  
>      rc = alternative_call(hvm_funcs.domain_initialise, d);
>      if ( rc != 0 )
> @@ -739,7 +742,8 @@ void hvm_domain_relinquish_resources(struct domain *d)
>      if ( hvm_funcs.nhvm_domain_relinquish_resources )
>          alternative_vcall(hvm_funcs.nhvm_domain_relinquish_resources, d);
>  
> -    viridian_domain_deinit(d);
> +    if ( is_viridian_domain(d) )
> +        viridian_domain_deinit(d);
>  
>      ioreq_server_destroy_all(d);
>  
> @@ -1643,9 +1647,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
>           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: 
> nestedhvm_vcpu_destroy */
>          goto fail5;
>  
> -    rc = viridian_vcpu_init(v);
> -    if ( rc )
> -        goto fail6;
> +    if ( is_viridian_domain(d) )
> +    {
> +        rc = viridian_vcpu_init(v);
> +        if ( rc )
> +            goto fail6;
> +    }
>  
>      rc = ioreq_server_add_vcpu_all(d, v);
>      if ( rc != 0 )
> @@ -1675,13 +1682,15 @@ int hvm_vcpu_initialise(struct vcpu *v)
>   fail2:
>      hvm_vcpu_cacheattr_destroy(v);
>   fail1:
> -    viridian_vcpu_deinit(v);
> +    if ( is_viridian_domain(d) )
> +        viridian_vcpu_deinit(v);
>      return rc;
>  }
>  
>  void hvm_vcpu_destroy(struct vcpu *v)
>  {
> -    viridian_vcpu_deinit(v);
> +    if ( is_viridian_domain(v->domain) )
> +        viridian_vcpu_deinit(v);
>  
>      ioreq_server_remove_vcpu_all(v->domain, v);
>  
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c 
> b/xen/arch/x86/hvm/viridian/viridian.c
> index c0be24bd2210..1212cc418728 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -1116,14 +1116,14 @@ static int cf_check viridian_save_domain_ctxt(
>  {
>      const struct domain *d = v->domain;
>      const struct viridian_domain *vd = d->arch.hvm.viridian;
> -    struct hvm_viridian_domain_context ctxt = {
> -        .hypercall_gpa = vd->hypercall_gpa.raw,
> -        .guest_os_id = vd->guest_os_id.raw,
> -    };
> +    struct hvm_viridian_domain_context ctxt = {};
>  
>      if ( !is_viridian_domain(d) )
>          return 0;
>  
> +    ctxt.hypercall_gpa = vd->hypercall_gpa.raw;
> +    ctxt.guest_os_id = vd->guest_os_id.raw;
> +
>      viridian_time_save_domain_ctxt(d, &ctxt);
>      viridian_synic_save_domain_ctxt(d, &ctxt);
>  
> @@ -1136,6 +1136,9 @@ static int cf_check viridian_load_domain_ctxt(
>      struct viridian_domain *vd = d->arch.hvm.viridian;
>      struct hvm_viridian_domain_context ctxt;
>  
> +    if ( !is_viridian_domain(d) )
> +        return -EILSEQ;
> +
>      if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
>          return -EINVAL;
>  
> @@ -1172,6 +1175,9 @@ static int cf_check viridian_load_vcpu_ctxt(
>      struct vcpu *v;
>      struct hvm_viridian_vcpu_context ctxt;
>  
> +    if ( !is_viridian_domain(d) )
> +        return -EILSEQ;

Nit: we usually use EILSEQ for unreachable conditions, but here it's a
toolstack-controlled input.  Maybe we could instead use ENODEV here?

As it's not really an illegal restore stream, but the selected guest
configuration doesn't match what's then loaded.

Thanks, Roger.
