
Re: [PATCH v2] x86/xen: Add support for HVMOP_set_evtchn_upcall_vector


  • To: LKML <linux-kernel@xxxxxxxxxxxxxxx>
  • From: Jane Malalane <Jane.Malalane@xxxxxxxxxx>
  • Date: Tue, 26 Jul 2022 13:23:27 +0000
  • Cc: Juergen Gross <jgross@xxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxxxxx>, Borislav Petkov <bp@xxxxxxxxx>, Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>, "x86@xxxxxxxxxx" <x86@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Maximilian Heyne <mheyne@xxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Colin Ian King <colin.king@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 26 Jul 2022 13:23:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2] x86/xen: Add support for HVMOP_set_evtchn_upcall_vector

On 26/07/2022 13:56, Jane Malalane wrote:
> Implement support for the HVMOP_set_evtchn_upcall_vector hypercall in
> order to set the per-vCPU event channel vector callback on Linux and
> use it in preference to HVM_PARAM_CALLBACK_IRQ.
> 
> If the per-vCPU vector setup is successful on the BSP, use this method
> for the APs as well. If not, fall back to the global vector-type
> callback.
> 
> Also register callback_irq at per-vCPU event channel setup to trick
> the toolstack into thinking the domain is enlightened.
> 
> Suggested-by: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
> Signed-off-by: Jane Malalane <jane.malalane@xxxxxxxxxx>
> ---
> CC: Juergen Gross <jgross@xxxxxxxx>
> CC: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> CC: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> CC: Ingo Molnar <mingo@xxxxxxxxxx>
> CC: Borislav Petkov <bp@xxxxxxxxx>
> CC: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> CC: x86@xxxxxxxxxx
> CC: "H. Peter Anvin" <hpa@xxxxxxxxx>
> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> CC: Jane Malalane <jane.malalane@xxxxxxxxxx>
> CC: Maximilian Heyne <mheyne@xxxxxxxxx>
> CC: Jan Beulich <jbeulich@xxxxxxxx>
> CC: Colin Ian King <colin.king@xxxxxxxxx>
> CC: xen-devel@xxxxxxxxxxxxxxxxxxxx
> 
> v2:
>   * remove no_vector_callback
>   * make xen_have_vector_callback a bool
>   * rename xen_ack_upcall to xen_percpu_upcall
>   * fail to bring CPU up on init instead of crashing kernel
>   * add and use xen_set_upcall_vector where suitable
>   * xen_setup_upcall_vector -> xen_init_setup_upcall_vector for clarity
> ---
>   arch/x86/include/asm/xen/cpuid.h   |  2 ++
>   arch/x86/include/asm/xen/events.h  |  3 ++-
>   arch/x86/xen/enlighten.c           |  2 +-
>   arch/x86/xen/enlighten_hvm.c       | 24 ++++++++++++++-----
>   arch/x86/xen/suspend_hvm.c         | 10 +++++++-
>   drivers/xen/events/events_base.c   | 49 ++++++++++++++++++++++++++++++++++----
>   include/xen/hvm.h                  |  2 ++
>   include/xen/interface/hvm/hvm_op.h | 15 ++++++++++++
>   8 files changed, 93 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/cpuid.h b/arch/x86/include/asm/xen/cpuid.h
> index 78e667a31d6c..6daa9b0c8d11 100644
> --- a/arch/x86/include/asm/xen/cpuid.h
> +++ b/arch/x86/include/asm/xen/cpuid.h
> @@ -107,6 +107,8 @@
>    * ID field from 8 to 15 bits, allowing to target APIC IDs up 32768.
>    */
>   #define XEN_HVM_CPUID_EXT_DEST_ID      (1u << 5)
> +/* Per-vCPU event channel upcalls */
> +#define XEN_HVM_CPUID_UPCALL_VECTOR    (1u << 6)
>   
>   /*
>    * Leaf 6 (0x40000x05)
> diff --git a/arch/x86/include/asm/xen/events.h b/arch/x86/include/asm/xen/events.h
> index 068d9b067c83..62bdceb594f1 100644
> --- a/arch/x86/include/asm/xen/events.h
> +++ b/arch/x86/include/asm/xen/events.h
> @@ -23,7 +23,7 @@ static inline int xen_irqs_disabled(struct pt_regs *regs)
>   /* No need for a barrier -- XCHG is a barrier on x86. */
>   #define xchg_xen_ulong(ptr, val) xchg((ptr), (val))
>   
> -extern int xen_have_vector_callback;
> +extern bool xen_have_vector_callback;
>   
>   /*
>    * Events delivered via platform PCI interrupts are always
> @@ -34,4 +34,5 @@ static inline bool xen_support_evtchn_rebind(void)
>       return (!xen_hvm_domain() || xen_have_vector_callback);
>   }
>   
> +extern bool xen_percpu_upcall;
>   #endif /* _ASM_X86_XEN_EVENTS_H */
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index 30c6e986a6cd..b8db2148c07d 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -51,7 +51,7 @@ EXPORT_SYMBOL_GPL(xen_start_info);
>   
>   struct shared_info xen_dummy_shared_info;
>   
> -__read_mostly int xen_have_vector_callback;
> +__read_mostly bool xen_have_vector_callback = true;
>   EXPORT_SYMBOL_GPL(xen_have_vector_callback);
>   
>   /*
> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
> index 8b71b1dd7639..198d3cd3e9a5 100644
> --- a/arch/x86/xen/enlighten_hvm.c
> +++ b/arch/x86/xen/enlighten_hvm.c
> @@ -7,6 +7,8 @@
>   
>   #include <xen/features.h>
>   #include <xen/events.h>
> +#include <xen/hvm.h>
> +#include <xen/interface/hvm/hvm_op.h>
>   #include <xen/interface/memory.h>
>   
>   #include <asm/apic.h>
> @@ -30,6 +32,9 @@
>   
>   static unsigned long shared_info_pfn;
>   
> +__ro_after_init bool xen_percpu_upcall;
> +EXPORT_SYMBOL_GPL(xen_percpu_upcall);
> +
>   void xen_hvm_init_shared_info(void)
>   {
>       struct xen_add_to_physmap xatp;
> @@ -125,6 +130,9 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_xen_hvm_callback)
>   {
>       struct pt_regs *old_regs = set_irq_regs(regs);
>   
> +     if (xen_percpu_upcall)
> +             ack_APIC_irq();
> +
>       inc_irq_stat(irq_hv_callback_count);
>   
>       xen_hvm_evtchn_do_upcall();
> @@ -168,6 +176,15 @@ static int xen_cpu_up_prepare_hvm(unsigned int cpu)
>       if (!xen_have_vector_callback)
>               return 0;
>   
> +     if (xen_percpu_upcall) {
> +             rc = xen_set_upcall_vector(cpu);
> +             if (rc) {
> +                     WARN(1, "HVMOP_set_evtchn_upcall_vector"
> +                          " for CPU %d failed: %d\n", cpu, rc);
> +                     return rc;
> +             }
> +     }
> +
>       if (xen_feature(XENFEAT_hvm_safe_pvclock))
>               xen_setup_timer(cpu);
>   
> @@ -188,8 +205,6 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
>       return 0;
>   }
>   
> -static bool no_vector_callback __initdata;
> -
>   static void __init xen_hvm_guest_init(void)
>   {
>       if (xen_pv_domain())
> @@ -211,9 +226,6 @@ static void __init xen_hvm_guest_init(void)
>   
>       xen_panic_handler_init();
>   
> -     if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
> -             xen_have_vector_callback = 1;
> -
>       xen_hvm_smp_init();
>       WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_hvm, xen_cpu_dead_hvm));
>       xen_unplug_emulated_devices();
> @@ -239,7 +251,7 @@ early_param("xen_nopv", xen_parse_nopv);
>   
>   static __init int xen_parse_no_vector_callback(char *arg)
>   {
> -     no_vector_callback = true;
> +     xen_have_vector_callback = false;
>       return 0;
>   }
>   early_param("xen_no_vector_callback", xen_parse_no_vector_callback);
> diff --git a/arch/x86/xen/suspend_hvm.c b/arch/x86/xen/suspend_hvm.c
> index 9d548b0c772f..0c4f7554b7cc 100644
> --- a/arch/x86/xen/suspend_hvm.c
> +++ b/arch/x86/xen/suspend_hvm.c
> @@ -5,6 +5,7 @@
>   #include <xen/hvm.h>
>   #include <xen/features.h>
>   #include <xen/interface/features.h>
> +#include <xen/events.h>
>   
>   #include "xen-ops.h"
>   
> @@ -14,6 +15,13 @@ void xen_hvm_post_suspend(int suspend_cancelled)
>               xen_hvm_init_shared_info();
>               xen_vcpu_restore();
>       }
> -     xen_setup_callback_vector();
> +     if (xen_percpu_upcall) {
> +             unsigned int cpu;
> +
> +             for_each_online_cpu(cpu)
> +                     BUG_ON(xen_set_upcall_vector(cpu));
> +     } else {
> +             xen_setup_callback_vector();
> +     }
>       xen_unplug_emulated_devices();
>   }
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index 46d9295d9a6e..2ad93595d03a 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -48,6 +48,7 @@
>   #include <asm/xen/pci.h>
>   #endif
>   #include <asm/sync_bitops.h>
> +#include <asm/xen/cpuid.h>
>   #include <asm/xen/hypercall.h>
>   #include <asm/xen/hypervisor.h>
>   #include <xen/page.h>
> @@ -2195,11 +2196,48 @@ void xen_setup_callback_vector(void)
>               callback_via = HVM_CALLBACK_VECTOR(HYPERVISOR_CALLBACK_VECTOR);
>               if (xen_set_callback_via(callback_via)) {
>                       pr_err("Request for Xen HVM callback vector failed\n");
> -                     xen_have_vector_callback = 0;
> +                     xen_have_vector_callback = false;
>               }
>       }
>   }
>   
> +/* Setup per-vCPU vector-type callbacks and trick toolstack to think
> + * we are enlightened. If this setup is unavailable, fallback to the
> + * global vector-type callback. */
> +static __init void xen_init_setup_upcall_vector(void)
> +{
> +     unsigned int cpu = 0;
> +
> +     if (!xen_have_vector_callback)
> +             return;
> +
> +     if ((cpuid_eax(xen_cpuid_base() + 4) & XEN_HVM_CPUID_UPCALL_VECTOR) &&
> +         !xen_set_upcall_vector(cpu) && !xen_set_callback_via(1))
I should have removed the xen_set_callback_via(1) call here. I'll send
another version, but would like to wait for any comments before I do.
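For reference, one possible shape of the probe without that call would be
the following. This is only an untested sketch of what I have in mind for
v3, not the final code; where the xen_set_callback_via() toolstack trick
ends up is still to be decided:

```c
/* Untested sketch for v3: probe per-vCPU upcall support on the BSP
 * without xen_set_callback_via(1) in the condition. On failure, fall
 * back to the global vector-type callback as before. */
static __init void xen_init_setup_upcall_vector(void)
{
	unsigned int cpu = 0;

	if (!xen_have_vector_callback)
		return;

	if ((cpuid_eax(xen_cpuid_base() + 4) & XEN_HVM_CPUID_UPCALL_VECTOR) &&
	    !xen_set_upcall_vector(cpu))
		xen_percpu_upcall = true;
	else if (xen_feature(XENFEAT_hvm_callback_vector))
		xen_setup_callback_vector();
	else
		xen_have_vector_callback = false;
}
```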

Jane.

 

