
[RFC PATCH 5/7] x86/hvm: Flush TLB on vCPU overlaps on the same pCPU


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Teddy Astie" <teddy.astie@xxxxxxxxxx>
  • Date: Wed, 15 Apr 2026 13:32:22 +0000
  • Authentication-results: eu.smtp.expurgate.cloud; dkim=pass header.s=mte1 header.d=mandrillapp.com header.i="@mandrillapp.com" header.h="From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:Date:MIME-Version:Content-Type:Content-Transfer-Encoding"; dkim=pass header.s=mte1 header.d=vates.tech header.i="teddy.astie@xxxxxxxxxx" header.h="From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:Date:MIME-Version:Content-Type:Content-Transfer-Encoding"
  • Cc: "Teddy Astie" <teddy.astie@xxxxxxxxxx>, "Jan Beulich" <jbeulich@xxxxxxxx>, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, "Roger Pau Monné" <roger.pau@xxxxxxxxxx>, "Jason Andryuk" <jason.andryuk@xxxxxxx>
  • Delivery-date: Wed, 15 Apr 2026 13:54:37 +0000
  • Feedback-id: 30504962:30504962.20260415:md
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

When using the same ASID/VPID for all vCPUs of a domain, we need
to make sure that a context switch between two vCPUs of the same
domain on the same pCPU doesn't miss a TLB flush, as the two vCPUs
may be using different CR3s.

Flush the TLB if the latest vCPU that ran on this pCPU differs
from the one currently entering.

Signed-off-by: Teddy Astie <teddy.astie@xxxxxxxxxx>
---
Would it be preferable to move this to common code (or shared x86 logic)?
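
If moving to shared logic, the check could be factored into one small helper used by both vmenter paths. A minimal sketch follows; the helper name is hypothetical and the structures are simplified stand-ins for Xen's `struct domain`/`struct vcpu` (only the fields this patch touches are modeled):

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 4

/* Simplified stand-ins for Xen's structures (illustrative only). */
struct domain {
    /* vcpu_id of the vCPU that last entered on each pCPU; -1 if none. */
    int latest_vcpu[NR_CPUS];
};

struct vcpu {
    struct domain *domain;
    int vcpu_id;
    bool needs_tlb_flush;
};

/*
 * Hypothetical shared helper: returns true when a different vCPU of the
 * same domain was the last to enter on this pCPU, i.e. when the TLB may
 * still hold entries tagged with the shared ASID/VPID but populated
 * under a different CR3. Also records the current vCPU as the latest.
 */
static bool hvm_vcpu_overlap_needs_flush(struct vcpu *v, unsigned int cpu)
{
    bool mismatch = v->domain->latest_vcpu[cpu] != v->vcpu_id;

    v->domain->latest_vcpu[cpu] = v->vcpu_id;

    return mismatch;
}
```

Each vmenter helper would then reduce to `curr->needs_tlb_flush |= hvm_vcpu_overlap_needs_flush(curr, cpu);` (again, hypothetical naming).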

 xen/arch/x86/hvm/svm/svm.c | 8 ++++++++
 xen/arch/x86/hvm/vmx/vmx.c | 7 +++++++
 2 files changed, 15 insertions(+)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 64c08432fd..8714fb18ec 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -981,6 +981,14 @@ void asmlinkage svm_vmenter_helper(void)
 
     svm_sync_vmcb(curr, vmcb_needs_vmsave);
 
+    /*
+     * Check whether we were the latest vCPU of this domain to run on this pCPU.
+     * If not, flush the TLB, as its entries are those of the previous vCPU.
+     * vCPU migration from one pCPU to another always implies a TLB flush.
+     */
+    if ( curr->domain->latest_vcpu[cpu] != curr->vcpu_id )
+        curr->needs_tlb_flush = true;
+
     curr->domain->latest_vcpu[cpu] = curr->vcpu_id;
 
     vmcb->rax = regs->rax;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 0e4f9f9c3d..dceff2f221 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4980,6 +4980,13 @@ bool asmlinkage vmx_vmenter_helper(const struct cpu_user_regs *regs)
 
     if ( unlikely(need_flush) )
         vpid_sync_all();
+    /*
+     * Check whether we were the latest vCPU of this domain to run on this pCPU.
+     * If not, flush the TLB, as its entries are those of the previous vCPU.
+     * vCPU migration from one pCPU to another always implies a TLB flush.
+     */
+    if ( currd->latest_vcpu[cpu] != curr->vcpu_id )
+        curr->needs_tlb_flush = true;
 
     currd->latest_vcpu[cpu] = curr->vcpu_id;
 
-- 
2.52.0



--
Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech
