Re: [RFC PATCH v3 3/3] x86/hvm: Introduce Xen-wide ASID allocator
On 26.06.2025 16:01, Teddy Astie wrote:
> From: Vaishali Thakkar <vaishali.thakkar@xxxxxxxx> (formerly vates.tech)
>
> Currently, ASID generation and management is done per-pCPU. This
> scheme is incompatible with SEV technologies, as SEV VMs need to
> have a fixed ASID associated with all vCPUs of the VM throughout
> its lifetime.
>
> This commit introduces a Xen-wide allocator which initializes
> the ASIDs at the start of Xen and allows a fixed ASID to be kept
> throughout the lifecycle of all domains. Having a fixed ASID
> for non-SEV domains also presents us with the opportunity to
> further make use of AMD instructions like TLBSYNC and INVLPGB
> for broadcasting TLB invalidations.
>
> Introduce the vcpu->needs_tlb_flush attribute to schedule a guest TLB
> flush for the next VMRUN/VMENTER. This will later be done using
> either the TLB_CONTROL field (AMD) or INVEPT (Intel). This flush method
> is used in place of the current ASID swapping logic.
>
> Signed-off-by: Teddy Astie <teddy.astie@xxxxxxxxxx>
> Signed-off-by: Vaishali Thakkar <vaishali.thakkar@xxxxxxxx> (formerly
> vates.tech)

I'm not sure whether you may legitimately alter a pre-existing S-o-b. In
any event, the two S-o-b tags look to be in the wrong order; like most
other tags, they want to be sorted chronologically.

> ---
> CC: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
>
> Should the ASID/VPID/VMID management logic be shared across
> x86/ARM/RISC-V?

Only if all present and future architectures agree on how exactly they
want to use these IDs, which I'm unsure of. RISC-V is now vaguely
following the original x86 model.

> Is it possible to have multiple vCPUs of the same domain simultaneously
> scheduled on top of a single pCPU? If so, it would need special
> consideration for this corner case, so that we don't miss a TLB flush
> in such cases.

No; how would two entities be able to run on a single pCPU at any single
point in time?
> I see varying stability when testing shadow paging with these patches;
> I'm unsure what the exact root cause is. HAP works perfectly fine,
> though.
>
> TODO:
> - Intel: Don't assign the VPID at each VMENTER, though we need
>   to rethink how we manage the VMCS with nested virtualization / altp2m
>   to change this behavior.
> - AMD: Consider hot-plug of a CPU with ERRATA_170. (Is it possible?)
> - Consider cases where we don't have enough ASIDs (e.g. Xen as a
>   nested guest).
> - Nested virtualization ASID management.

For these last two points - maybe we really need a mixed model?

> ---
>  xen/arch/x86/flushtlb.c                |  31 +++---
>  xen/arch/x86/hvm/asid.c                | 148 +++++++++----------------
>  xen/arch/x86/hvm/emulate.c             |   2 +-
>  xen/arch/x86/hvm/hvm.c                 |  14 ++-
>  xen/arch/x86/hvm/nestedhvm.c           |   6 +-
>  xen/arch/x86/hvm/svm/asid.c            |  77 ++++++++-----
>  xen/arch/x86/hvm/svm/nestedsvm.c       |   1 -
>  xen/arch/x86/hvm/svm/svm.c             |  36 +++---
>  xen/arch/x86/hvm/svm/svm.h             |   4 -
>  xen/arch/x86/hvm/vmx/vmcs.c            |   5 +-
>  xen/arch/x86/hvm/vmx/vmx.c             |  68 ++++++------
>  xen/arch/x86/hvm/vmx/vvmx.c            |   5 +-
>  xen/arch/x86/include/asm/flushtlb.h    |   7 --
>  xen/arch/x86/include/asm/hvm/asid.h    |  25 ++---
>  xen/arch/x86/include/asm/hvm/domain.h  |   1 +
>  xen/arch/x86/include/asm/hvm/hvm.h     |  15 +--
>  xen/arch/x86/include/asm/hvm/svm/svm.h |   5 +
>  xen/arch/x86/include/asm/hvm/vcpu.h    |  10 +-
>  xen/arch/x86/include/asm/hvm/vmx/vmx.h |   4 +-
>  xen/arch/x86/mm/hap/hap.c              |   6 +-
>  xen/arch/x86/mm/p2m.c                  |   7 +-
>  xen/arch/x86/mm/paging.c               |   2 +-
>  xen/arch/x86/mm/shadow/hvm.c           |   1 +
>  xen/arch/x86/mm/shadow/multi.c         |  12 +-
>  xen/include/xen/sched.h                |   2 +
>  25 files changed, 227 insertions(+), 267 deletions(-)

I think I said this to Vaishali already: this really wants splitting up
some, to become halfway reviewable.

Jan