[PATCH v2 2/6] x86/HVM: restrict guest-induced WBINVD to cache writeback
We allow guests' use of WBINVD for writeback purposes only anyway, so
let's also carry these flushes out that way on capable hardware. With
it now known that WBNOINVD uses the same VM exit code as WBINVD on
both SVM and VT-x, we can also expose the feature that way, without
further distinguishing the specific cases of those VM exits.

Note that on SVM this builds upon INSTR_WBINVD also covering WBNOINVD,
as the decoder won't set prefix-related bits for this encoding in the
resulting canonicalized opcode.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v2: FLUSH_WRITEBACK -> FLUSH_CACHE_WRITEBACK.

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2364,7 +2364,7 @@ static void svm_vmexit_mce_intercept(
 static void cf_check svm_wbinvd_intercept(void)
 {
     if ( cache_flush_permitted(current->domain) )
-        flush_all(FLUSH_CACHE);
+        flush_all(FLUSH_CACHE_WRITEBACK);
 }
 
 static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs,
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1881,12 +1881,12 @@ void cf_check vmx_do_resume(void)
     {
         /*
          * For pass-through domain, guest PCI-E device driver may leverage the
-         * "Non-Snoop" I/O, and explicitly WBINVD or CLFLUSH to a RAM space.
-         * Since migration may occur before WBINVD or CLFLUSH, we need to
-         * maintain data consistency either by:
-         * 1: flushing cache (wbinvd) when the guest is scheduled out if
+         * "Non-Snoop" I/O, and explicitly WB{NO,}INVD or CL{WB,FLUSH} RAM space.
+         * Since migration may occur before WB{NO,}INVD or CL{WB,FLUSH}, we need
+         * to maintain data consistency either by:
+         * 1: flushing cache (wbnoinvd) when the guest is scheduled out if
          *    there is no wbinvd exit, or
-         * 2: execute wbinvd on all dirty pCPUs when guest wbinvd exits.
+         * 2: execute wbnoinvd on all dirty pCPUs when guest wbinvd exits.
          * If VT-d engine can force snooping, we don't need to do these.
          */
         if ( has_arch_pdevs(v->domain) && !iommu_snoop
@@ -1894,7 +1894,7 @@ void cf_check vmx_do_resume(void)
         {
             int cpu = v->arch.hvm.vmx.active_cpu;
             if ( cpu != -1 )
-                flush_mask(cpumask_of(cpu), FLUSH_CACHE);
+                flush_mask(cpumask_of(cpu), FLUSH_CACHE_WRITEBACK);
         }
 
         vmx_clear_vmcs(v);
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3714,9 +3714,9 @@ static void cf_check vmx_wbinvd_intercep
         return;
 
     if ( cpu_has_wbinvd_exiting )
-        flush_all(FLUSH_CACHE);
+        flush_all(FLUSH_CACHE_WRITEBACK);
     else
-        wbinvd();
+        wbnoinvd();
 }
 
 static void ept_handle_violation(ept_qual_t q, paddr_t gpa)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -238,7 +238,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
 XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
-XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*   WBNOINVD instruction */
+XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*S  WBNOINVD instruction */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
 XEN_CPUFEATURE(IBRS,          8*32+14) /*S  MSR_SPEC_CTRL.IBRS */
 XEN_CPUFEATURE(AMD_STIBP,     8*32+15) /*S  MSR_SPEC_CTRL.STIBP */
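For readers unfamiliar with the instruction pair: WBNOINVD is WBINVD
with an F3 (REP) prefix, i.e. encoding F3 0F 09. It writes back all
modified cache lines but leaves cache contents valid rather than
invalidating them. Hardware lacking the feature ignores the prefix and
executes a plain WBINVD, so the prefixed form can be emitted
unconditionally; at worst the flush is stronger (invalidating) than
requested. A minimal sketch of what such helpers can look like - the
real wbnoinvd() definition presumably comes from an earlier patch in
this series and may differ in detail:

    static inline void wbinvd(void)
    {
        /* Write back and invalidate all cache lines. */
        asm volatile ( "wbinvd" ::: "memory" );
    }

    static inline void wbnoinvd(void)
    {
        /*
         * "repe; wbinvd" emits F3 0F 09 (WBNOINVD).  CPUs without the
         * feature ignore the F3 prefix and perform a regular WBINVD,
         * i.e. the fallback is the stronger, invalidating flush.
         */
        asm volatile ( "repe; wbinvd" ::: "memory" );
    }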