Re: [PATCH v3 7/9] x86/svm: VMEntry/Exit logic for MSR_SPEC_CTRL
On 31.01.2022 16:36, Andrew Cooper wrote:
> Hardware maintains both host and guest versions of MSR_SPEC_CTRL, but guests
> run with the logical OR of both values. Therefore, in principle we want to
> clear Xen's value before entering the guest. However, for migration
> compatibility,

I think you've explained this to me before, but I can't seem to put all of
it together right now. Could you expand on how a non-zero value behind a
guest's back can help with migration compatibility? At first glance I would
be inclined to say that only what the guest actually gets to see and use can
affect its migration.

> and for performance reasons with SEV-SNP guests, we want the
> ability to use a nonzero value behind the guest's back. Use vcpu_msrs to hold
> this value, with the guest value in the VMCB.
>
> On the VMEntry path, adjusting MSR_SPEC_CTRL must be done after CLGI so as to
> be atomic with respect to NMIs/etc.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

Preferably with the above expansion and with one further style issue (see
below) taken care of:
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

> --- a/xen/arch/x86/hvm/svm/entry.S
> +++ b/xen/arch/x86/hvm/svm/entry.S
> @@ -55,11 +55,23 @@ __UNLIKELY_END(nsvm_hap)
>          mov  %rsp, %rdi
>          call svm_vmenter_helper
>
> -        mov VCPU_arch_msrs(%rbx), %rax
> -        mov VCPUMSR_spec_ctrl_raw(%rax), %eax
> +        clgi
>
>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
> -        /* SPEC_CTRL_EXIT_TO_SVM   (nothing currently) */
> +        /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
> +        .macro svm_vmentry_spec_ctrl
> +            mov    VCPU_arch_msrs(%rbx), %rax
> +            movzbl CPUINFO_last_spec_ctrl(%rsp), %edx
> +            mov    VCPUMSR_spec_ctrl_raw(%rax), %eax
> +            cmp    %edx, %eax
> +            je 1f /* Skip write if value is correct. */

Would you mind padding the insn operand properly, in line with all others
nearby?

Jan
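
Purely as an illustration of the padding remark above, the quoted hunk with
the `je` operand aligned like its neighbours might look as follows (a sketch
only; the exact column widths are an assumption based on the adjacent
instructions, and the remainder of the macro is omitted):

            mov    VCPU_arch_msrs(%rbx), %rax
            movzbl CPUINFO_last_spec_ctrl(%rsp), %edx
            mov    VCPUMSR_spec_ctrl_raw(%rax), %eax
            cmp    %edx, %eax
            je     1f /* Skip write if value is correct. */

The only change relative to the quoted patch is the extra spacing after `je`,
matching the operand column used by `mov`, `movzbl` and `cmp`.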