Re: [XEN PATCH v2 07/15] x86: guard cpu_has_{svm/vmx} macros with CONFIG_{SVM/VMX}
On Thu, 23 May 2024, Jan Beulich wrote:
> On 23.05.2024 15:07, Sergiy Kibrik wrote:
> > 16.05.24 14:12, Jan Beulich:
> >> On 15.05.2024 11:12, Sergiy Kibrik wrote:
> >>> --- a/xen/arch/x86/include/asm/cpufeature.h
> >>> +++ b/xen/arch/x86/include/asm/cpufeature.h
> >>> @@ -81,7 +81,8 @@ static inline bool boot_cpu_has(unsigned int feat)
> >>>  #define cpu_has_sse3            boot_cpu_has(X86_FEATURE_SSE3)
> >>>  #define cpu_has_pclmulqdq       boot_cpu_has(X86_FEATURE_PCLMULQDQ)
> >>>  #define cpu_has_monitor         boot_cpu_has(X86_FEATURE_MONITOR)
> >>> -#define cpu_has_vmx             boot_cpu_has(X86_FEATURE_VMX)
> >>> +#define cpu_has_vmx             ( IS_ENABLED(CONFIG_VMX) && \
> >>> +                                  boot_cpu_has(X86_FEATURE_VMX))
> >>>  #define cpu_has_eist            boot_cpu_has(X86_FEATURE_EIST)
> >>>  #define cpu_has_ssse3           boot_cpu_has(X86_FEATURE_SSSE3)
> >>>  #define cpu_has_fma             boot_cpu_has(X86_FEATURE_FMA)
> >>> @@ -109,7 +110,8 @@ static inline bool boot_cpu_has(unsigned int feat)
> >>>
> >>>  /* CPUID level 0x80000001.ecx */
> >>>  #define cpu_has_cmp_legacy      boot_cpu_has(X86_FEATURE_CMP_LEGACY)
> >>> -#define cpu_has_svm             boot_cpu_has(X86_FEATURE_SVM)
> >>> +#define cpu_has_svm             ( IS_ENABLED(CONFIG_SVM) && \
> >>> +                                  boot_cpu_has(X86_FEATURE_SVM))
> >>>  #define cpu_has_sse4a           boot_cpu_has(X86_FEATURE_SSE4A)
> >>>  #define cpu_has_xop             boot_cpu_has(X86_FEATURE_XOP)
> >>>  #define cpu_has_skinit          boot_cpu_has(X86_FEATURE_SKINIT)
> >>
> >> Hmm, leaving aside the style issue (stray blanks after opening parentheses,
> >> and as a result one-off indentation on the wrapped lines) I'm not really
> >> certain we can do this. The description goes into detail why we would want
> >> this, but it doesn't cover at all why it is safe for all present (and
> >> ideally also future) uses. I wouldn't be surprised if we had VMX/SVM checks
> >> just to derive further knowledge from that, without them being directly
> >> related to the use of VMX/SVM. Take a look at calculate_hvm_max_policy(),
> >> for example. While it looks to be okay there, it may give you an idea of
> >> what I mean.
> >>
> >> Things might become better separated if instead for such checks we used
> >> host and raw CPU policies instead of cpuinfo_x86.x86_capability[]. But
> >> that's still pretty far out, I'm afraid.
> >
> > I've followed a suggestion you made for patch in previous series:
> >
> > https://lore.kernel.org/xen-devel/8fbd604e-5e5d-410c-880f-2ad257bbe08a@xxxxxxxx/
>
> See the "If not, ..." that I had put there. Doing the change just mechanically
> isn't enough, you also need to make clear (in the description) that you
> verified it's safe to have this way.

What does it mean to "verified it's safe to have this way"? "Safe" in what way?

> > yet if this approach can potentially be unsafe (I'm not completely sure
> > it's safe), should we instead fallback to the way it was done in v1
> > series? I.e. guard calls to vmx/svm-specific calls where needed, like in
> > these 3 patches:
> >
> > 1)
> > https://lore.kernel.org/xen-devel/20240416063328.3469386-1-Sergiy_Kibrik@xxxxxxxx/
> >
> > 2)
> > https://lore.kernel.org/xen-devel/20240416063740.3469592-1-Sergiy_Kibrik@xxxxxxxx/
> >
> > 3)
> > https://lore.kernel.org/xen-devel/20240416063947.3469718-1-Sergiy_Kibrik@xxxxxxxx/
>
> I don't like this sprinkling around of IS_ENABLED() very much. Maybe we want
> to have two new helpers (say using_svm() and using_vmx()), to be used in place
> of most but possibly not all cpu_has_{svm,vmx}? Doing such a transformation
> would then kind of implicitly answer the safety question above, as at every
> use site you'd need to judge whether the replacement is correct. If it's
> correct everywhere, the construct(s) as proposed in this version could then be
> considered to be used in this very shape (instead of introducing the two new
> helpers). But of course the transition could also be done gradually then,
> touching only those uses that previously you touched in 1), 2), and 3).
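
To make that suggestion concrete, here is a minimal sketch of what such
helpers could look like. Only the names using_svm()/using_vmx() come from the
thread above; the bodies simply reuse the IS_ENABLED()/boot_cpu_has()
combination from this patch and assume the same context as
asm/cpufeature.h. Nothing here is an agreed design:

    /*
     * Sketch only: possible using_{vmx,svm}() helpers, answering "is Xen
     * actually using VMX/SVM?" rather than "does the CPU advertise it?".
     */
    static inline bool using_vmx(void)
    {
        /* Xen built with VMX support and the CPU advertises VMX. */
        return IS_ENABLED(CONFIG_VMX) && boot_cpu_has(X86_FEATURE_VMX);
    }

    static inline bool using_svm(void)
    {
        /* Xen built with SVM support and the CPU advertises SVM. */
        return IS_ENABLED(CONFIG_SVM) && boot_cpu_has(X86_FEATURE_SVM);
    }

Presumably, call sites that only derive further knowledge from the raw
feature bits (the calculate_hvm_max_policy() kind of use mentioned above)
would then keep the unguarded cpu_has_{svm,vmx} checks.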