[XEN PATCH v1 10/15] x86/domain: guard svm specific functions with CONFIG_SVM
From: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>

The functions svm_load_segs() and svm_load_segs_prefetch() are AMD-V
specific, so guard their calls in common code with CONFIG_SVM. Since
SVM depends on HVM, CONFIG_SVM can be used on its own as the guard.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
---
 xen/arch/x86/domain.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 33a2830d9d..e10e453aa1 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1702,11 +1702,10 @@ static void load_segments(struct vcpu *n)
         if ( !(n->arch.flags & TF_kernel_mode) )
             SWAP(gsb, gss);
 
-#ifdef CONFIG_HVM
-        if ( cpu_has_svm && (uregs->fs | uregs->gs) <= 3 )
+        if ( IS_ENABLED(CONFIG_SVM) && cpu_has_svm &&
+             (uregs->fs | uregs->gs) <= 3 )
             fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
                                        n->arch.pv.fs_base, gsb, gss);
-#endif
     }
 
     if ( !fs_gs_done )
@@ -2019,11 +2018,10 @@ static void __context_switch(void)
 
     write_ptbase(n);
 
-#if defined(CONFIG_PV) && defined(CONFIG_HVM)
     /* Prefetch the VMCB if we expect to use it later in the context switch */
-    if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
+    if ( IS_ENABLED(CONFIG_PV) && IS_ENABLED(CONFIG_SVM) &&
+         cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
         svm_load_segs_prefetch();
-#endif
 
     if ( need_full_gdt(nd) && !per_cpu(full_gdt_loaded, cpu) )
         load_full_gdt(n, cpu);
-- 
2.25.1
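
For readers less familiar with the pattern the patch relies on, below is a
minimal, self-contained sketch (not part of the patch) of why an
IS_ENABLED()-style guard can replace an #ifdef: the guarded branch stays
visible to the compiler and is type-checked, and the compiler's dead-code
elimination drops it when the option is off. The CONFIG_SVM_ENABLED macro
and do_svm_work() below are hypothetical stand-ins, not Xen's real
IS_ENABLED() machinery or the functions touched by the patch.

    /* Illustrative sketch only; names are stand-ins, not Xen code. */
    #include <stdio.h>

    #define CONFIG_SVM_ENABLED 0   /* pretend the option is compiled out */

    static void do_svm_work(void)  /* stand-in for an SVM-only helper */
    {
        printf("SVM path taken\n");
    }

    int main(void)
    {
        /*
         * With "#ifdef CONFIG_SVM ... #endif" the call is removed before
         * the compiler ever sees it, so a CONFIG_SVM=n build does not
         * type-check it.  With a compile-time-constant condition as below,
         * the branch is parsed and checked in every configuration, but
         * dead-code elimination removes it when the constant is 0, so the
         * generated code is equivalent to the #ifdef form.
         */
        if ( CONFIG_SVM_ENABLED && 1 /* e.g. a runtime cpu_has_svm check */ )
            do_svm_work();

        return 0;
    }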