From d76fc0d2f59ac65bd692adfa5f215da9ecf85d6a Mon Sep 17 00:00:00 2001
From: Mukesh Rathor
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression due to assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means that the IO-backend
device (QEMU) need not be enabled. As such, that patch can crash
the hypervisor:

Xen call trace:
   [] nvmx_switch_guest+0x4d/0x903
   [] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
 L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend.

In the case that the PVH guest does run an HVM guest inside it - we
need to do further work to support this - and for now the check will
bail us out.

We also fix spelling mistakes and the sentence structure.

CC: Yang Zhang
CC: Jun Nakajima
Suggested-by: Jan Beulich
Signed-off-by: Mukesh Rathor
Signed-off-by: Konrad Rzeszutek Wilk
---
 xen/arch/x86/hvm/vmx/vvmx.c | 10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..71522cf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,17 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
-     * a pending IO emualtion may still no finished. In this case,
+     * A pending IO emulation may still be not finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
-     * emulation will handled in a wrong VCPU context.
+     * emulation will be handled in a wrong VCPU context. If there are
+     * no IO backends - PVH guest by itself or a PVH guest with an HVM guest
+     * running inside - we don't want to continue as this setup is not
+     * implemented nor supported as of right now.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
        return;
    /*
     * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6