
Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST use io-backend device.



On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
> >>> On 04.02.14 at 15:48, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> 
> >>> wrote:
> > On Tue, Feb 04, 2014 at 08:54:59AM +0000, Jan Beulich wrote:
> >> >>> On 03.02.14 at 18:03, Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx> wrote:
> >> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> >> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> >> > @@ -1400,7 +1400,7 @@ void nvmx_switch_guest(void)
> >> >       * no virtual vmswith is allowed. Or else, the following IO
> >> >       * emulation will handled in a wrong VCPU context.
> >> >       */
> >> > -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> >> > +    if ( get_ioreq(v) && get_ioreq(v)->state != STATE_IOREQ_NONE )
> >> 
> >> As Mukesh pointed out, calling get_ioreq() twice is inefficient.
> >> 
> >> But to me it's not clear whether a PVH vCPU getting here is wrong
> >> in the first place, i.e. I would think the above condition should be
> >> || rather than && (after all, even if nested HVM one day became
> > 
> > I presume you mean like this:
> > 
> >     if ( !get_ioreq(v) || get_ioreq(v)->state != STATE_IOREQ_NONE )
> >             return;
> > 
> > If the Intel maintainers are OK with that I can do it that way (and
> > only do one get_ioreq(v) call) and expand the comment.
> > 
> > Or just take the simple route and squash Mukesh's patch into mine and
> > revisit this later - as I would prefer to make the minimal amount of
> > changes to any code during rc3.
> 
> Wasn't it that Mukesh's patch simply was yours with the two
> get_ioreq()s folded by using a local variable?

Yes. Like so:

From 6864786da87c4d67998a8ef46a3d289ac85074a9 Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend device
(QEMU) may not be present at all.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend.

CC: Yang Zhang <yang.z.zhang@xxxxxxxxx>
CC: Jun Nakajima <jun.nakajima@xxxxxxxxx>
Signed-off-by: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..143b7a2 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-
+    ioreq_t *p = get_ioreq(v);
     /*
      * a pending IO emualtion may still no finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
      * emulation will handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( p && p->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6
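
As an aside (not part of the patch): the faulting linear address in the
crash log above lines up with the NULL dereference - 0x1e is the byte
offset of the 'state' field inside ioreq_t, so evaluating
get_ioreq(v)->state through a NULL pointer faults at exactly that
address. A minimal stand-alone sketch of that arithmetic, assuming a
field layout modelled on the public ioreq interface rather than quoting
the real header:

/* Stand-alone sketch, not Xen source: the layout below is an assumption
 * modelled on xen/include/public/hvm/ioreq.h.  The real 'state' is a
 * 4-bit bitfield; it is flattened to a plain byte here so offsetof()
 * can be applied, which does not change its byte offset. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

typedef struct ioreq_sketch {
    uint64_t addr;       /* offset  0: guest physical address            */
    uint64_t data;       /* offset  8: data (or paddr of data)           */
    uint32_t count;      /* offset 16: for rep prefixes                  */
    uint32_t size;       /* offset 20: size in bytes                     */
    uint32_t vp_eport;   /* offset 24: event channel to the device model */
    uint16_t _pad0;      /* offset 28                                    */
    uint8_t  state;      /* offset 30 (0x1e): STATE_IOREQ_*              */
    uint8_t  flags;      /* offset 31: dir/df/type etc., simplified      */
} ioreq_sketch_t;

int main(void)
{
    /* get_ioreq(v)->state with a NULL return reads from this offset,
     * matching "Faulting linear address: 000000000000001e" above.  */
    printf("offsetof(ioreq_sketch_t, state) = %#zx\n",
           offsetof(ioreq_sketch_t, state));
    return 0;
}

Compiled and run on x86-64 this prints 0x1e, matching the page-fault
address quoted in the commit message.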

