
Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST use io-backend device.



On 02/07/2014 03:41 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Feb 07, 2014 at 02:28:07AM +0000, Zhang, Yang Z wrote:
Konrad Rzeszutek Wilk wrote on 2014-02-05:
On Wed, Feb 05, 2014 at 02:35:51PM +0000, George Dunlap wrote:
On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
On 04.02.14 at 16:32, Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
Wasn't it that Mukesh's patch simply was yours with the two
get_ioreq()s folded by using a local variable?
Yes. Like so:
Thanks. Except that ...

--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
      struct vcpu *v = current;
      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
      struct cpu_user_regs *regs = guest_cpu_user_regs();
-
+    ioreq_t *p = get_ioreq(v);
... you don't want to drop the blank line, and naming the new
variable "ioreq" would seem preferable.

      /*
       * a pending IO emualtion may still no finished. In this case,
       * no virtual vmswith is allowed. Or else, the following IO
       * emulation will handled in a wrong VCPU context.
       */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( p && p->state != STATE_IOREQ_NONE )
And, as said before, I'd think "!p ||" instead of "p &&" would be
the right thing here. Yang, Jun?
I have two patches - the simpler one, which is pretty
straightforward, and the one you suggested. Either one fixes PVH
guests. I also did bootup tests with HVM guests to make sure they worked.

Attached and inline.
Sorry for the late response. I just got back from the Chinese New Year holiday.

But they do different things -- one does "ioreq && ioreq->state..."
Correct.
and the other does "!ioreq || ioreq->state...".  The first one is
incorrect, AFAICT.
Both of them fix the hypervisor blowing up with any PVH guest.
Both fixes look right to me.
The only concern is what we want to do here:
"ioreq && ioreq->state..." only applies the pending-IO check to a VCPU that
supports the IO request emulation mechanism, which currently means an HVM VCPU.
And "!ioreq || ioreq->state..." also bails out for a VCPU that doesn't support
the IO request emulation mechanism at all, which currently means a PVH VCPU.
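
To make the difference concrete, here is a minimal sketch - not the
actual patch, just the two guards side by side, with v, get_ioreq()
and STATE_IOREQ_NONE as used in xen/arch/x86/hvm/vmx/vvmx.c:

    ioreq_t *ioreq = get_ioreq(v);   /* NULL for a PVH vCPU: no ioreq page */

    /* Variant #1: "ioreq && ..." -- a PVH vCPU (ioreq == NULL) makes the
     * condition false, so it falls through into the nested-virt path. */
    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
        return;

    /* Variant #2: "!ioreq || ..." -- a PVH vCPU bails out here, and an HVM
     * vCPU bails out while IO emulation is pending. Only an HVM vCPU with
     * no pending IO request continues. */
    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
        return;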

The purpose of my original patch was only to allow an HVM VCPU that has no
pending IO request to continue the nested check, not to distinguish
whether the VCPU is HVM or PVH. So I prefer to only let an HVM VCPU get
here, as Jan mentioned before: a non-HVM domain should never call
nested-related functions at all unless it also supports nesting.
So it sounds like you prefer the #2 patch.

Can I stick Acked-by on it?


 From d76fc0d2f59ac65bd692adfa5f215da9ecf85d6a Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression due to assumption that HVM paths MUST
  use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend
device (QEMU) need not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
     [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
     [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
   L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend. In the case that the
PVH guest does run an HVM guest inside it, we need to do
further work to support this, and for now the check will
bail us out.

We also fix spelling mistakes and the sentence structure.

CC: Yang Zhang <yang.z.zhang@xxxxxxxxx>
CC: Jun Nakajima <jun.nakajima@xxxxxxxxx>
Suggested-by: Jan Beulich <JBeulich@xxxxxxxx>
Signed-off-by: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

I forget if I have release-acked this yet, but just in case:

Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

Release-acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

---
  xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++---
  1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..71522cf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,17 @@ void nvmx_switch_guest(void)
      struct vcpu *v = current;
      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
      struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
     /*
-     * a pending IO emualtion may still no finished. In this case,
+     * A pending IO emulation may still be not finished. In this case,
       * no virtual vmswith is allowed. Or else, the following IO
-     * emulation will handled in a wrong VCPU context.
+     * emulation will be handled in a wrong VCPU context. If there are
+     * no IO backends - PVH guest by itself or a PVH guest with an HVM guest
+     * running inside - we don't want to continue as this setup is not
+     * implemented nor supported as of right now.
       */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
          return;
      /*
       * a softirq may interrupt us between a virtual vmentry is
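
For context on why get_ioreq() can return NULL at all: a PVH guest has no
shared ioreq page, since there is no QEMU io-backend behind it. A hedged
sketch of the shape of such a helper follows - field names are from memory
and illustrative, not necessarily the exact ones in the tree of that era:

    static ioreq_t *get_ioreq(struct vcpu *v)
    {
        struct domain *d = v->domain;

        /* The page QEMU shares with the hypervisor; absent (NULL) for a
         * PVH guest. Field name illustrative. */
        shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;

        /* Returning NULL here, instead of unconditionally computing
         * &p->vcpu_ioreq[v->vcpu_id] from a NULL base, is what makes the
         * NULL checks in the caller meaningful -- the unconditional form
         * yields a near-NULL pointer, consistent with the page fault at
         * 000000000000001e in the trace above. */
        return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
    }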

