[Xen-changelog] [xen stable-4.5] x86/HVM: fix forwarding of internally cached requests
commit 0aabc2855c9e10b61ceb75a6b25ecc6d467e99e5
Author: Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Mon May 9 13:05:42 2016 +0200
Commit: Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Mon May 9 13:05:42 2016 +0200
x86/HVM: fix forwarding of internally cached requests
Forwarding an entire batch to the device model when an individual
iteration of it got rejected by an internal device emulation handler
with X86EMUL_UNHANDLEABLE is wrong: the device model would then handle
all iterations, without the internal handler getting to see any beyond
the one it returned failure for. This causes misbehavior in at least
the MSI-X and VGA code, which need to see all such requests for
internal tracking/caching purposes. Note that this does not apply to
buffered I/O requests.
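As a rough standalone illustration of the effect (not Xen code; the handler,
types and counts below are invented), consider an internal handler that
caches every iteration it gets to see: forwarding the whole rejected batch
lets it see only the first iteration, while forwarding a single iteration
and re-driving the remainder through the intercept path lets it see all of
them.

    #include <stdio.h>

    #define BATCH 4

    static unsigned int internally_seen;

    /* Invented stand-in for an internal MMIO handler: it tracks/caches the
     * iteration but cannot complete it, so it reports "unhandleable". */
    static int internal_handler(unsigned int iter)
    {
        (void)iter;
        internally_seen++;            /* the tracking/caching side effect */
        return -1;                    /* stand-in for X86EMUL_UNHANDLEABLE */
    }

    /* forwarded_per_reject == 0 models the old behaviour (forward the whole
     * remaining batch); 1 models the fixed behaviour (forward one iteration
     * and let the rest come back through the intercept path). */
    static void run(unsigned int forwarded_per_reject)
    {
        unsigned int i = 0, dm_handled = 0;

        internally_seen = 0;
        while ( i < BATCH )
        {
            if ( internal_handler(i) < 0 )
            {
                unsigned int n = forwarded_per_reject
                                 ? forwarded_per_reject : BATCH - i;

                dm_handled += n;      /* the device model completes these */
                i += n;
            }
            else
                i++;
        }
        printf("internal handler saw %u/%u iterations, device model did %u\n",
               internally_seen, BATCH, dm_handled);
    }

    int main(void)
    {
        run(0);                       /* old: handler sees 1 of 4 */
        run(1);                       /* fixed: handler sees all 4 */
        return 0;
    }

Run as-is, the first call reports the handler seeing only one of the four
iterations, the second all four.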
This in turn means that the condition in hvm_process_io_intercept()
deciding when to crash the domain was wrong: since X86EMUL_UNHANDLEABLE
can validly be returned by the individual device handlers, we mustn't
blindly crash the domain when it occurs on an iteration other than the
initial one. Instead we need to distinguish hvm_copy_*_guest_phys()
failures from device-specific ones; the former always need to be fatal
to the domain (i.e. also on the first iteration), since otherwise we
would again end up forwarding to qemu a request which the internal
handler didn't get to see.
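A condensed, hedged paraphrase of that distinction (the enum and function
names here are invented; the actual code is in the intercept.c and io.c
hunks below):

    enum copy_result { COPY_OKAY, COPY_RETRY, COPY_BAD };
    enum emul_result { EMUL_OKAY, EMUL_RETRY, EMUL_UNHANDLEABLE };

    static void crash_domain(void) { /* stand-in for domain_crash() */ }

    /* A failed guest-phys copy is fatal on any iteration, including the
     * first; only a handler's "unhandleable" result may go on (in clipped
     * form) to the device model. */
    static enum emul_result check_copy(enum copy_result c)
    {
        switch ( c )
        {
        case COPY_OKAY:
            return EMUL_OKAY;
        case COPY_RETRY:              /* e.g. paged-out gfn: try again later */
            return EMUL_RETRY;
        default:                      /* bad translation and the like */
            crash_domain();           /* fatal, even on iteration 0 */
            return EMUL_UNHANDLEABLE;
        }
    }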
The adjustment should be okay even for stdvga's MMIO handling:
- if it is not caching, then the accept function would have failed, so
  we won't get into hvm_process_io_intercept();
- if it issued the buffered ioreq, then we only get to the p->count
  reduction if hvm_send_ioreq() actually encountered an error (in which
  case we don't care about the request getting split up).
Also commit 4faffc41d ("x86/hvm: limit reps to avoid the need to handle
retry") went too far in removing code from hvm_process_io_intercept():
When there were successfully handled iterations, the function should
continue to return success with a clipped repeat count.
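Put as a minimal sketch (invented names, mirroring the "if ( i != 0 )" logic
retained in the hunks below): when some iterations completed before the
failure, exactly those are reported as successfully done, and the emulator
re-issues the remainder, which then flows through the intercept path again.

    enum emul_result { EMUL_OKAY, EMUL_RETRY, EMUL_UNHANDLEABLE };

    /* done: iterations completed before the failure; count: in/out repeat
     * count of the request; rc: result of the failing iteration. */
    static enum emul_result finish_batch(unsigned int done, unsigned int *count,
                                         enum emul_result rc)
    {
        if ( done != 0 )
        {
            *count = done;            /* clipped repeat count */
            return EMUL_OKAY;         /* success for the completed part only */
        }
        if ( rc == EMUL_UNHANDLEABLE )
            *count = 1;               /* forward just a single iteration */
        return rc;
    }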
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
x86/HVM: fix forwarding of internally cached requests (part 2)
Commit 96ae556569 ("x86/HVM: fix forwarding of internally cached
requests") wasn't quite complete: hvmemul_do_io() also needs to
propagate up the clipped count. (I really should have re-tested the
forward port resulting in the earlier change, instead of relying on the
testing done on the older version of Xen which the fix was first needed
for.)
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
master commit: 96ae556569b8eaedc0bb242932842c3277b515d8
master date: 2016-03-31 14:52:04 +0200
master commit: 670ee15ac1e3de7c15381fdaab0e531489b48939
master date: 2016-04-28 15:09:26 +0200
---
xen/arch/x86/hvm/emulate.c | 8 +++++++-
xen/arch/x86/hvm/intercept.c | 26 ++++++++++++++++++++++----
xen/arch/x86/hvm/io.c | 4 ++--
3 files changed, 31 insertions(+), 7 deletions(-)
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 5cea077..5c6261a 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -202,11 +202,17 @@ static int hvmemul_do_io(
rc = hvm_portio_intercept(&p);
}
+ /*
+ * p.count may have got reduced (see hvm_mmio_access() and
+ * process_portio_intercept()) - inform our callers.
+ */
+ ASSERT(p.count <= *reps);
+ *reps = p.count;
+
switch ( rc )
{
case X86EMUL_OKAY:
case X86EMUL_RETRY:
- *reps = p.count;
p.state = STATE_IORESP_READY;
if ( !vio->mmio_retry )
{
diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index d52a48c..4c9b170 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -140,8 +140,8 @@ static int hvm_mmio_access(struct vcpu *v,
ASSERT(0);
/* fall through */
default:
- rc = X86EMUL_UNHANDLEABLE;
- break;
+ domain_crash(v->domain);
+ return X86EMUL_UNHANDLEABLE;
}
if ( rc != X86EMUL_OKAY )
break;
@@ -159,6 +159,15 @@ static int hvm_mmio_access(struct vcpu *v,
p->count = i;
rc = X86EMUL_OKAY;
}
+ else if ( rc == X86EMUL_UNHANDLEABLE )
+ {
+ /*
+ * Don't forward entire batches to the device model: This would
+ * prevent the internal handlers from seeing subsequent iterations of
+ * the request.
+ */
+ p->count = 1;
+ }
return rc;
}
@@ -302,8 +311,8 @@ static int process_portio_intercept(portio_action_t action, ioreq_t *p)
ASSERT(0);
/* fall through */
default:
- rc = X86EMUL_UNHANDLEABLE;
- break;
+ domain_crash(current->domain);
+ return X86EMUL_UNHANDLEABLE;
}
if ( rc != X86EMUL_OKAY )
break;
@@ -321,6 +330,15 @@ static int process_portio_intercept(portio_action_t action, ioreq_t *p)
p->count = i;
rc = X86EMUL_OKAY;
}
+ else if ( rc == X86EMUL_UNHANDLEABLE )
+ {
+ /*
+ * Don't forward entire batches to the device model: This would
+ * prevent the internal handlers from seeing subsequent iterations of
+ * the request.
+ */
+ p->count = 1;
+ }
return rc;
}
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 68fb890..b62aab2 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -309,8 +309,8 @@ static int dpci_ioport_write(uint32_t mport, ioreq_t *p)
ASSERT(0);
/* fall through */
default:
- rc = X86EMUL_UNHANDLEABLE;
- break;
+ domain_crash(current->domain);
+ return X86EMUL_UNHANDLEABLE;
}
if ( rc != X86EMUL_OKAY)
break;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.5