[Xen-devel] [PATCH v2 0/3] Skip unneeded VMENTRY & VMEXIT
Changes v2:

Paul Durrant:
> I think the two changes only make sense in combination.
  Folded old #3 and old #5 into #3.
> Actually that comment is not right. The operation is not binned;
> it's just already been done.
  The comment has been dropped.
> I think you can ASSERT(s) here, as you should never get here ...
  Dropped the call to hvm_complete_assist_req() and added an ASSERT.
> So basically collapses down to just this call.
  Yes, so changed to call hvm_send_assist_req_to_ioreq_server()
  directly and removed hvm_send_assist_req(), which had only one caller.

Jan Beulich:
> The change on what to do when hvm_send_assist_req() fails is bad.
  That is correct. Made hvm_has_dm() do full checking so that the
  extra VMEXIT and VMENTRY can be skipped.
> Add Suggested-by to old #4
  Done (now #2).
> hvm_complete_assist_req() be a separate function yet having only
> a single caller ...
  Folded hvm_complete_assist_req() into hvm_has_dm(), which is the
  only place it is called.
Using a new enough QEMU that has:
commit 7665d6ba98e20fb05c420de947c1750fd47e5c07
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date: Tue Jan 20 11:06:19 2015 +0000
Xen: Use the ioreq-server API when available
means that hvmloader and the guest will do PCI config accesses that
will not be sent to QEMU. However, hvm_complete_assist_req() (which
is the routine that does the work) returns 1, and
hvm_send_assist_req() passes that back to the caller, which is taken
to mean "the request was sent to QEMU and needs to be waited for".
This is not the case. hvm_complete_assist_req() has already called
hvm_io_assist(), which has changed the io_state from
HVMIO_awaiting_completion to HVMIO_completed where needed. But
hvmemul_do_io() still thinks it needs to wait in the IOREQ_READ case
and returns X86EMUL_RETRY, costing an unneeded VMEXIT and VMENTRY.
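To make that flow concrete, here is a minimal sketch of the pre-series
decision. It is illustrative only, not the actual Xen source; only the
names mentioned above are used, and the surrounding structure of
hvmemul_do_io() is an assumption:

    /*
     * Minimal sketch of the pre-series flow -- simplified, NOT the
     * actual Xen code.
     */
    if ( hvm_send_assist_req(&p) )
    {
        /*
         * A non-zero return is taken to mean "handed to QEMU, wait
         * for it".  But for an IOREQ_TYPE_PCI_CONFIG access that no
         * ioreq server claims, this value came from
         * hvm_complete_assist_req(), which has already run
         * hvm_io_assist() and moved io_state from
         * HVMIO_awaiting_completion to HVMIO_completed.  There is
         * nothing to wait for, yet the emulator returns
         * X86EMUL_RETRY and the guest pays for an extra
         * VMEXIT/VMENTRY round trip.
         */
        rc = X86EMUL_RETRY;
    }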
[PATCH 1/5] DEBUG: xentrace: Add debug of HANDLE_PIO
-- No need to apply
[PATCH 2/5] xentrace: Adjust IOPORT & MMIO format
-- Simple bugfix. xentrace_format is not converting these correctly
[PATCH 3/5] hvmemul_do_io: If the send to the ioreq server failed do
-- Simple bugfix. Do the same thing as when hvm_has_dm() returns
a zero.
[PATCH 4/5] hvm_complete_assist_req: We should not be able to get
-- This ASSERT was suggested by Paul Durrant. Since I was in the
file anyway, I just added it.
[PATCH 5/5] hvm_complete_assist_req: Tell caller we failed to send
-- The real fix (see the sketch just below this list). Depends on
[PATCH 3/5] hvmemul_do_io: If the send ...
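For reference, the combined intent of [PATCH 3/5] and [PATCH 5/5]
(together with the hvm_has_dm() rework in this v2) looks roughly like
the sketch below. This is not the literal diff; in particular, how the
check is split between hvm_has_dm() and hvmemul_do_io(), and the use
of hvm_select_ioreq_server() here, are assumptions made purely for
illustration:

    /*
     * Rough sketch of the intended post-series behaviour -- not the
     * literal patch.  Signatures and the split of work between
     * hvm_has_dm() and hvmemul_do_io() are simplified assumptions.
     */
    if ( !hvm_has_dm(currd) )            /* now does the full check */
    {
        /* Nothing will back this request: complete locally, no wait. */
        rc = X86EMUL_OKAY;
    }
    else
    {
        struct hvm_ioreq_server *s = hvm_select_ioreq_server(currd, &p);

        if ( s == NULL || !hvm_send_assist_req_to_ioreq_server(s, &p) )
            rc = X86EMUL_OKAY;   /* failed send treated like "no DM" */
        else
            rc = X86EMUL_RETRY;  /* genuinely sent to QEMU: wait */
    }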
Here is the "before" xentrace output, captured with "[PATCH 1/5] DEBUG:
xentrace: Add debug of HANDLE_PIO" applied (which I do not expect to be
applied upstream; it is listed first because that is the order you need
to use to reproduce this xentrace output):
CPU0 685992932190 (+ 2292) VMEXIT [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU0 0 (+ 0) HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 5 ret = 0 ]
CPU0 685992934938 (+ 2748) vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU0 685992935526 (+ 588) VMENTRY
CPU0 685992937650 (+ 2124) VMEXIT [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU0 0 (+ 0) IOPORT_READ [ port = 0x0cfe, data = 0x00000000 ]
CPU0 0 (+ 0) HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 0 ret = 1 ]
CPU0 685992940248 (+ 2598) vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU0 685992940968 (+ 720) VMENTRY
And after:
CPU2 1028654638584 (+ 2388) VMEXIT [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU2 0 (+ 0) IOPORT_READ [ port = 0x0cfe, data = 0xffffffff ]
CPU2 0 (+ 0) HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 0 ret = 1 ]
CPU2 1028654641932 (+ 3348) vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU2 1028654642586 (+ 654) VMENTRY
Don Slutz (3):
xentrace: Adjust IOPORT & MMIO format
hvm_complete_assist_req: We should not be able to get here on
IOREQ_TYPE_PCI_CONFIG
hvm_has_dm: Do a full check for backing DM
tools/xentrace/formats | 8 ++++----
xen/arch/x86/hvm/emulate.c | 8 ++++++--
xen/arch/x86/hvm/hvm.c | 26 +++++++++-----------------
xen/include/asm-x86/hvm/hvm.h | 6 ++++--
4 files changed, 23 insertions(+), 25 deletions(-)
--
1.8.4