[Xen-changelog] [xen master] x86/HVM: suppress I/O completion for port output
commit 91afb8139f954a06e564d4915bc7d6a8575e2812
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Wed Apr 11 10:42:24 2018 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Wed Apr 11 10:42:24 2018 +0200

    x86/HVM: suppress I/O completion for port output

    We don't break up port requests in case they cross emulation entity
    boundaries, and a write to an I/O port is necessarily the last
    operation of an instruction instance, so there's no need to re-invoke
    the full emulation path upon receiving the result from an external
    emulator.

    In case we want to properly split port accesses in the future, this
    change will need to be reverted, as it would prevent things working
    correctly when e.g. the first part needs to go to an external
    emulator, while the second part is to be handled internally.

    While this addresses the reported problem of Windows paging out the
    buffer underneath an in-process REP OUTS, it does not address the
    wider problem of the re-issued insn (to the insn emulator) being
    prone to raise an exception (#PF) during a replayed, previously
    successful memory access (we only record prior MMIO accesses).

    Leaving aside the problem being worked around here, I think the
    performance aspect alone is a good reason to change the behavior.

    Also take the opportunity to change hvm_vcpu_io_need_completion()'s
    return type from bool_t to bool.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Release-acked-by: Juergen Gross <jgross@xxxxxxxx>
---
 xen/arch/x86/hvm/emulate.c     | 6 +++++-
 xen/include/asm-x86/hvm/vcpu.h | 6 ++++--
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 213426db59..f8fca57254 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -282,7 +282,11 @@ static int hvmemul_do_io(
             rc = hvm_send_ioreq(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                 vio->io_req.state = STATE_IOREQ_NONE;
-            else if ( data_is_addr )
+            /*
+             * This effectively is !hvm_vcpu_io_need_completion(vio), slightly
+             * optimized and using local variables we have available.
+             */
+            else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )
                 rc = X86EMUL_OKAY;
         }
         break;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 2a0403a7f4..fad7b66cca 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,10 +91,12 @@ struct hvm_vcpu_io {
     const struct g2m_ioport *g2m_ioport;
 };
 
-static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
+static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
 {
     return (vio->io_req.state == STATE_IOREQ_READY) &&
-           !vio->io_req.data_is_ptr;
+           !vio->io_req.data_is_ptr &&
+           (vio->io_req.type != IOREQ_TYPE_PIO ||
+            vio->io_req.dir != IOREQ_WRITE);
 }
 
 struct nestedvcpu {
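To see the revised predicate in isolation, here is a minimal, self-contained
C sketch. The enum and struct stubs below only mirror the ioreq fields that
hvm_vcpu_io_need_completion() inspects; they are not the real Xen headers,
the constant values are illustrative rather than the ABI ones, and
need_completion()/struct ioreq are made-up stand-ins for the real function
and struct hvm_vcpu_io:

/* Standalone sketch of the completion check after this patch
 * (illustrative stubs, not the Xen definitions). */
#include <stdbool.h>
#include <stdio.h>

enum ioreq_state { STATE_IOREQ_NONE, STATE_IOREQ_READY };
enum ioreq_type  { IOREQ_TYPE_PIO, IOREQ_TYPE_COPY };  /* COPY: MMIO-style */
enum ioreq_dir   { IOREQ_READ, IOREQ_WRITE };

struct ioreq {
    enum ioreq_state state;
    enum ioreq_type  type;
    enum ioreq_dir   dir;
    bool             data_is_ptr;  /* data field holds a guest address */
};

/* Stand-in for hvm_vcpu_io_need_completion() as it reads after the patch. */
static bool need_completion(const struct ioreq *req)
{
    return req->state == STATE_IOREQ_READY &&
           !req->data_is_ptr &&
           (req->type != IOREQ_TYPE_PIO || req->dir != IOREQ_WRITE);
}

int main(void)
{
    /* Port output (e.g. OUT): the instruction is architecturally done
     * once the request is sent, so no completion is needed. */
    struct ioreq pio_out = { STATE_IOREQ_READY, IOREQ_TYPE_PIO, IOREQ_WRITE, false };
    /* Port input (e.g. IN): the device model's result must still be fed
     * back into register state, so completion is still needed. */
    struct ioreq pio_in  = { STATE_IOREQ_READY, IOREQ_TYPE_PIO, IOREQ_READ, false };
    /* MMIO write: still needs completion; only port output is exempt. */
    struct ioreq mmio_wr = { STATE_IOREQ_READY, IOREQ_TYPE_COPY, IOREQ_WRITE, false };

    printf("PIO  OUT: %d\n", need_completion(&pio_out));  /* prints 0 */
    printf("PIO  IN : %d\n", need_completion(&pio_in));   /* prints 1 */
    printf("MMIO WR : %d\n", need_completion(&mmio_wr));  /* prints 1 */
    return 0;
}

The third case is why the emulate.c hunk guards its short-circuit with
!is_mmio: only a port write carries no result that the instruction emulator
still has to consume.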