Re: [RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common
On 08.08.20 00:50, Stefano Stabellini wrote:

Hi Stefano

> On Fri, 7 Aug 2020, Oleksandr wrote:
>> On 06.08.20 03:37, Stefano Stabellini wrote:
>>
>> Hi Stefano
>>
>> Trying to simulate the IO_RETRY handling mechanism (according to the
>> model below) I continuously get IO_RETRY from try_fwd_ioserv() ...
>
> OK, thanks for the details. My interpretation seems to be correct.
>
> In which case, it looks like xen/arch/arm/io.c:try_fwd_ioserv should
> return IO_RETRY. Then, xen/arch/arm/traps.c:do_trap_stage2_abort_guest
> also needs to handle try_handle_mmio returning IO_RETRY the first time
> around, and IO_HANDLED after QEMU does its job.
>
> What should do_trap_stage2_abort_guest do on IO_RETRY? Simply return
> early and let the scheduler do its job? Something like:
>
>             enum io_state state = try_handle_mmio(regs, hsr, gpa);
>
>             switch ( state )
>             {
>             case IO_ABORT:
>                 goto inject_abt;
>             case IO_HANDLED:
>                 advance_pc(regs, hsr);
>                 return;
>             case IO_RETRY:
>                 /* finish later */
>                 return;
>             case IO_UNHANDLED:
>                 /* IO unhandled, try another way to handle it. */
>                 break;
>             default:
>                 ASSERT_UNREACHABLE();
>             }
>
> Then, xen/arch/arm/ioreq.c:handle_mmio() gets called by
> handle_hvm_io_completion() after QEMU completes the emulation. Today,
> handle_mmio just sets the user register with the read value.
>
> But it would be better if it called the original function
> do_trap_stage2_abort_guest again to actually retry the original
> operation. This time do_trap_stage2_abort_guest calls try_handle_mmio()
> and gets IO_HANDLED instead of IO_RETRY,
>
>> I may miss some important point, but I failed to see why try_handle_mmio
>> (try_fwd_ioserv) will return IO_HANDLED instead of IO_RETRY at this
>> stage. Or does the current try_fwd_ioserv() logic need rework?
>
> I think you should check ioreq->state in try_fwd_ioserv(): if the result
> is ready, then ioreq->state should be STATE_IORESP_READY, and you can
> return IO_HANDLED.

Indeed!
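For context, the completion path referred to above, where handle_hvm_io_completion() calls back into handle_mmio() once the device model has produced a response, looks roughly like the sketch below in the x86 code this series moves to common. This is a simplified paraphrase rather than the actual source: the wait for the ioreq server and the response bookkeeping are elided, and the field names follow the x86 layout, which the Arm port may not share.

    bool handle_hvm_io_completion(struct vcpu *v)
    {
        /* Field path as on x86; the Arm equivalent may differ. */
        struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
        enum hvm_io_completion io_completion;

        /* ... wait for the ioreq server and latch its response ... */

        io_completion = vio->io_completion;
        vio->io_completion = HVMIO_no_completion;

        switch ( io_completion )
        {
        case HVMIO_no_completion:
            break;

        case HVMIO_mmio_completion:
            /* Re-enter the MMIO path now that the data is available. */
            return handle_mmio();

        case HVMIO_pio_completion:
            return handle_pio(vio->io_req.addr, vio->io_req.size,
                              vio->io_req.dir);

        default:
            ASSERT_UNREACHABLE();
        }

        return true;
    }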
I had just come to the same conclusion when I saw your answer :)

This is a dirty test patch:

---
 xen/arch/arm/io.c               | 12 ++++++++++++
 xen/arch/arm/ioreq.c            | 12 ++++++++++++
 xen/arch/arm/traps.c            |  6 ++++--
 xen/include/asm-arm/hvm/ioreq.h |  2 ++
 xen/include/asm-arm/traps.h     |  3 +++
 5 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 436f669..65a08f8 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -130,6 +130,10 @@ static enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
     {
     case STATE_IOREQ_NONE:
         break;
+
+    case STATE_IORESP_READY:
+        return IO_HANDLED;
+
     default:
         printk("d%u wrong state %u\n", v->domain->domain_id,
                vio->io_req.state);
@@ -156,9 +160,11 @@ static enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
     else
         vio->io_completion = HVMIO_mmio_completion;
 
+#if 0
     /* XXX: Decide what to do */
     if ( rc == IO_RETRY )
         rc = IO_HANDLED;
+#endif
 
     return rc;
 }
@@ -185,6 +191,12 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
 
 #ifdef CONFIG_IOREQ_SERVER
     rc = try_fwd_ioserv(regs, v, &info);
+    if ( rc == IO_HANDLED )
+    {
+        printk("HANDLED %s[%d]\n", __func__, __LINE__);
+        handle_mmio_finish();
+    } else
+        printk("RETRY %s[%d]\n", __func__, __LINE__);
 #endif
 
     return rc;
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index 8f60c41..c8ed454 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -33,8 +33,20 @@
 #include <public/hvm/dm_op.h>
 #include <public/hvm/ioreq.h>
 
+#include <asm/traps.h>
+
 bool handle_mmio(void)
 {
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const union hsr hsr = { .bits = regs->hsr };
+
+    do_trap_stage2_abort_guest(regs, hsr);
+
+    return true;
+}
+
+bool handle_mmio_finish(void)
+{
     struct vcpu *v = current;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     const union hsr hsr = { .bits = regs->hsr };
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ea472d1..3493d77 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1882,7 +1882,7 @@ static bool try_map_mmio(gfn_t gfn)
     return !map_regions_p2mt(d, gfn, 1, mfn, p2m_mmio_direct_c);
 }
 
-static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
+void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
                                        const union hsr hsr)
 {
     /*
@@ -1965,11 +1965,13 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
         case IO_HANDLED:
             advance_pc(regs, hsr);
             return;
+        case IO_RETRY:
+            /* finish later */
+            return;
         case IO_UNHANDLED:
             /* IO unhandled, try another way to handle it. */
             break;
         default:
-            /* XXX: Handle IO_RETRY */
             ASSERT_UNREACHABLE();
         }
     }
diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/ioreq.h
index 392ce64..fb4684d 100644
--- a/xen/include/asm-arm/hvm/ioreq.h
+++ b/xen/include/asm-arm/hvm/ioreq.h
@@ -27,6 +27,8 @@
 
 bool handle_mmio(void);
 
+bool handle_mmio_finish(void);
+
 static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
 {
     /* XXX */
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c378..392fdb1 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -40,6 +40,9 @@ void advance_pc(struct cpu_user_regs *regs, const union hsr hsr);
 
 void inject_undef_exception(struct cpu_user_regs *regs, const union hsr hsr);
 
+void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
+                                const union hsr hsr);
+
 /* read as zero and write ignore */
 void handle_raz_wi(struct cpu_user_regs *regs, int regidx, bool read,
                    const union hsr hsr, int min_el);
-- 
2.7.4

> That is assuming that you are looking at the live version of the ioreq
> shared with QEMU instead of a private copy of it, which I am not sure
> about. Looking at try_fwd_ioserv(), it would seem that vio->io_req is
> just a copy? The live version is the one returned by get_ioreq()?
>
> Even in handle_hvm_io_completion, instead of setting vio->io_req.state
> to STATE_IORESP_READY by hand, it would be better to look at the live
> version of the ioreq, because QEMU will have already set ioreq->state to
> STATE_IORESP_READY (hw/i386/xen/xen-hvm.c:cpu_handle_ioreq).

I need to recheck that. Thank you.

-- 
Regards,
Oleksandr Tyshchenko
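One possible way to act on the open question above about the live ioreq versus the vio->io_req copy would be a small helper that consults the slot in the page shared with the device model. The sketch below is only an illustration: ioreq_response_ready() is a made-up name, and get_ioreq() stands in for whatever accessor the common ioreq code ends up exporting to Arm for finding this vCPU's slot; neither is existing API.

    /*
     * Hypothetical helper: look at the live ioreq slot shared with the
     * device model rather than the private vio->io_req copy.  get_ioreq()
     * is an assumption here, standing in for an accessor that returns this
     * vCPU's slot in the shared ioreq page of server 's'.
     */
    static bool ioreq_response_ready(struct hvm_ioreq_server *s, struct vcpu *v)
    {
        const ioreq_t *p = get_ioreq(s, v);

        /*
         * QEMU's cpu_handle_ioreq() (hw/i386/xen/xen-hvm.c) sets state to
         * STATE_IORESP_READY once it has completed the request and, for a
         * read, filled in p->data.
         */
        return p->state == STATE_IORESP_READY;
    }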