Re: [RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common
On Sat, 8 Aug 2020, Oleksandr wrote:
> On 08.08.20 00:50, Stefano Stabellini wrote:
> > On Fri, 7 Aug 2020, Oleksandr wrote:
> > > On 06.08.20 03:37, Stefano Stabellini wrote:
> > >
> > > Hi Stefano
> > >
> > > Trying to simulate the IO_RETRY handling mechanism (according to the
> > > model below) I continuously get IO_RETRY from try_fwd_ioserv() ...
> >
> > OK, thanks for the details. My interpretation seems to be correct.
> >
> > > > In which case, it looks like xen/arch/arm/io.c:try_fwd_ioserv should
> > > > return IO_RETRY. Then, xen/arch/arm/traps.c:do_trap_stage2_abort_guest
> > > > also needs to handle try_handle_mmio returning IO_RETRY the first
> > > > time around, and IO_HANDLED after QEMU does its job.
> > > >
> > > > What should do_trap_stage2_abort_guest do on IO_RETRY? Simply return
> > > > early and let the scheduler do its job? Something like:
> > > >
> > > >     enum io_state state = try_handle_mmio(regs, hsr, gpa);
> > > >
> > > >     switch ( state )
> > > >     {
> > > >     case IO_ABORT:
> > > >         goto inject_abt;
> > > >     case IO_HANDLED:
> > > >         advance_pc(regs, hsr);
> > > >         return;
> > > >     case IO_RETRY:
> > > >         /* finish later */
> > > >         return;
> > > >     case IO_UNHANDLED:
> > > >         /* IO unhandled, try another way to handle it. */
> > > >         break;
> > > >     default:
> > > >         ASSERT_UNREACHABLE();
> > > >     }
> > > >
> > > > Then, xen/arch/arm/ioreq.c:handle_mmio() gets called by
> > > > handle_hvm_io_completion() after QEMU completes the emulation. Today,
> > > > handle_mmio just sets the user register with the read value.
> > > >
> > > > But it would be better if it called the original function
> > > > do_trap_stage2_abort_guest again to actually retry the original
> > > > operation. This time do_trap_stage2_abort_guest calls try_handle_mmio()
> > > > and gets IO_HANDLED instead of IO_RETRY,
> > > I may miss some important point, but I failed to see why try_handle_mmio
> > > (try_fwd_ioserv) will return IO_HANDLED instead of IO_RETRY at this
> > > stage. Or does the current try_fwd_ioserv() logic need rework?
> > I think you should check ioreq->state in try_fwd_ioserv(): if the
> > result is ready, then ioreq->state should be STATE_IORESP_READY, and you
> > can return IO_HANDLED.
> >
> > That is assuming that you are looking at the live version of the ioreq
> > shared with QEMU instead of a private copy of it, which I am not sure
> > about. Looking at try_fwd_ioserv() it would seem that vio->io_req is
> > just a copy? The live version is returned by get_ioreq()?
> If I understand the code correctly, then indeed, get_ioreq() returns the
> live version shared with the emulator.
> The desired state change (STATE_IORESP_READY) that hvm_wait_for_io() is
> actually waiting for is set here (in my case):
> https://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git;a=blob;f=demu.c;h=f785b394d0cf141dffa05bdddecf338214358aea;hb=refs/heads/master#l698
>
> > Even in handle_hvm_io_completion, instead of setting vio->io_req.state
> > to STATE_IORESP_READY by hand, it would be better to look at the live
> > version of the ioreq, because QEMU will have already set ioreq->state
> > to STATE_IORESP_READY (hw/i386/xen/xen-hvm.c:cpu_handle_ioreq).
> It seems that after detecting STATE_IORESP_READY in hvm_wait_for_io() the
> state of the live version is set to STATE_IOREQ_NONE immediately, so
> looking at the live version down in handle_hvm_io_completion() or in
> try_fwd_ioserv() shows us nothing, I am afraid.

I see. That is because we want to "free" the ioreq as soon as possible,
which is good.
handle_hvm_io_completion also sets vio->io_req.state to STATE_IORESP_READY,
so our private copy is still set to STATE_IORESP_READY. Thus, try_fwd_ioserv
should be able to do the right thing by simply reading vio->io_req.state:
it can return IO_HANDLED when the state is STATE_IORESP_READY, right?
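
Something along these lines, as a minimal sketch of the idea (not a drop-in
patch: the signature follows the thread, but the vio field path, here assumed
to be the x86-style v->arch.hvm.hvm_io, may well differ in the Arm port):

    static enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
                                        struct vcpu *v, mmio_info_t *info)
    {
        /* Assumed field path; the Arm port may keep vio elsewhere. */
        struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;

        /*
         * Second pass: handle_hvm_io_completion() has already copied the
         * emulator's answer into our private copy, so the access can be
         * completed now instead of being forwarded again.
         */
        if ( vio->io_req.state == STATE_IORESP_READY )
            return IO_HANDLED;

        /*
         * First pass: fill in vio->io_req and forward it to the ioreq
         * server, as the current code already does, then come back here
         * once QEMU has produced the response.
         */
        return IO_RETRY;
    }

With that, do_trap_stage2_abort_guest could keep the switch above unchanged:
the first trap returns IO_RETRY, and the retry after completion returns
IO_HANDLED.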