
Re: [Xen-devel] [PATCH] x86/HVM: fix interaction between internal and extern emulation



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 28 November 2017 10:17
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Julien Grall <julien.grall@xxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: RE: [PATCH] x86/HVM: fix interaction between internal and extern emulation
> 
> >>> On 28.11.17 at 11:05, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 28 November 2017 10:02
> >> >>> On 28.11.17 at 10:49, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >> Sent: 27 November 2017 08:29
> >> >> handle_hvm_io_completion() is being involved in resuming from requests
> >> >> sent to a device model only, while re-invocation of internally handled
> >> >> I/O which couldn't be handled in one go simply re-starts the affected
> >> >> instruction. When an internally handled split request is being followed
> >> >> by one sent to a device model, so far nothing reset vio->io_completion,
> >> >> leading to an MMIO emulation attempt on the next instruction _after_ the
> >> >> one successfully sent to qemu if that one doesn't itself require
> >> >> completion handling.
> >> >>
> >> >> Since only repeated string instructions are affected, strictly speaking
> >> >> the adjustment to handle_pio() isn't needed. Do it nevertheless for
> >> >> consistency as well as to avoid the lack thereof becoming an issue in
> >> >> the future; put the main change in generic enough a place to also cover
> >> >> VMX real mode emulation.
> >> >>
> >> >> Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> >> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> >> ---
> >> >> It has been puzzling me for years how we could get away without clearing
> >> >> vio->io_completion in any more central place, i.e. other than as part of
> >> >> handling the completion.
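
To make the sequence described above concrete, here is a standalone sketch of
the failure mode -- simplified pseudo-C, not the actual Xen code; the
struct/field/enum names loosely mirror the real ones and the helper bodies are
invented purely for illustration:

/*
 * Sketch only: two steps showing how a stale vio->io_completion value,
 * left behind by internally handled I/O, fires on the next external request.
 */
#include <stdio.h>
#include <stdbool.h>

enum hvm_io_completion { HVMIO_no_completion, HVMIO_mmio_completion };

struct hvm_vcpu_io {
    enum hvm_io_completion io_completion;
    bool mmio_retry;          /* repeated string op couldn't finish in one go */
};

/*
 * Step 1: an internally handled, split string MMIO access.  io_completion
 * gets set because mmio_retry is up, but since no device model is involved
 * the insn is simply restarted and handle_hvm_io_completion() never runs,
 * so nothing clears the state again.
 */
static void internal_split_mmio(struct hvm_vcpu_io *vio)
{
    vio->mmio_retry = true;
    vio->io_completion = HVMIO_mmio_completion;
}

/*
 * Step 2: a later insn is sent to qemu and completes there without needing
 * any completion work of its own.  Resuming from it still goes through
 * handle_hvm_io_completion(), which now acts on the stale value.
 */
static void handle_hvm_io_completion(struct hvm_vcpu_io *vio)
{
    enum hvm_io_completion c = vio->io_completion;

    vio->io_completion = HVMIO_no_completion;
    if ( c == HVMIO_mmio_completion )
        printf("stale state: MMIO completion emulation runs on the insn"
               " *after* the one qemu just handled\n");
}

int main(void)
{
    struct hvm_vcpu_io vio = { .io_completion = HVMIO_no_completion };

    internal_split_mmio(&vio);      /* leaves io_completion stale */
    handle_hvm_io_completion(&vio); /* fires on the next external request */
    return 0;
}
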
> >> >
> >> > The idea is that, because HVMIO_no_completion is zero and thus the initial
> >> > value of vio->io_completion, no explicit initialization is required. If it is
> >> > set to anything other than that then there needs to be a call to
> >> > handle_hvm_io_completion() which will duly set it back to HVMIO_no_completion.
> >> > So the question is how it is being set and why does this not result in the
> >> > appropriate completion call? I fear this patch is covering up a more
> >> > fundamental problem with the state model in certain cases.
> >>
> >> Well - see the patch description: vio->mmio_retry being set after an
> >> emulation means hvm_emulate_one_insn() setting ->io_completion
> >> to HVMIO_mmio_completion no matter whether the request needs to
> >> go to qemu or is being handled internally.
> >
> > Well that sounds like the problem then.
> >
> >> Internally handled requests,
> >> as explained, don't need a completion to be run, though, and it will
> >> be the exception rather than the rule that handle_hvm_io_completion()
> >> would be invoked in such a case, causing ->io_completion to be cleared
> >> again.
> >>
> >> Quite the contrary to what you say, I don't see why ->io_completion
> >> wasn't zapped the way the patch does it from the beginning. Nothing
> >> good can come from stale state being used _regardless_ of whether
> >> the most recent operation was handled externally or internally.
> >
> > Because the state should never be stale. It sounds like use of mmio_retry is
> > being overloaded and that's leading to this issue.
> 
> Looking forward to an alternative (preferably not overly intrusive)
> patch proposal then, if you dislike this one.

It would definitely be good to only reset io_completion when it is clear that 
handle_hvm_io_completion() is not going to be called (i.e. for internally 
handled I/O) and perhaps even add ASSERTs in _hvm_emulate_one() and 
handle_pio().
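
Roughly the shape that could take, just to make the suggestion concrete
(untested, and the exact placement would need checking against the real flow):

    /* where an internally handled request is known not to reach
     * handle_hvm_io_completion(), i.e. on the internal-handling path: */
    vio->io_completion = HVMIO_no_completion;

    /* and, on entry to _hvm_emulate_one() / handle_pio(): */
    ASSERT(vio->io_completion == HVMIO_no_completion);
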

  Paul

> 
> Jan



 

