
Re: [Xen-devel] [PATCH] x86/HVM: fix interaction between internal and extern emulation



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf
> Of Paul Durrant
> Sent: 28 November 2017 11:31
> To: 'Jan Beulich' <JBeulich@xxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Julien Grall
> <julien.grall@xxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH] x86/HVM: fix interaction between internal
> and extern emulation
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > Sent: 28 November 2017 11:26
> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > Cc: Julien Grall <julien.grall@xxxxxxx>; Andrew Cooper
> > <Andrew.Cooper3@xxxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> > Subject: RE: [PATCH] x86/HVM: fix interaction between internal and extern
> > emulation
> >
> > >>> On 28.11.17 at 12:06, <Paul.Durrant@xxxxxxxxxx> wrote:
> > > Yes, it appears that mmio_retry is only set when the underlying
> > > emulation returned X86EMUL_OKAY but not all reps were completed. If
> > > the underlying emulation did not return X86EMUL_RETRY then I can't
> > > figure out why vio->io_completion should need to be set to anything
> > > other than HVMIO_no_completion since any other return value
> > > indicates there should be nothing pending.
> >
> > So am I getting it right that you're suggesting to remove the
> > mmio_retry part of the condition in hvm_emulate_one_insn()?
> > That looks like it might work (I was previously only considering
> > getting rid of mmio_retry altogether, and that didn't look like a
> > viable route).
> 
> Yes, that's what I'm suggesting. I really can't see why it is needed. It could
> have been a mistake in my original patches or a semantic change in a
> subsequent patch, but it certainly looks wrong in the current context.
> Andrew has just sent me his xtf repro, so I'll give the change a go with that.

Yes, this patch fixed the problem for me. I'll do some more tests to check for 
collateral damage now... If it's all good, do you want me to submit it or do 
you want to send it as a v2 of your patch?
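
For reference, the check that remains, hvm_vcpu_io_need_completion(), only
cares about whether an ioreq has actually gone out and still needs its
result folded back into the emulation state, i.e. (roughly, paraphrasing the
header rather than quoting it verbatim):

static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
{
    /* An ioreq has been sent out and its result has yet to be picked up. */
    return vio->io_req.state == STATE_IOREQ_READY &&
           !vio->io_req.data_is_ptr;
}

so with mmio_retry dropped from the condition, io_completion only gets set
when there really is something outstanding.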

  Paul

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index e449b4196e..9d9e1b0e40 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -88,7 +88,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)

     rc = hvm_emulate_one(&ctxt);

-    if ( hvm_vcpu_io_need_completion(vio) || vio->mmio_retry )
+    if ( hvm_vcpu_io_need_completion(vio) )
         vio->io_completion = HVMIO_mmio_completion;
     else
         vio->mmio_access = (struct npfec){};
diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index 03dea6c0fc..11211c8cd8 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -103,7 +103,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt)

     rc = hvm_emulate_one(hvmemul_ctxt);

-    if ( hvm_vcpu_io_need_completion(vio) || vio->mmio_retry )
+    if ( hvm_vcpu_io_need_completion(vio) )
         vio->io_completion = HVMIO_realmode_completion;

     if ( rc == X86EMUL_UNHANDLEABLE )
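
To restate why mmio_retry can be dropped from both conditions above (just a
schematic summary of the discussion, not code from the tree):

/*
 * rc == X86EMUL_RETRY: an ioreq is outstanding, hvm_vcpu_io_need_completion()
 *     is true, and io_completion must be set so the state is picked up when
 *     emulation resumes.
 * rc == X86EMUL_OKAY with reps remaining: mmio_retry is set, but nothing is
 *     pending externally, so io_completion can stay HVMIO_no_completion.
 * any other rc: nothing pending, no completion needed.
 */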

> 
>   Paul
> 
> >
> > Jan
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

