
Re: [Xen-devel] [PATCH 1/3] x86/HVM: fix forwarding of internally cached requests



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 24 March 2016 12:52
> To: Paul Durrant
> Cc: Andrew Cooper; Chang Jianzhong; xen-devel; Keir (Xen.org)
> Subject: RE: [PATCH 1/3] x86/HVM: fix forwarding of internally cached
> requests
> 
> >>> On 24.03.16 at 12:49, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 24 March 2016 11:29
> >> @@ -196,8 +196,22 @@ int hvm_process_io_intercept(const struc
> >>          }
> >>      }
> >>
> >> -    if ( i != 0 && rc == X86EMUL_UNHANDLEABLE )
> >> +    if ( unlikely(rc < 0) )
> >>          domain_crash(current->domain);
> >> +    else if ( i )
> >> +    {
> >> +        p->count = i;
> >> +        rc = X86EMUL_OKAY;
> >> +    }
> >> +    else if ( rc == X86EMUL_UNHANDLEABLE )
> >> +    {
> >> +        /*
> >> +         * Don't forward entire batches to the device model: This would
> >>         * prevent the internal handlers from seeing subsequent iterations of
> >> +         * the request.
> >> +         */
> >> +        p->count = 1;
> >
> > I guess this is OK. If stdvga is not caching then the accept function would
> > have failed, so you won't get here; and if it sends the buffered ioreq then
> > you still don't get here because it returns X86EMUL_OKAY.
> 
> Good that you thought of this - I had forgotten that stdvga's
> MMIO handling now takes this same code path rather than a
> fully separate one.

Yes, that's why I was a little worried before I saw the code :-)
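(For anyone following the thread: below is a minimal stand-alone model of the
two stdvga cases just described. It is purely illustrative and not the real
stdvga code - the struct, function names and the caching flag are invented for
the example.)

    /* Hypothetical model (not the real stdvga code) of the two cases
     * above: either the accept check rejects the access because the
     * device is not caching, so the request never enters the internal
     * handler, or the handler buffers the write itself and returns
     * X86EMUL_OKAY - so neither case reaches the "count = 1" clamp. */
    #include <stdbool.h>
    #include <stdio.h>

    #define X86EMUL_OKAY 0

    struct vga_model { bool caching; };

    static bool vga_accept(const struct vga_model *s)
    {
        /* Not caching: refuse the intercept; the request is forwarded
         * to the device model without touching this handler. */
        return s->caching;
    }

    static int vga_write(struct vga_model *s)
    {
        /* Stands in for sending a buffered ioreq: the handler consumes
         * the access and reports success. */
        (void)s;
        return X86EMUL_OKAY;
    }

    int main(void)
    {
        struct vga_model s = { .caching = false };

        if ( !vga_accept(&s) )
            printf("not accepted: forwarded to the device model\n");
        else
            printf("accepted and handled: rc=%d\n", vga_write(&s));

        return 0;
    }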

> I guess I'll steal some of the wording above
> for the v2 commit message.
> 

Sure.
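For reference, here is a small stand-alone model of what the patched tail of
the dispatch loop does. It is just an illustration built from the hunk above,
not the actual Xen code; the struct and helper names are invented for the
example.

    /* Stand-alone model of the patched logic: after i of p->count
     * iterations were handled internally, either crash on a hard error,
     * report the partial count as success, or - if nothing was handled -
     * forward only a single iteration so later iterations are re-issued
     * and can still hit the internal handlers. */
    #include <stdio.h>

    #define X86EMUL_OKAY          0
    #define X86EMUL_UNHANDLEABLE  1

    struct ioreq_model { unsigned int count; };

    static int finish_intercept(struct ioreq_model *p, unsigned int i, int rc)
    {
        if ( rc < 0 )
            printf("hard error: domain_crash()\n");
        else if ( i )
        {
            /* Some iterations were consumed internally. */
            p->count = i;
            rc = X86EMUL_OKAY;
        }
        else if ( rc == X86EMUL_UNHANDLEABLE )
            p->count = 1;          /* forward one iteration only */

        return rc;
    }

    int main(void)
    {
        struct ioreq_model p = { .count = 16 };
        int rc = finish_intercept(&p, 0, X86EMUL_UNHANDLEABLE);

        printf("rc=%d, iterations forwarded to the device model: %u\n",
               rc, p.count);
        return 0;
    }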

  Paul

> Jan

