
Re: [Xen-devel] [PATCH v2 for 4.5] ioreq-server: handle the lack of a default emulator properly



> -----Original Message-----
> From: Andrew Cooper
> Sent: 29 September 2014 11:59
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxx
> Cc: Keir (Xen.org); Jan Beulich
> Subject: Re: [Xen-devel] [PATCH v2 for 4.5] ioreq-server: handle the lack of a
> default emulator properly
> 
> On 29/09/14 11:21, Paul Durrant wrote:
> > I started porting QEMU over to use the new ioreq server API and hit a
> > problem with PCI bus enumeration. Because, with my patches, QEMU only
> > registers to handle config space accesses for the PCI device it implements,
> > all other attempts by the guest to access 0xcfc go nowhere, and this was
> > causing the vcpu to wedge up because nothing was completing the I/O.
> >
> > This patch introduces an I/O completion handler into the hypervisor for the
> > case where no ioreq server matches a particular request. Read requests are
> > completed with 0xf's in the data buffer; writes and all other I/O req types
> > are ignored.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > Cc: Keir Fraser <keir@xxxxxxx>
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > ---
> > v2: - First non-RFC submission
> >     - Removed warning on unemulated MMIO accesses
> >
> >  xen/arch/x86/hvm/hvm.c |   35 ++++++++++++++++++++++++++++++++---
> >  1 file changed, 32 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 5c7e0a4..822ac37 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -2386,8 +2386,7 @@ static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> >      if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
> >          return NULL;
> >
> > -    if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
> > -         (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
> > +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> >          return d->arch.hvm_domain.default_ioreq_server;
> >
> >      cf8 = d->arch.hvm_domain.pci_cf8;
> > @@ -2618,12 +2617,42 @@ bool_t hvm_send_assist_req_to_ioreq_server(struct hvm_ioreq_server *s,
> >      return 0;
> >  }
> >
> > +static bool_t hvm_complete_assist_req(ioreq_t *p)
> > +{
> > +    switch (p->type)
> > +    {
> > +    case IOREQ_TYPE_COPY:
> > +    case IOREQ_TYPE_PIO:
> > +        if ( p->dir == IOREQ_READ )
> > +        {
> > +            if ( !p->data_is_ptr )
> > +                p->data = ~0ul;
> > +            else
> > +            {
> > +                int i, sign = p->df ? -1 : 1;
> > +                uint32_t data = ~0;
> > +
> > +                for ( i = 0; i < p->count; i++ )
> > +                    hvm_copy_to_guest_phys(p->data + sign * i * p->size, &data,
> > +                                           p->size);
> 
> This is surely bogus for an `ins` which crosses a page boundary?
> 

hvmemul_do_io() is only given a guest physical address and a size, so handling 
of page boundary conditions must be done at a higher level.

  Paul

> ~Andrew
> 
> > +            }
> > +        }
> > +        /* FALLTHRU */
> > +    default:
> > +        p->state = STATE_IORESP_READY;
> > +        hvm_io_assist(p);
> > +        break;
> > +    }
> > +
> > +    return 1;
> > +}
> > +
> >  bool_t hvm_send_assist_req(ioreq_t *p)
> >  {
> >      struct hvm_ioreq_server *s = hvm_select_ioreq_server(current->domain, p);
> >
> >      if ( !s )
> > -        return 0;
> > +        return hvm_complete_assist_req(p);
> >
> >      return hvm_send_assist_req_to_ioreq_server(s, p);
> >  }
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

