
Re: [Xen-devel] [PATCH v4 5/8] ioreq-server: add support for multiple servers



> -----Original Message-----
> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> George Dunlap
> Sent: 03 April 2014 16:33
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Ian Campbell; Jan Beulich; Stefano
> Stabellini
> Subject: Re: [Xen-devel] [PATCH v4 5/8] ioreq-server: add support for
> multiple servers
> 
> On Wed, Apr 2, 2014 at 4:11 PM, Paul Durrant <paul.durrant@xxxxxxxxxx>
> wrote:
> > The previous single ioreq server that was created on demand now
> > becomes the default server and an API is created to allow secondary
> > servers, which handle specific IO ranges or PCI devices, to be added.
> >
> > When the guest issues an IO the list of secondary servers is checked
> > for a matching IO range or PCI device. If none is found then the IO
> > is passed to the default server.
> >
> > Secondary servers use guest pages to communicate with emulators, in
> > the same way as the default server. These pages need to be in the
> > guest physmap otherwise there is no suitable reference that can be
> > queried by an emulator in order to map them. Therefore a pool of
> > pages in the current E820 reserved region, just below the special
> > pages is used. Secondary servers allocate from and free to this pool
> > as they are created and destroyed.
> >
> > The size of the pool is currently hardcoded in the domain build at a
> > value of 8. This should be sufficient for now and both the location and
> > size of the pool can be modified in future without any need to change the
> > API.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > ---
> >  tools/libxc/xc_domain.c          |  175 +++++++
> >  tools/libxc/xc_domain_restore.c  |   27 +
> >  tools/libxc/xc_domain_save.c     |   24 +
> >  tools/libxc/xc_hvm_build_x86.c   |   30 +-
> >  tools/libxc/xenctrl.h            |   52 ++
> >  tools/libxc/xg_save_restore.h    |    2 +
> >  xen/arch/x86/hvm/hvm.c           | 1035 +++++++++++++++++++++++++++++++++++---
> >  xen/arch/x86/hvm/io.c            |    2 +-
> >  xen/include/asm-x86/hvm/domain.h |   34 +-
> >  xen/include/asm-x86/hvm/hvm.h    |    3 +-
> >  xen/include/asm-x86/hvm/io.h     |    2 +-
> >  xen/include/public/hvm/hvm_op.h  |   70 +++
> >  xen/include/public/hvm/ioreq.h   |    1 +
> >  xen/include/public/hvm/params.h  |    5 +-
> >  14 files changed, 1383 insertions(+), 79 deletions(-)
> >
> > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > index 369c3f3..8cec171 100644
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -1284,6 +1284,181 @@ int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long
> >      return rc;
> >  }
> >
> > +int xc_hvm_create_ioreq_server(xc_interface *xch,
> > +                               domid_t domid,
> > +                               ioservid_t *id)
> > +{
> > +    DECLARE_HYPERCALL;
> > +    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
> > +    int rc;
> > +
> > +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > +    if ( arg == NULL )
> > +        return -1;
> > +
> > +    hypercall.op     = __HYPERVISOR_hvm_op;
> > +    hypercall.arg[0] = HVMOP_create_ioreq_server;
> > +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> > +    arg->domid = domid;
> > +    rc = do_xen_hypercall(xch, &hypercall);
> > +    *id = arg->id;
> > +    xc_hypercall_buffer_free(xch, arg);
> > +    return rc;
> > +}
> 
> Sorry if I missed it, but was there anywhere the 8-server limit is
> checked?  What happens if someone calls xc_hvm_create_ioreq_server() 9
> times?
> 
> > @@ -728,18 +877,53 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, struct domain *d,
> >
> >   fail:
> >      hvm_ioreq_server_remove_all_vcpus(s);
> > -    hvm_ioreq_server_unmap_pages(s);
> > +    hvm_ioreq_server_unmap_pages(s, is_default);
> >
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> >      return rc;
> >  }
> >
> > -static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
> > +static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s,
> > +                                    bool_t is_default)
> >  {
> > +    struct list_head *entry;
> > +
> > +    list_for_each ( entry,
> > +                    &s->mmio_range_list )
> > +    {
> > +        struct hvm_io_range *x = list_entry(entry,
> > +                                            struct hvm_io_range,
> > +                                            list_entry);
> > +
> > +        xfree(x);
> 
> Hang on, isn't x still actually on mmio_range_list at this point, and
> doesn't entry equal &(x->list_entry)?  So the next time around
> list_for_each(), you're using x after you've freed it?
> 

Good catch, I should be using list_for_each_safe().

> I think you're missing a list_del_entry(), here and in the other 2
> loops in this function.
> 

I don't need the del as this is a full teardown, but I do need to avoid
invalidating my iterator :-)
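
Something along these lines (untested sketch, keeping the names used in
the patch) for each of the three loops:

    struct list_head *entry, *next;

    /* 'next' is sampled before the current entry is freed via 'x' */
    list_for_each_safe ( entry, next, &s->mmio_range_list )
    {
        struct hvm_io_range *x = list_entry(entry,
                                            struct hvm_io_range,
                                            list_entry);

        xfree(x);
    }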

  Paul

> I haven't gone through the rest of it with a fine-tooth comb, but
> interface-wise it looks good.
> 
>  -George
