
Re: [Xen-devel] [PATCH v4 5/8] ioreq-server: add support for multiple servers



> -----Original Message-----
> From: Ian Campbell
> Sent: 07 April 2014 16:58
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Stefano Stabellini; Jan Beulich
> Subject: Re: [PATCH v4 5/8] ioreq-server: add support for multiple servers
> 
> On Wed, 2014-04-02 at 16:11 +0100, Paul Durrant wrote:
> > The previous single ioreq server that was created on demand now
> > becomes the default server and an API is created to allow secondary
> > servers, which handle specific IO ranges or PCI devices, to be added.
> >
> > When the guest issues an IO the list of secondary servers is checked
> > for a matching IO range or PCI device. If none is found then the IO
> > is passed to the default server.
> >
> > Secondary servers use guest pages to communicate with emulators, in
> > the same way as the default server. These pages need to be in the
> > guest physmap otherwise there is no suitable reference that can be
> > queried by an emulator in order to map them. Therefore a pool of
> > pages in the current E820 reserved region, just below the special
> > pages is used. Secondary servers allocate from and free to this pool
> > as they are created and destroyed.
> >
> > The size of the pool is currently hardcoded in the domain build at a
> > value of 8. This should be sufficient for now and both the location and
> > size of the pool can be modified in future without any need to change the
> > API.
> 
> A pool of 8 implies 4 servers with a buffered and unbuffered page each?
> Or later on some combination of servers using both or one or the other?
> 

Yes - that's one synchronous (ioreq) page plus one buffered page per secondary server, so a pool of 8 gives room for 4 of them, which seems like enough for now.
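
In code-sketch form, the sizing plus the dispatch described in the commit message (every name below is made up for the example; nothing is lifted from the patch):

#include <stdbool.h>
#include <stdint.h>

#define POOL_PAGES        8   /* hardcoded pool size in the domain builder */
#define PAGES_PER_SERVER  2   /* one synchronous ioreq page + one buffered page */
#define MAX_SECONDARY     (POOL_PAGES / PAGES_PER_SERVER)   /* == 4 */

typedef struct {
    bool     in_use;
    uint64_t io_start, io_end;   /* registered port/MMIO range */
} server_t;

static server_t secondary[MAX_SECONDARY];

/*
 * Dispatch as described in the commit message: check the secondary
 * servers for a matching range; if none matches, the IO goes to the
 * default server (represented by NULL here) exactly as before.
 */
static server_t *select_server(uint64_t addr)
{
    unsigned int i;

    for ( i = 0; i < MAX_SECONDARY; i++ )
        if ( secondary[i].in_use &&
             addr >= secondary[i].io_start && addr <= secondary[i].io_end )
            return &secondary[i];

    return NULL;   /* fall back to the default server */
}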

> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > ---
> >  tools/libxc/xc_domain.c          |  175 +++++++
> >  tools/libxc/xc_domain_restore.c  |   27 +
> >  tools/libxc/xc_domain_save.c     |   24 +
> >  tools/libxc/xc_hvm_build_x86.c   |   30 +-
> >  tools/libxc/xenctrl.h            |   52 ++
> >  tools/libxc/xg_save_restore.h    |    2 +
> >  xen/arch/x86/hvm/hvm.c           | 1035 +++++++++++++++++++++++++++++++++++---
> >  xen/arch/x86/hvm/io.c            |    2 +-
> >  xen/include/asm-x86/hvm/domain.h |   34 +-
> >  xen/include/asm-x86/hvm/hvm.h    |    3 +-
> >  xen/include/asm-x86/hvm/io.h     |    2 +-
> >  xen/include/public/hvm/hvm_op.h  |   70 +++
> >  xen/include/public/hvm/ioreq.h   |    1 +
> >  xen/include/public/hvm/params.h  |    5 +-
> >  14 files changed, 1383 insertions(+), 79 deletions(-)
> >
> > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > index 369c3f3..8cec171 100644
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -1284,6 +1284,181 @@ int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long
> >      return rc;
> >  }
> >
> > +int xc_hvm_create_ioreq_server(xc_interface *xch,
> > +                               domid_t domid,
> > +                               ioservid_t *id)
> > +{
> > +    DECLARE_HYPERCALL;
> > +    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
> > +    int rc;
> > +
> > +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > +    if ( arg == NULL )
> > +        return -1;
> > +
> > +    hypercall.op     = __HYPERVISOR_hvm_op;
> > +    hypercall.arg[0] = HVMOP_create_ioreq_server;
> > +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> 
> Can you add some newlines please.
> 

Sure.
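
For reference, with blank lines separating the buffer setup, the argument fill, the hypercall itself and the teardown, I'd expect it to end up roughly like this (the domid/id fields and the do_xen_hypercall()/xc_hypercall_buffer_free() calls follow the usual libxc pattern rather than being quoted verbatim from the patch):

int xc_hvm_create_ioreq_server(xc_interface *xch,
                               domid_t domid,
                               ioservid_t *id)
{
    DECLARE_HYPERCALL;
    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
    int rc;

    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
    if ( arg == NULL )
        return -1;

    hypercall.op     = __HYPERVISOR_hvm_op;
    hypercall.arg[0] = HVMOP_create_ioreq_server;
    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);

    arg->domid = domid;

    rc = do_xen_hypercall(xch, &hypercall);

    *id = arg->id;

    xc_hypercall_buffer_free(xch, arg);

    return rc;
}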

[snip]
> > @@ -1748,6 +1770,11 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
> >      if (pagebuf.viridian != 0)
> >          xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
> >
> > +    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_SERVER_PFN,
> > +                     pagebuf.ioreq_server_pfn);
> > +    xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
> > +                     pagebuf.nr_ioreq_server_pages);
> 
> When migrating into this from the previous version of Xen both of these
> will be zero. Does that do the right thing? I didn't see any special
> handling on the hypervisor side. In fact it looks like it will EINVAL.
> 

Yes, it will be EINVAL, which is why the return code is deliberately not 
checked. I can special-case the restore-from-older-Xen path if you think that 
would be clearer, or stick '(void)' in front of these calls to make it explicit 
that the return code is being deliberately ignored.
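
i.e. something like this in xc_domain_restore() if the cast is preferred (illustrative only):

    /* These may legitimately fail with EINVAL when restoring a save
     * image from a Xen that predates the ioreq server params, so the
     * return code is deliberately ignored. */
    (void) xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_SERVER_PFN,
                            pagebuf.ioreq_server_pfn);
    (void) xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
                            pagebuf.nr_ioreq_server_pages);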

> > +
> >      if (pagebuf.acpi_ioport_location == 1) {
> >          DBGPRINTF("Use new firmware ioport from the checkpoint\n");
> >          xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
> 
> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > index e3a32f2..3b0c678 100644
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -1801,6 +1801,48 @@ void xc_clear_last_error(xc_interface *xch);
> >  int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value);
> >  int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value);
> >
> > +/*
> > + * IOREQ server API
> > + */
> 
> Hopefully xen/include/public has the associated docs for what all these
> functions do.

Not yet. Do you think this would be best handled in a header, or in some markdown in 
docs/misc?
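
If the header is preferred, I'd imagine something along these lines in xenctrl.h (wording purely illustrative):

/*
 * IOREQ server API
 *
 * Create an ioreq server for the given domain. On success the new
 * server's id is written to *id; that id is then passed to the other
 * xc_hvm_*_ioreq_server() calls to register IO ranges or PCI devices
 * with the server.
 *
 * Returns 0 on success, -1 on error.
 */
int xc_hvm_create_ioreq_server(xc_interface *xch,
                               domid_t domid,
                               ioservid_t *id);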

  Paul

> 
> Ian.
