
Re: [Xen-devel] [PATCH v4 5/8] ioreq-server: add support for multiple servers



> -----Original Message-----
> From: Ian Campbell
> Sent: 08 April 2014 09:41
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Stefano Stabellini; Jan Beulich
> Subject: Re: [PATCH v4 5/8] ioreq-server: add support for multiple servers
> 
> On Tue, 2014-04-08 at 09:32 +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Ian Campbell
> > > Sent: 07 April 2014 16:58
> > > To: Paul Durrant
> > > Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Stefano Stabellini; Jan Beulich
> > > Subject: Re: [PATCH v4 5/8] ioreq-server: add support for multiple servers
> > >
> > > On Wed, 2014-04-02 at 16:11 +0100, Paul Durrant wrote:
> > > > The previous single ioreq server that was created on demand now
> > > > becomes the default server and an API is created to allow secondary
> > > > servers, which handle specific IO ranges or PCI devices, to be added.
> > > >
> > > > When the guest issues an IO, the list of secondary servers is checked
> > > > for a matching IO range or PCI device. If none is found then the IO
> > > > is passed to the default server.
> > > >
> > > > Secondary servers use guest pages to communicate with emulators, in
> > > > the same way as the default server. These pages need to be in the
> > > > guest physmap; otherwise there is no suitable reference that can be
> > > > queried by an emulator in order to map them. Therefore a pool of
> > > > pages in the current E820 reserved region, just below the special
> > > > pages, is used. Secondary servers allocate from and free to this pool
> > > > as they are created and destroyed.
> > > >
> > > > The size of the pool is currently hardcoded in the domain build at a
> > > > value of 8. This should be sufficient for now and both the location and
> > > > size of the pool can be modified in future without any need to change
> > > > the API.
> > >
> > > A pool of 8 implies 4 servers with a buffered and unbuffered page each?
> > > Or later on some combination of servers using both or one or the other?
> > >
> >
> > Yes, it seems like enough room for now.
> 
> What I was asking is: can I have 3 servers with both buffered and
> unbuffered pages plus 2 with only unbuffered, for a total of 8 pages?
> 

Yes.
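
To make the page accounting concrete, here is an illustrative sketch (the
constant and function names below are invented for illustration and are not
part of the patch): each secondary server consumes one page for synchronous
("unbuffered") ioreqs and, if it also uses buffered ioreqs, one more.

  /* Illustrative only: how the hardcoded 8-page pool divides up. */
  #define IOREQ_SERVER_POOL_PAGES 8   /* hypothetical name for the size */

  static unsigned int pool_pages_needed(unsigned int nr_both,
                                        unsigned int nr_sync_only)
  {
      /* one sync page per server, plus one buffered page where used */
      return nr_both * 2 + nr_sync_only;
  }

  /* 4 * 2         == 8  (four servers with both page types), or
   * 3 * 2 + 2 * 1 == 8  (three with both plus two sync-only, as above). */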

> > [snip]
> > > > @@ -1748,6 +1770,11 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
> > > >      if (pagebuf.viridian != 0)
> > > >          xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
> > > >
> > > > +    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_SERVER_PFN,
> > > > +                     pagebuf.ioreq_server_pfn);
> > > > +    xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
> > > > +                     pagebuf.nr_ioreq_server_pages);
> > >
> > > When migrating into this from the previous version of Xen both of these
> > > will be zero. Does that do the right thing? I didn't see any special
> > > handling on the hypervisor side. In fact it looks like it will EINVAL.
> > >
> >
> > Yes, it will be EINVAL, which is why the return code is deliberately
> > not checked. I can special-case this if you think that would be clearer, or
> > stick '(void)' in front of these calls to show that the return code is
> > being deliberately ignored.
> 
> At the very least the behaviour needs to be written down somewhere.
> 
> But I think being explicit in the restore code about these cases would
> be clearer than relying on EINVAL from the hypervisor, i.e. remember if
> you saw that chunk or not and either make the call or not.
> 
> As for not checking the return code -- what if it fails for some other
> reason, perhaps because the migration stream is corrupt?
> 

EPERM is the only other possible failure, and that shouldn't happen for a legit 
domain restore. I'll avoid making the call if the save record is not there 
though, as you suggest.
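
For reference, a sketch of what that guard in xc_domain_restore() might look
like (illustrative only, not the actual follow-up patch):

  /* Only push the ioreq server params if the corresponding save record
   * was actually seen in the stream; a stream from an older Xen simply
   * omits it, so we no longer rely on the hypervisor failing a zero PFN
   * with -EINVAL. */
  if ( pagebuf.nr_ioreq_server_pages != 0 && pagebuf.ioreq_server_pfn != 0 )
  {
      xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_SERVER_PFN,
                       pagebuf.ioreq_server_pfn);
      xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
                       pagebuf.nr_ioreq_server_pages);
  }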

> How does everything agree on the location of the fallback ioreq PFN in
> this case since it isn't set here?
> 

The QEMU PFNs don't move, so no change there. If the params aren't set properly 
then secondary servers cannot be created after migration.
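
As an aside, an emulator (or toolstack) wanting to create a secondary server
after such a migration could probe the param first. A minimal sketch, assuming
only xc_get_hvm_param() and HVM_PARAM_NR_IOREQ_SERVER_PAGES from this series;
the helper name is made up:

  #include <xenctrl.h>

  /* Returns non-zero if the domain has a page pool for secondary servers.
   * A zero count (e.g. after migrating in from a Xen without this support)
   * means only the default server is usable. */
  static int secondary_servers_available(xc_interface *xch, domid_t dom)
  {
      unsigned long nr_pages = 0;

      if ( xc_get_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
                            &nr_pages) < 0 )
          return 0;    /* param not available at all */

      return nr_pages != 0;
  }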

> >
> > > > +
> > > >      if (pagebuf.acpi_ioport_location == 1) {
> > > >          DBGPRINTF("Use new firmware ioport from the checkpoint\n");
> > > >          xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
> > >
> > > > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > > > index e3a32f2..3b0c678 100644
> > > > --- a/tools/libxc/xenctrl.h
> > > > +++ b/tools/libxc/xenctrl.h
> > > > @@ -1801,6 +1801,48 @@ void xc_clear_last_error(xc_interface *xch);
> > > >  int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value);
> > > >  int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value);
> > > >
> > > > +/*
> > > > + * IOREQ server API
> > > > + */
> > >
> > > Hopefully xen/include/public has the associated docs for what all these
> > > functions do.
> >
> > Not yet. Do you think this would be best handled in a header or in some
> > markdown in docs/misc?
> 
> For hypercalls, the headers are best; they will be exported into the
> docs subtree by the build. See
> http://xenbits.xen.org/docs/unstable/hypercall/x86_64/index.html.
> 

Ok. Ta,

  Paul

> Ian.
