
Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for multiple servers



On Mon, 2014-03-17 at 12:25 +0000, Paul Durrant wrote:
> > drivers in the guest.
> > >  This parameter only takes effect when device_model_version=qemu-xen.
> > >  See F<docs/misc/pci-device-reservations.txt> for more information.
> > >
> > > +=item B<secondary_device_emulators=NUMBER>
> > > +
> > > +If a number of secondary device emulators (i.e. in addition to
> > > +qemu-xen or qemu-xen-traditional) are to be invoked to support the
> > > +guest then this parameter can be set with the count of how many are
> > > +to be used. The default value is zero.
> > 
> > This is an odd thing to expose to the user. Surely (lib)xl should be
> > launching these things while building the domain and therefore know how
> > many there are; the config options should be things like
> > "use_split_dm_for_foo=1" or device_emulators = ["/usr/bin/first-dm",
> > "/usr/local/bin/second-dm"] or something.
> > 
> 
> As George says, I don't want to restrict the only way to kick off
> emulators to being libxl at this point.

OK. Conversely, though, there's no support here for libxl launching them
either?
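
Purely for illustration (neither shape exists in libxl today), the two
styles being contrasted above would look something like this in an xl
config file:

    # as in this series: a bare count, emulators started out-of-band
    secondary_device_emulators = 2

    # alternative: libxl launches the emulators itself
    device_emulators = [ "/usr/bin/first-dm", "/usr/local/bin/second-dm" ]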

> 
> > > +
> > >  =back
> > >
> > >  =head2 Device-Model Options
> > > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > > index 369c3f3..dfa905b 100644
> > > --- a/tools/libxc/xc_domain.c
> > > +++ b/tools/libxc/xc_domain.c
> > > +int xc_hvm_get_ioreq_server_info(xc_interface *xch,
> > > +                                 domid_t domid,
> > > +                                 ioservid_t id,
> > > +                                 xen_pfn_t *pfn,
> > > +                                 xen_pfn_t *buf_pfn,
> > > +                                 evtchn_port_t *buf_port)
> > > +{
> > > +    DECLARE_HYPERCALL;
> > > +    DECLARE_HYPERCALL_BUFFER(xen_hvm_get_ioreq_server_info_t, arg);
> > > +    int rc;
> > > +
> > > +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > > +    if ( arg == NULL )
> > > +        return -1;
> > > +
> > > +    hypercall.op     = __HYPERVISOR_hvm_op;
> > > +    hypercall.arg[0] = HVMOP_get_ioreq_server_info;
> > > +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> > > +    arg->domid = domid;
> > > +    arg->id = id;
> > > +    rc = do_xen_hypercall(xch, &hypercall);
> > > +    if ( rc != 0 )
> > > +        goto done;
> > > +
> > > +    if ( pfn )
> > > +        *pfn = arg->pfn;
> > > +
> > > +    if ( buf_pfn )
> > > +        *buf_pfn = arg->buf_pfn;
> > > +
> > > +    if ( buf_port )
> > > +        *buf_port = arg->buf_port;
> > 
> > This looks a bit like this function should take a
> > xen_hvm_get_ioreq_server_info_t* and use the bounce buffering stuff.
> > Unless there is some desire to hide that struct from the callers
> > perhaps?
> > 
> 
> Well, I guess the caller could do the marshalling, but isn't it neater
> to hide it all in libxenctrl internals? I don't know what the general
> philosophy behind libxenctrl APIs is. Most of the other functions seem
> to take an xc_interface and a bunch of args rather than a single
> struct.

I guess it depends what the caller is most likely to do -- if it uses
them immediately and throws them away then this way is fine; if it's
going to pass them around then the existing struct seems useful. I
presume it is the former, or you'd have written it the other way, in
which case this is fine.
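
As a concrete example of the "use them immediately" pattern (illustrative
only, not part of the series):

    /* Hypothetical caller: fetch the info and consume it on the spot. */
    #include <stdio.h>
    #include <xenctrl.h>

    static int dump_ioreq_server_info(xc_interface *xch, domid_t domid,
                                      ioservid_t id)
    {
        xen_pfn_t pfn, buf_pfn;
        evtchn_port_t buf_port;

        if ( xc_hvm_get_ioreq_server_info(xch, domid, id,
                                          &pfn, &buf_pfn, &buf_port) < 0 )
            return -1;

        /* Nothing is passed around; the values are used and dropped. */
        fprintf(stderr, "ioreq server %u: pfn %#lx buf_pfn %#lx buf_port %u\n",
                (unsigned int)id, (unsigned long)pfn,
                (unsigned long)buf_pfn, buf_port);
        return 0;
    }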

> > > diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> > > index 1f6ce50..3116653 100644
> > > --- a/tools/libxc/xc_domain_restore.c
> > > +++ b/tools/libxc/xc_domain_restore.c
> > > @@ -746,6 +746,7 @@ typedef struct {
> > >      uint64_t acpi_ioport_location;
> > >      uint64_t viridian;
> > >      uint64_t vm_generationid_addr;
> > > +    uint64_t nr_ioreq_servers;
> > 
> > This makes me wonder: what happens if the source and target hosts do
> > different amounts of disaggregation? Perhaps in Xen N+1 we split some
> > additional component out into its own process?
> > 
> > This is going to be complex with the allocation of space for special
> > pages, isn't it?
> >
> 
> As long as we have enough special pages then is it complex?

The "have enough" is where the complexity comes in though. If Xen
version X needed N special pages and Xen X+1 needs N+2 pages then we
have a tricky situation because people may well configure the guest with
N.

>  All Xen needs to know is the base ioreq server pfn and how many the
> VM has. I'm overloading the existing HVM param as the base and then
> adding a new one for the count. George (as I understand it) suggested
> leaving the old params alone, grandfathering them in for the catch-all
> server, and then having a new area for secondary emulators. I'm happy
> with that and, as long as suitable save records are introduced for
> any other disagg. work, then I don't think there's any conflict.
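
Just to make that concrete, a rough toolstack-side sketch of the
base-plus-count arrangement (assumptions on my part: HVM_PARAM_IOREQ_PFN
is the existing param being overloaded as the base, and whoever builds
the domain makes these calls, which is exactly the open question above):

    /* Illustrative only: record where the ioreq server pages start and
     * how many servers the VM may have, at domain build time. */
    #include <xenctrl.h>
    #include <xen/hvm/params.h>

    static int setup_ioreq_servers(xc_interface *xch, domid_t dom,
                                   xen_pfn_t base_pfn,
                                   unsigned long nr_ioreq_servers)
    {
        if ( xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN, base_pfn) ||
             xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
                              nr_ioreq_servers) )
            return -1;
        return 0;
    }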

> > > +
> > > +        chunk.id = XC_SAVE_ID_HVM_NR_IOREQ_SERVERS;
> > > +        chunk.data = 0;
> > > +        xc_get_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
> > > +                         (unsigned long *)&chunk.data);
> > > +
> > > +        if ( (chunk.data != 0) &&
> > 
> > Can this ever be 0 for an HVM guest?
> > 
> 
> I was assuming it would be 0 for a guest migrated in from a host that did
> not know about secondary emulators.

Would that host even include this chunk type at all?
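If it doesn't, then the restore side presumably wants a sane default
rather than 0; something along the lines of this illustrative helper
(an assumption on my part, not anything in the patch):

    #include <stdint.h>

    /* A stream from a host which never emits XC_SAVE_ID_HVM_NR_IOREQ_SERVERS
     * implies a single catch-all ioreq server, not zero. */
    static uint64_t effective_nr_ioreq_servers(uint64_t from_stream)
    {
        return from_stream ? from_stream : 1;
    }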

> > > +    evtchn_port_t buf_port; /* OUT - buf ioreq port */
> > > +};
> > > +typedef struct xen_hvm_get_ioreq_server_info xen_hvm_get_ioreq_server_info_t;
> > > +DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_info_t);
> > > +
> > > +#define HVMOP_map_io_range_to_ioreq_server 19
> > > +struct xen_hvm_map_io_range_to_ioreq_server {
> > > +    domid_t domid;                  /* IN - domain to be serviced */
> > > +    ioservid_t id;                  /* IN - handle from HVMOP_register_ioreq_server */
> > > +    int is_mmio;                    /* IN - MMIO or port IO? */
> > 
> > Do we normally make this distinction via two different hypercalls?
> > 
> 
> I don't really think there's much precedent at an API layer. There's
> code inside Xen which overloads portio and mmio to some degree.

XEN_DOMCTL_ioport_mapping vs. XEN_DOMCTL_memory_mapping is the closest
analogue I think.
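
For comparison, a split along those lines might look something like the
below (hypothetical, not what the posted patch does):

    /* Hypothetical alternative to a single op with an is_mmio flag:
     * one op per range type, mirroring the ioport/memory DOMCTL split. */
    struct xen_hvm_map_mmio_range_to_ioreq_server {
        domid_t domid;        /* IN - domain to be serviced */
        ioservid_t id;        /* IN - handle from HVMOP_register_ioreq_server */
        uint64_t start, end;  /* IN - inclusive MMIO range */
    };

    struct xen_hvm_map_pio_range_to_ioreq_server {
        domid_t domid;        /* IN - domain to be serviced */
        ioservid_t id;        /* IN - handle from HVMOP_register_ioreq_server */
        uint32_t start, end;  /* IN - inclusive port range */
    };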

It's not really clear to me when DOMCTL vs. HVMOP is appropriate; maybe
one of the h/v side folks will comment.

Ian.

