Re: [Xen-devel] [PATCH v5 6/9] ioreq-server: add support for multiple servers
> -----Original Message-----
> From: Ian Campbell
> Sent: 07 May 2014 10:45
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Stefano Stabellini; Jan Beulich
> Subject: Re: [PATCH v5 6/9] ioreq-server: add support for multiple servers
>
> On Tue, 2014-05-06 at 14:28 +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Ian Campbell
> > > Sent: 06 May 2014 11:46
> > > To: Paul Durrant
> > > Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Stefano Stabellini; Jan Beulich
> > > Subject: Re: [PATCH v5 6/9] ioreq-server: add support for multiple servers
> > >
> > > On Thu, 2014-05-01 at 13:08 +0100, Paul Durrant wrote:
> > > > NOTE: To prevent emulators running in non-privileged guests from
> > > > potentially allocating very large amounts of xen heap, the core
> > > > rangeset code has been modified to introduce a hard limit of 256
> > > > ranges per set.
> > >
> > > OOI how much RAM does that correspond to?
> >
> > Each range is two pointers (list_head) and two unsigned longs (start and
> > end), so that's 32 bytes; 256 ranges is therefore two pages' worth.
>
> Seems reasonable.
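As a back-of-envelope check of that figure, here's a standalone sketch; the
struct layout is only illustrative of the description above, not the actual
rangeset internals:

#include <stdio.h>

/* Illustrative layout: a two-pointer list head plus start and end. */
struct range_like {
    struct { void *next, *prev; } list;   /* list_head: two pointers */
    unsigned long s, e;                    /* start and end */
};

int main(void)
{
    size_t per_range = sizeof(struct range_like);
    printf("per range: %zu bytes; 256 ranges: %zu bytes (%zu pages of 4096)\n",
           per_range, 256 * per_range, (256 * per_range) / 4096);
    return 0;
}

On a 64-bit build that comes out at 32 bytes per range and 8192 bytes for 256
ranges, i.e. two 4 KiB pages, matching the figure above.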
>
>
> > > > + }
> > > > +
> > > > + arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > > > + if ( arg == NULL )
> > > > + return -1;
> > > > +
> > > > + hypercall.op = __HYPERVISOR_hvm_op;
> > > > + hypercall.arg[0] = HVMOP_map_io_range_to_ioreq_server;
> > > > + hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> > > > +
> > > > + arg->domid = domid;
> > > > + arg->id = id;
> > > > + arg->type = HVMOP_IO_RANGE_PCI;
> > > > + arg->start = arg->end = HVMOP_PCI_SBDF((uint64_t)segment,
> > >
> > > Since you have HVMOP_IO_RANGE_PCI do you not want to expose that via
> > > this interface?
> > >
> >
> > I could have crunched this into the map_range function. I left it
> > separate because I thought it was more convenient for callers - who I
> > think will most likely deal with one PCI device at a time.
>
> Oh, so the "ranginess" is an existing feature of the hypercall which you
> are using but which doesn't really apply to this use case? (I did think the
> concept of a range of PCI devices wasn't likely to be very useful,
> except perhaps in the "all functions of a device" case.)
>
> Does the actual hypercall deal with a range? Or does it insist that
> start == end? Looks like the former; I'm happy with that if the
> hypervisor-side guys are. Not sure if it is worth a comment somewhere?
>
It'll deal with a range. I debated whether "all functions of a device" would be
useful, but decided to stick with a single PCI function in the user API as I
think that's what most clients will want. I'll add a comment to note that it's
a singleton range.
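By "singleton range" I mean something like the sketch below; the bit layout in
pci_sbdf() is my assumption of the usual seg:bus:dev.fn packing, not a copy of
the macro in the patch:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed packing: segment in bits 16-31, bus in bits 8-15,
 * device in bits 3-7, function in bits 0-2. */
static uint64_t pci_sbdf(uint16_t seg, uint8_t bus, uint8_t dev, uint8_t fn)
{
    return ((uint64_t)seg << 16) | ((uint64_t)bus << 8) |
           ((uint64_t)(dev & 0x1f) << 3) | (uint64_t)(fn & 0x07);
}

int main(void)
{
    /* A single PCI function, e.g. 0000:00:02.0, is expressed as a range
     * whose start and end are the same packed SBDF value. */
    uint64_t sbdf = pci_sbdf(0x0000, 0x00, 0x02, 0x0);
    printf("start = end = 0x%" PRIx64 "\n", sbdf);
    return 0;
}

In the libxc snippet quoted above the same idea shows up as
arg->start = arg->end = HVMOP_PCI_SBDF(...).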
Paul
> > > > @@ -502,6 +505,31 @@ static int setup_guest(xc_interface *xch,
> > > > special_pfn(SPECIALPAGE_SHARING));
> > > >
> > > > /*
> > > > + * Allocate and clear additional ioreq server pages. The default
> > > > + * server will use the IOREQ and BUFIOREQ special pages above.
> > > > + */
> > > > + for ( i = 0; i < NR_IOREQ_SERVER_PAGES; i++ )
> > > > + {
> > > > + xen_pfn_t pfn = ioreq_server_pfn(i);
> > > > +
> > > > + rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
> > > > + if ( rc != 0 )
> > > > + {
> > > > + PERROR("Could not allocate %d'th ioreq server page.", i);
> > >
> > > This will say things like "1'th". "Could not allocate ioreq server page
> > > %d" avoids that.
> >
> > Ok. It was a cut'n'paste from the special_pfn code just above. I'll
> > fix that while I'm in the neighbourhood.
>
> Thanks!
>
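Just to illustrate the point about the ordinal, a trivial standalone demo
(nothing Xen-specific):

#include <stdio.h>

int main(void)
{
    /* The original wording yields awkward ordinals such as "1'th": */
    for (int i = 0; i < 3; i++)
        printf("Could not allocate %d'th ioreq server page.\n", i);

    /* The suggested wording sidesteps the ordinal problem: */
    for (int i = 0; i < 3; i++)
        printf("Could not allocate ioreq server page %d.\n", i);

    return 0;
}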