
[Xen-devel] Re: [PATCH 3/3] libvchan: interdomain communications library



On 09/21/2011 06:53 AM, Ian Campbell wrote:
> On Mon, 2011-09-19 at 23:43 +0100, Daniel De Graaf wrote:
>> This library implements a bidirectional communication interface between
>> applications in different domains, similar to unix sockets. Data can be
>> sent using the byte-oriented libvchan_read/libvchan_write or the
>> packet-oriented libvchan_recv/libvchan_send.
>>
>> Channel setup is done using a client-server model; domain IDs and a port
>> number must be negotiated prior to initialization. The server allocates
>> memory for the shared pages and determines the sizes of the
>> communication rings (which may span multiple pages, although the default
>> places rings and control within a single page).
>>
>> With properly sized rings, testing has shown that this interface
>> provides speed comparable to pipes within a single Linux domain; it is
>> significantly faster than network-based communication.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
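
(For anyone not familiar with the interface, a minimal usage sketch of
the calls described above; the setup function names and prototypes here
are assumptions for illustration - only read/write/recv/send are named
in the changelog:)

  /* server side, e.g. in dom0: allocate the rings and wait for the peer
   * (hypothetical prototype: peer domid, port, minimum ring sizes) */
  struct libvchan *srv = libvchan_server_init(client_domid, port, 1024, 1024);
  libvchan_send(srv, "ping", 4);        /* packet-oriented: the 4 bytes go as one unit */

  /* client side, in the other domain: connect to the server's rings */
  struct libvchan *cli = libvchan_client_init(server_domid, port);
  char buf[4];
  libvchan_recv(cli, buf, sizeof buf);  /* packet-oriented: returns once all 4 bytes arrive */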
> 
> I only skimmed this one; I had a few minor thoughts below but really I'm
> pretty much OK for it to go in (modulo any fallout from comments on
> patches 1+2).
> 
> Definite Bonus Points for the doxygen/kernel doc commentary in the
> headers - which tool parses them? (a few comments in the code itself seem
> to have the "/**" marker but not the rest of the syntax).
 
I think doxygen parses them, but I haven't personally run doxygen to
verify that it works as expected.
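
For reference, the fully marked-up form would look roughly like this (a
sketch only; I haven't checked it against an actual doxygen run, so
treat the prototype and return convention as illustrative):

  /**
   * Packet-based receive: reads exactly size bytes or fails.
   *
   * @param ctrl the vchan control structure
   * @param data buffer to receive into
   * @param size number of bytes to read
   * @return size on success, 0 if nonblocking and the data is not yet
   *         available, or -1 on error
   */
  int libvchan_recv(struct libvchan *ctrl, void *data, size_t size);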

> You changed the library name to libxenvchan but not the path to the
> source nor the API names?

I suppose backwards compatibility with the existing API has already been
killed, so there's no reason not to change the names - it does make
everything more consistent (and easier to grep for).

>> +static int init_gnt_srv(struct libvchan *ctrl)
>> +{
>> +       int pages_left = ctrl->read.order >= PAGE_SHIFT ? 1 << (ctrl->read.order - PAGE_SHIFT) : 0;
> 
> Here you do >= PAGE_SHIFT but on the out_unmap_left path you do > 11.
> 
> (am I right that left == server and right == client in the libvchan
> terminology?)
> 

From the public/io/libvchan.h header:
  * Standard consumer/producer interface, one pair per buffer
  * left is client write, server read
  * right is client read, server write

So the client is on the left, if you assume the writer owns the buffer.
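
In other words, the setup ends up doing roughly this (sketch only; the
field names are illustrative rather than the exact ones in the patch):

  if (ctrl->is_server) {
          ctrl->read.ring  = &ctrl->ring->left;   /* client writes, server reads  */
          ctrl->write.ring = &ctrl->ring->right;  /* server writes, client reads  */
  } else {
          ctrl->read.ring  = &ctrl->ring->right;
          ctrl->write.ring = &ctrl->ring->left;
  }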

>> +       if (ctrl->read.order == 10) {
>> +               ctrl->read.buffer = ((void*)ctrl->ring) + 1024;
>> +       } else if (ctrl->read.order == 11) {
>> +               ctrl->read.buffer = ((void*)ctrl->ring) + 2048;
>> +       } else {
>> +               ctrl->read.buffer = xc_gntshr_share_pages(ctrl->gntshr, ctrl->other_domain_id,
>> +                       pages_left, ctrl->ring->grants, 1);
>> +               if (!ctrl->read.buffer)
>> +                       goto out_ring;
>> +       }
> 
> switch (...read.order)?
> 
> In other places you have MAX_LARGE_RING/MAX_SMALL_RING etc, I think
> using SMALL/LARGE_RING_ORDER instead of 10 and 11 seems like a good
> idea.
> 
> Similarly using LARGE/SMALL_RING_OFFSET instead of 1024/2048 would help
> clarity.
> 
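
For illustration, the switch form would look something like this
(assuming SMALL_RING_ORDER/LARGE_RING_ORDER end up defined as 10/11 and
SMALL_RING_OFFSET/LARGE_RING_OFFSET as 1024/2048):

  switch (ctrl->read.order) {
  case SMALL_RING_ORDER:
          ctrl->read.buffer = ((void*)ctrl->ring) + SMALL_RING_OFFSET;
          break;
  case LARGE_RING_ORDER:
          ctrl->read.buffer = ((void*)ctrl->ring) + LARGE_RING_OFFSET;
          break;
  default:
          ctrl->read.buffer = xc_gntshr_share_pages(ctrl->gntshr,
                  ctrl->other_domain_id, pages_left, ctrl->ring->grants, 1);
          if (!ctrl->read.buffer)
                  goto out_ring;
          break;
  }
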
>> +       if (ctrl->write.order < 10 || ctrl->write.order > 24)
>> +               goto out_unmap_ring;
> 
> What is the significance of 2^24?
> 

Actually, this should be 20 to match MAX_RING_SIZE in init.c; that number
is derived from 1024 bytes of grant data. An order of 22 will always cause
the list of grants to overrun the primary shared page; an order of 21 on
both sides will also cause this, and can also cause the grant list to overlap
the LARGE_RING area. From testing, the performance gain from larger rings
begins to drop off before 2^20 (although this may depend on the size of
your L2/L3 cache). Also, gntalloc is restricted to 1024 pages by default
(this can be adjusted via sysfs or a module parameter).
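
(For reference, the arithmetic behind that bound, assuming 4-byte grant
references and 4096-byte pages: 1024 bytes of grant data holds 256
references, and 256 pages is 2^20 bytes, hence order 20.)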

>> +
>> +// find xenstore entry
>> +       snprintf(buf, sizeof buf, "/local/domain/%d/data/vchan/%s/%d/ring-ref",
>> +               ctrl->other_domain_id, domid_str, ctrl->device_number);
> 
> I wonder if the base of this path (up to and including "%s/%d"?) ought
> to be caller provided? My thinking is that the rendezvous between client
> and server is out of band and the path is really an element (or even the
> total encoding) of that OOB communication.
> 
> It would also push the selection of xs location up into the
> application, which also defines the protocol. For example I might want to
> build a pv protocol with this library which is supported by the
> toolstack and therefore want to put my stuff under devices etc or in any
> other protocol specific xs location. The wart I previously mentioned wrt
> using the "data" directory would then be an application wart (which I
> think is ok) rather than baked into the libraries.
> 
> Ian.
> 

Allowing the caller to specify the xenstore path would make the interface
more flexible, and would also remove the arbitrary port numbers, which
already seem likely to collide.
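
Something along these lines, perhaps (a sketch of what a
caller-provided-path variant might look like, not the current API):

  /* Hypothetical: the caller passes the xenstore node under which the
   * library should write its ring-ref and event-channel keys. */
  struct libvchan *libvchan_server_init(int peer_domid, const char *xs_path,
                                        size_t read_min, size_t write_min);

  /* e.g. a toolstack-managed protocol could then use its own location: */
  struct libvchan *ctrl =
          libvchan_server_init(domid, "/local/domain/0/backend/myproto/1/0", 0, 0);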

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

