Re: [Xen-devel] [PATCH RFC] xen-netback: coalesce frags before copying
On Fri, 2013-03-15 at 12:31 +0000, Wei Liu wrote:
> On Fri, 2013-03-15 at 10:42 +0000, Ian Campbell wrote:
> > On Thu, 2013-03-14 at 18:26 +0000, Wei Liu wrote:
> > > + */
> > > +#define NETBK_MAX_SKB_FRAGS_DEFAULT 20
> > > +static unsigned int max_skb_frags = NETBK_MAX_SKB_FRAGS_DEFAULT;
> > > +module_param(max_skb_frags, uint, 0444);
> >
> > You could at least make this 0644 so root can up the limit dynamically?
>
> This was done on purpose. Changing this on the fly can cause a race
> condition. If we are up to changing this, we need to write more code to
> make sure all worker threads are in a quiescent state, and that adds a
> layer of complexity.

Right, I figured this out myself further down ;-)

> > > -       for (i = start; i < shinfo->nr_frags; i++, txp++) {
> > > -               struct page *page;
> > > -               pending_ring_idx_t index;
> > > +       /* Coalesce */
> > > +       total_frags_merged = 0;
> > > +       for (i = j = start; i < MAX_SKB_FRAGS && j < nr_frags; i++) {
> >
> > I can't see any code which causes us to drop the frag if we end up with
> > i >= MAX_SKB_FRAGS? Is it just too subtle for me?
>
> The basic assumption is that it is not possible to reach i >= MAX_SKB_FRAGS,
> since the maximum packet size supported in the backend is 64K and we can
> always accommodate such a packet. Any packet larger than 64K causes a fatal
> error early on.

Does that early check also account for frontends which lie in the initial
slot->size, or is that otherwise caught later too? i.e. do we do the right
thing when there are more bytes in the additional slots than the initial
header said there would be?

I expect we already do, I just want to be sure this doesn't subtly alter
the conditions.

> I could remove "i < MAX_SKB_FRAGS" if this causes confusion, it is just
> a redundant check.

Is it an actual invariant? i.e. could it be BUG_ON?
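
For reference, the arithmetic behind that invariant is easy to check in
isolation. In the stand-alone sketch below, the constants (4K pages, the 64K
packet cap, and a MAX_SKB_FRAGS-style budget of 64K/PAGE_SIZE + 1) are
illustrative assumptions rather than the kernel's exact definitions; the
point is only that a fully coalesced 64K packet cannot need more destination
frags than that budget, which is the property a BUG_ON(i >= MAX_SKB_FRAGS)
would rely on.

    /* Stand-alone sketch, not kernel code: all constants are assumptions
     * chosen to mirror (not reproduce) the kernel's definitions. */
    #include <assert.h>
    #include <stdio.h>

    #define TOY_PAGE_SIZE    4096u
    #define TOY_MAX_PACKET  65536u                               /* 64K cap enforced early on */
    #define TOY_FRAG_BUDGET (TOY_MAX_PACKET / TOY_PAGE_SIZE + 1) /* MAX_SKB_FRAGS-like bound */

    int main(void)
    {
            /* Coalescing fills every destination page completely except
             * the last one, so the worst case is simply a full 64K packet. */
            unsigned int worst_case =
                    (TOY_MAX_PACKET + TOY_PAGE_SIZE - 1) / TOY_PAGE_SIZE;

            printf("worst-case destination frags: %u (budget %u)\n",
                   worst_case, TOY_FRAG_BUDGET);

            /* The property a BUG_ON(i >= MAX_SKB_FRAGS) would rely on. */
            assert(worst_case <= TOY_FRAG_BUDGET);
            return 0;
    }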
> > >                 struct pending_tx_info *pending_tx_info =
> > >                         netbk->pending_tx_info;
> > >
> > > -               index = pending_index(netbk->pending_cons++);
> > > -               pending_idx = netbk->pending_ring[index];
> > > -               page = xen_netbk_alloc_page(netbk, pending_idx);
> > > +               page = alloc_page(GFP_KERNEL|__GFP_COLD);
> >
> > Possibly for future enhancement we could use netdev_alloc_frag() and
> > take advantage of higher order pages here?
>
> Sure, but I think that deserves a separate patch. This patch is more of
> a fix than an improvement.

Right, by "future enhancement" I meant a future patch, not a change to
this one.

> > > +                       gop->flags = GNTCOPY_source_gref;
> > > +
> > > +                       gop->source.u.ref = txp->gref;
> > > +                       gop->source.domid = vif->domid;
> > > +                       gop->source.offset = txp->offset;
> > > +
> > > +                       gop->dest.domid = DOMID_SELF;
> > > +
> > > +                       gop->dest.offset = dst_offset;
> > > +                       gop->dest.u.gmfn = virt_to_mfn(page_address(page));
> > > +
> > > +                       if (dst_offset + txp->size > PAGE_SIZE) {
> > > +                               /* This page can only merge a portion of txp */
> > > +                               gop->len = PAGE_SIZE - dst_offset;
> > > +                               txp->offset += gop->len;
> > > +                               txp->size -= gop->len;
> > > +                               dst_offset += gop->len; /* == PAGE_SIZE, will quit loop */
> >
> > In this case we don't touch pending_cons etc? Is that because when we
> > hit this case we always go around again and will eventually hit the else
> > case below, ensuring we do the pending_cons stuff exactly once for the
> > tail end of the buffer? I think that is OK, but it is worthy of a
> > comment.
>
> Yes, we cannot touch pending_cons in this case, because this slot needs
> to be dealt with in the next pass of the loop, using another gop.
>
> > What happens if we hit SKB_FRAG_MAX right as we hit this case? Might we
> > fail to respond to the final request which has been split in this way?
>
> As stated above, we won't hit i >= MAX_SKB_FRAGS.

OK, same question for i == MAX_SKB_FRAGS - 1 then ;-) Or whatever condition
would be needed for us to exit here without ever getting to this else clause.

Ian.
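
To make the control flow in that last exchange easier to follow, below is a
stand-alone toy model of the coalescing loop as described in the quoted
hunks. It is a paraphrase under stated assumptions, not the patch itself:
the names, the constants, and the honest-frontend assumption that the slot
sizes sum to at most 64K are all illustrative. It shows why the split branch
never consumes a slot, why every slot is nevertheless consumed exactly once
by the tail-end branch (the pending_cons/response analogue), and why the
destination frag count stays within the budget.

    #include <assert.h>
    #include <stdio.h>

    #define TOY_PAGE_SIZE    4096u
    #define TOY_MAX_PACKET  65536u
    #define TOY_FRAG_BUDGET (TOY_MAX_PACKET / TOY_PAGE_SIZE + 1)

    struct toy_slot {
            unsigned int offset;    /* offset into the source page */
            unsigned int size;      /* bytes still to copy from this slot */
    };

    /* Pack source slots into full destination "frags", splitting a slot when
     * it crosses a page boundary.  A slot is consumed (the pending_cons++
     * analogue) exactly once, when its tail end is copied. */
    static unsigned int coalesce(struct toy_slot *slots, unsigned int nr_slots)
    {
            unsigned int i = 0, j = 0, consumed = 0;

            while (j < nr_slots) {
                    unsigned int dst_offset = 0;

                    assert(i < TOY_FRAG_BUDGET);    /* the invariant discussed above */

                    while (dst_offset < TOY_PAGE_SIZE && j < nr_slots) {
                            struct toy_slot *txp = &slots[j];
                            unsigned int len;

                            if (dst_offset + txp->size > TOY_PAGE_SIZE) {
                                    /* Split: copy what fits, keep the slot
                                     * for the next destination frag. */
                                    len = TOY_PAGE_SIZE - dst_offset;
                                    txp->offset += len;
                                    txp->size -= len;
                            } else {
                                    /* Tail end of the slot: consume it now. */
                                    len = txp->size;
                                    txp->size = 0;
                                    consumed++;
                                    j++;
                            }
                            dst_offset += len;
                    }
                    i++;    /* one destination frag filled */
            }

            assert(consumed == nr_slots);   /* every request is answered once */
            return i;
    }

    int main(void)
    {
            struct toy_slot slots[18];
            unsigned int k, total = 0, nr = 0;

            /* 17 awkwardly sized slots plus a final one, totalling exactly 64K. */
            for (k = 0; k < 17; k++) {
                    slots[nr].offset = 100;
                    slots[nr].size = 3000;
                    total += slots[nr++].size;
            }
            slots[nr].offset = 0;
            slots[nr].size = TOY_MAX_PACKET - total;
            nr++;

            printf("used %u destination frags (budget %u)\n",
                   coalesce(slots, nr), TOY_FRAG_BUDGET);
            return 0;
    }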