
Re: [Xen-devel] compound skb frag pages appearing in start_xmit



On Tue, 2012-10-09 at 15:40 +0100, Ian Campbell wrote:
> On Tue, 2012-10-09 at 15:27 +0100, Eric Dumazet wrote:
> > On Tue, 2012-10-09 at 15:17 +0100, Ian Campbell wrote:
> > 
> > > Does the higher order pages effectively reduce the number of frags which
> > > are in use? e.g if MAX_SKB_FRAGS is 16, then for order-0 pages you could
> > > have 64K worth of frag data.
> > > 
> > > If we switch to order-3 pages everywhere then can the skb contain 512K
> > > of data, or does the effective maximum number of frags in an skb reduce
> > > to 2?
> > 
> > effective number of frags reduces to 2 or 3
> > 
> > (We still limit GSO packets to ~63536 bytes)
> 
> Great! Then I think the fix is more or less trivial...
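
For reference, a quick sanity check of the arithmetic above, assuming
4KiB base pages and taking the GSO limit as roughly 64KiB. This is
just an illustrative standalone program, not code from the patch:

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define GSO_MAX   65536UL       /* assumed; the thread says "~63536" */

int main(void)
{
        unsigned long frag = PAGE_SIZE << 3;    /* order-3: 32768 bytes */

        /* Two full order-3 frags already cover the whole GSO payload;
         * an unaligned start or a partially filled page makes it
         * three in practice. */
        printf("%lu bytes per frag => %lu frags per GSO skb\n",
               frag, (GSO_MAX + frag - 1) / frag);
        return 0;
}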

The following seems to work for me.

I haven't tackled netfront yet.

8<--------------------------------------------------------------

From 551e42e3dd203f2eb97cb082985013bb33b8f020 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date: Tue, 9 Oct 2012 15:51:20 +0100
Subject: [PATCH] xen: netback: handle compound page fragments on transmit.

An SKB paged fragment can consist of a compound page with order > 0.
However the netchannel protocol deals only in PAGE_SIZE frames.

Handle this in netbk_gop_frag_copy and xen_netbk_count_skb_slots by
iterating over the frames which make up the page.

Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
Cc: Eric Dumazet <eric.dumazet@xxxxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx>
Cc: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
---
 drivers/net/xen-netback/netback.c |   40 ++++++++++++++++++++++++++++++++----
 1 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 4ebfcf3..d747e30 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -335,21 +335,35 @@ unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb)
 
        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
                unsigned long size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
+               unsigned long offset = skb_shinfo(skb)->frags[i].page_offset;
                unsigned long bytes;
+
+               offset &= ~PAGE_MASK;
+
                while (size > 0) {
+                       BUG_ON(offset >= PAGE_SIZE);
                        BUG_ON(copy_off > MAX_BUFFER_OFFSET);
 
-                       if (start_new_rx_buffer(copy_off, size, 0)) {
+                       bytes = PAGE_SIZE - offset;
+
+                       if (bytes > size)
+                               bytes = size;
+
+                       if (start_new_rx_buffer(copy_off, bytes, 0)) {
                                count++;
                                copy_off = 0;
                        }
 
-                       bytes = size;
                        if (copy_off + bytes > MAX_BUFFER_OFFSET)
                                bytes = MAX_BUFFER_OFFSET - copy_off;
 
                        copy_off += bytes;
+
+                       offset += bytes;
                        size -= bytes;
+
+                       if (offset == PAGE_SIZE)
+                               offset = 0;
                }
        }
        return count;
@@ -403,14 +417,24 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
        unsigned long bytes;
 
        /* Data must not cross a page boundary. */
-       BUG_ON(size + offset > PAGE_SIZE);
+       BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));
 
        meta = npo->meta + npo->meta_prod - 1;
 
+       /* Skip unused frames from start of page */
+       page += offset >> PAGE_SHIFT;
+       offset &= ~PAGE_MASK;
+
        while (size > 0) {
+               BUG_ON(offset >= PAGE_SIZE);
                BUG_ON(npo->copy_off > MAX_BUFFER_OFFSET);
 
-               if (start_new_rx_buffer(npo->copy_off, size, *head)) {
+               bytes = PAGE_SIZE - offset;
+
+               if (bytes > size)
+                       bytes = size;
+
+               if (start_new_rx_buffer(npo->copy_off, bytes, *head)) {
                        /*
                         * Netfront requires there to be some data in the head
                         * buffer.
@@ -420,7 +444,6 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
                        meta = get_next_rx_buffer(vif, npo);
                }
 
-               bytes = size;
                if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
                        bytes = MAX_BUFFER_OFFSET - npo->copy_off;
 
@@ -453,6 +476,13 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
                offset += bytes;
                size -= bytes;
 
+               /* Next frame */
+               if (offset == PAGE_SIZE) {
+                       BUG_ON(!PageCompound(page));
+                       page++;
+                       offset = 0;
+               }
+
                /* Leave a gap for the GSO descriptor. */
                if (*head && skb_shinfo(skb)->gso_size && !vif->gso_prefix)
                        vif->rx.req_cons++;
-- 
1.7.2.5
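
For anyone wanting to play with the frame-walking logic outside the
driver, here is a simplified userspace sketch of the same loop shape.
MAX_BUFFER_OFFSET is assumed to equal PAGE_SIZE and the
start_new_rx_buffer() policy is reduced to "buffer full", so this is
only an approximation of what netback does, not the patch itself:

#include <stdio.h>

#define PAGE_SIZE         4096UL
#define MAX_BUFFER_OFFSET PAGE_SIZE     /* assumed for the sketch */

static unsigned count_slots(unsigned long offset, unsigned long size)
{
        unsigned long copy_off = 0;
        unsigned count = 1;

        offset &= PAGE_SIZE - 1;        /* skip whole leading frames */

        while (size > 0) {
                /* Never let one copy cross a PAGE_SIZE frame boundary. */
                unsigned long bytes = PAGE_SIZE - offset;

                if (bytes > size)
                        bytes = size;

                if (copy_off == MAX_BUFFER_OFFSET) {    /* buffer full */
                        count++;
                        copy_off = 0;
                }
                if (copy_off + bytes > MAX_BUFFER_OFFSET)
                        bytes = MAX_BUFFER_OFFSET - copy_off;

                copy_off += bytes;
                offset += bytes;
                size -= bytes;

                if (offset == PAGE_SIZE)        /* move to the next frame */
                        offset = 0;
        }
        return count;
}

int main(void)
{
        /* e.g. an order-3 (32KiB) fragment starting 100 bytes in;
         * expect 8 slots with the numbers above. */
        printf("slots: %u\n", count_slots(100, 32768));
        return 0;
}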



