
Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots properly when larger MTU sizes are used


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: "Palagummi, Siva" <Siva.Palagummi@xxxxxx>
  • Date: Thu, 30 Aug 2012 10:26:23 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 30 Aug 2012 10:26:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac2F4HBfCFIrW+OHQRWCwjPvwOEhAgAd+rmAABARjCA=
  • Thread-topic: [PATCH RFC V2] xen/netback: Count ring slots properly when larger MTU sizes are used

Ian,

I was using the MS Exchange client, which converts tabs into spaces and so on;
that's why I sent the patch as an attachment. I will find a way to send it
inline and resubmit. Thanks for reviewing it.

--Siva



> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> Sent: Thursday, August 30, 2012 1:37 PM
> To: Palagummi, Siva
> Cc: xen-devel@xxxxxxxxxxxxx
> Subject: Re: [PATCH RFC V2] xen/netback: Count ring slots properly when
> larger MTU sizes are used
> 
> On Wed, 2012-08-29 at 13:21 +0100, Palagummi, Siva wrote:
> > This patch contains the modifications that are discussed in thread
> > http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html
> 
> Thanks.
> 
> Please can you find a way to include your patches inline rather than as
> attachments; it makes replying for review much easier.
> Documentation/email-clients.txt has some hints for various clients, or
> you can just use the "git send-email" command (perhaps with "git
> format-patch").
> 
> You should also CC the netdev@xxxxxxxxxxxxxxx list.
> 
> > Instead of using max_required_rx_slots, I used the count that we
> > already have in hand to verify whether we have enough room in the batch
> > queue for the next skb. Please let me know if that is not appropriate.
> > Things worked fine in my environment. Under heavy load we now seem to
> > be consuming most of the slots in the queue, with no BUG_ON :-)
> 
> 
> > From: Siva Palagummi <Siva.Palagummi@xxxxxx>
> >
> > The count variable in xen_netbk_rx_action needs to be incremented
> > correctly to take into account the extra slots required when
> > skb_headlen is greater than PAGE_SIZE, as happens when larger MTU
> > values are used. Without this change,
> > BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)) causes the netback
> > thread to exit.
> >
> > The fix is to stash the count already computed in
> > xen_netbk_count_skb_slots in skb_cb_overlay and use it directly in
> > xen_netbk_rx_action.
> >
> > Also improved the check for whether the batch queue is full.
> >
> > Also merged a change from a patch created for the
> > xen_netbk_count_skb_slots function, as per the thread
> > http://lists.xen.org/archives/html/xen-devel/2012-05/msg01864.html
> >
> > The problem is seen with the Linux 3.2.2 kernel on an Intel 10Gbps network.
> >
> >
> > Signed-off-by: Siva Palagummi <Siva.Palagummi@xxxxxx>
> > ---
> > diff -uprN a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > --- a/drivers/net/xen-netback/netback.c     2012-01-25 19:39:32.000000000 -0500
> > +++ b/drivers/net/xen-netback/netback.c     2012-08-28 17:31:22.000000000 -0400
> > @@ -80,6 +80,11 @@ union page_ext {
> >     void *mapping;
> >  };
> >
> > +struct skb_cb_overlay {
> > +   int meta_slots_used;
> > +   int count;
> > +};
> > +
> >  struct xen_netbk {
> >     wait_queue_head_t wq;
> >     struct task_struct *task;
> > @@ -324,9 +329,9 @@ unsigned int xen_netbk_count_skb_slots(s
> >  {
> >     unsigned int count;
> >     int i, copy_off;
> > +   struct skb_cb_overlay *sco;
> >
> > -   count = DIV_ROUND_UP(
> > -                   offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> > +   count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> 
> This hunk appears to be upstream already (see
> e26b203ede31fffd52571a5ba607a26c79dc5c0d). Which tree are you working
> against? You should either base patches on Linus' branch or on the
> net-next branch.
> 
> Other than this the patch looks good, thanks.
> 
> >
> >     copy_off = skb_headlen(skb) % PAGE_SIZE;
> >
> > @@ -352,6 +357,8 @@ unsigned int xen_netbk_count_skb_slots(s
> >                     size -= bytes;
> >             }
> >     }
> > +   sco = (struct skb_cb_overlay *)skb->cb;
> > +   sco->count = count;
> >     return count;
> >  }
> >
> > @@ -586,9 +593,6 @@ static void netbk_add_frag_responses(str
> >     }
> >  }
> >
> > -struct skb_cb_overlay {
> > -   int meta_slots_used;
> > -};
> >
> >  static void xen_netbk_rx_action(struct xen_netbk *netbk)
> >  {
> > @@ -621,12 +625,16 @@ static void xen_netbk_rx_action(struct x
> >             sco = (struct skb_cb_overlay *)skb->cb;
> >             sco->meta_slots_used = netbk_gop_skb(skb, &npo);
> >
> > -           count += nr_frags + 1;
> > +           count += sco->count;
> >
> >             __skb_queue_tail(&rxq, skb);
> >
> > +           skb = skb_peek(&netbk->rx_queue);
> > +           if (skb == NULL)
> > +                   break;
> > +           sco = (struct skb_cb_overlay *)skb->cb;
> >             /* Filled the batch queue? */
> > -           if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > +           if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> >                     break;
> >     }
> >
> 
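
[Editor's note: as an aside for readers following the thread, below is a
minimal userspace model of the slot accounting the patch relies on. It is
not netback code; the structure, sizes, and helper names (mock_skb,
count_skb_slots) are illustrative assumptions. It shows the two ideas in
the patch: a linear area larger than PAGE_SIZE needs
DIV_ROUND_UP(skb_headlen, PAGE_SIZE) ring slots rather than one, and the
batch loop can stop based on the actual slot count of the next queued skb
instead of the pessimistic "count + MAX_SKB_FRAGS" bound.]

#include <stdio.h>

#define PAGE_SIZE               4096
#define XEN_NETIF_RX_RING_SIZE  256
#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

/* Simplified stand-in for the sk_buff fields that matter here. */
struct mock_skb {
	unsigned int headlen;      /* linear area length (skb_headlen) */
	unsigned int nr_frags;     /* number of paged fragments */
	unsigned int frag_size[8]; /* bytes in each fragment */
};

/* Slots one skb needs: the linear area can span several pages when
 * MTU > PAGE_SIZE, plus one slot per page of each fragment. (The real
 * xen_netbk_count_skb_slots also packs fragment bytes via copy_off,
 * so this per-fragment arithmetic slightly over-counts.) */
static unsigned int count_skb_slots(const struct mock_skb *skb)
{
	unsigned int count = DIV_ROUND_UP(skb->headlen, PAGE_SIZE);
	unsigned int i;

	for (i = 0; i < skb->nr_frags; i++)
		count += DIV_ROUND_UP(skb->frag_size[i], PAGE_SIZE);
	return count;
}

int main(void)
{
	struct mock_skb skbs[] = {
		{ .headlen = 9000, .nr_frags = 0 }, /* jumbo-MTU skb: 3 slots, not 1 */
		{ .headlen = 1500, .nr_frags = 2,
		  .frag_size = { 4096, 2048 } },
	};
	unsigned int n = sizeof(skbs) / sizeof(skbs[0]);
	unsigned int count = 0, i = 0;

	while (i < n) {
		count += count_skb_slots(&skbs[i]); /* slots this skb consumes */
		printf("skb %u: running slot total %u\n", i, count);
		i++;

		/* Mirror the patch's skb_peek() check: stop only when the
		 * next skb's actual slot count would overflow the ring. */
		if (i < n &&
		    count + count_skb_slots(&skbs[i]) >= XEN_NETIF_RX_RING_SIZE)
			break;
	}
	return 0;
}

[Again, this only models the accounting under the assumptions above; it is
not the kernel implementation.]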
