
Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used



>>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@xxxxxx> wrote:
>--- a/drivers/net/xen-netback/netback.c        2012-01-25 19:39:32.000000000 
>-0500
>+++ b/drivers/net/xen-netback/netback.c        2012-08-12 15:50:50.000000000 
>-0400
>@@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> 
>               count += nr_frags + 1;
> 
>+              /*
>+               * The logic here should be somewhat similar to
>+               * xen_netbk_count_skb_slots. In case of larger MTU size,

Is there a reason why you can't simply use that function then?
Afaict it's being used on the very same skb before it gets put on
rx_queue already anyway.
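
To illustrate — purely a sketch, assuming xen_netbk_count_skb_slots() keeps its
current (vif, skb) signature and that the vif for this skb is in scope at this
point in the loop — the accounting could simply reuse that helper:

		/* Let the existing helper count slots for the linear
		 * head (which may span several pages), the frags and
		 * the extra GSO slot, instead of open-coding parts of
		 * that logic here.
		 */
		count += xen_netbk_count_skb_slots(vif, skb);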

>+               * skb head length may be more than a PAGE_SIZE. We need to
>+               * consider ring slots consumed by that data. If we do not,
>+               * then within this loop itself we end up consuming more meta
>+               * slots, triggering the BUG_ON below. With this fix we may
>+               * end up iterating through xen_netbk_rx_action multiple times
>+               * instead of crashing the netback thread.
>+               */
>+
>+
>+              count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

This now over-accounts by one I think (due to the "+ 1" above;
the calculation here really is meant to replace that increment).
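
Concretely, the head accounting would fold into the existing increment rather
than being added on top of it — just a sketch of the arithmetic, not a tested
change:

		/* One slot per frag, plus however many slots the
		 * (possibly multi-page) linear head needs; this
		 * replaces the unconditional "+ 1" rather than
		 * being added on top of it.
		 */
		count += nr_frags + DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

		if (skb_shinfo(skb)->gso_size)
			count++;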

Jan

>+
>+              if (skb_shinfo(skb)->gso_size)
>+                      count++;
>+
>               __skb_queue_tail(&rxq, skb);
> 
>               /* Filled the batch queue? */


