 
	
Re: [Xen-devel] [PATCH RFC] xen-netback: calculate the number of slots required for large MTU vifs
On 2013-7-11 14:01, annie li wrote:
> On 2013-7-11 13:14, Matt Wilson wrote:
>> On Wed, Jul 10, 2013 at 08:37:03PM +0100, Wei Liu wrote:
>>> On Wed, Jul 10, 2013 at 09:13:33AM +0100, Wei Liu wrote:
>>>> On Tue, Jul 09, 2013 at 10:40:59PM +0000, Matt Wilson wrote:
>>>>> From: Xi Xiong <xixiong@xxxxxxxxxx>
>>>>>
>>>>> [ note: I've just cherry picked this onto net-next, and only
>>>>>   compile tested. This is an RFC only. -msw ]
>>>>
>>>> Should probably rebase it on net.git because it is a bug fix.
>>>
>>> Let's worry about that later...
>>
>> *nod*
>>
>>>>> Currently the number of RX slots required to transmit a SKB to
>>>>> xen-netfront can be miscalculated when an interface uses a MTU
>>>>> larger than PAGE_SIZE. If the slot calculation is wrong,
>>>>> xen-netback can pause the queue indefinitely or reuse slots. The
>>>>> former manifests as a loss of connectivity to the guest (which can
>>>>> be restored by lowering the MTU set on the interface). The latter
>>>>> manifests with "Bad grant reference" messages from Xen such as:
>>>>>
>>>>>   (XEN) grant_table.c:1797:d0 Bad grant reference 264241157
>>>>>
>>>>> and kernel messages within the guest such as:
>>>>>
>>>>>   [  180.419567] net eth0: Invalid extra type: 112
>>>>>   [  180.868620] net eth0: rx->offset: 0, size: 4294967295
>>>>>   [  180.868629] net eth0: rx->offset: 0, size: 4294967295
>>>>>
>>>>> BUG_ON() assertions can also be hit if RX slots are exhausted
>>>>> while handling a SKB.
>>>>>
>>>>> This patch changes xen_netbk_rx_action() to count the number of RX
>>>>> slots actually consumed by netbk_gop_skb() instead of using
>>>>> nr_frags + 1. This prevents under-counting the number of RX slots
>>>>> consumed when a SKB has a large linear buffer.
>>>>>
>>>>> Additionally, we now store the estimated number of RX slots
>>>>> required to handle a SKB in the cb overlay. This value is used to
>>>>> determine if the next SKB in the queue can be processed.
>>>>>
>>>>> Finally, the logic in start_new_rx_buffer() can cause RX slots to
>>>>> be wasted when setting up copy grant table operations for SKBs
>>>>> with large linear buffers. For example, a SKB with skb_headlen()
>>>>> equal to 8157 bytes that starts 64 bytes 64 bytes from the start
>>>>> of the page will
>>>>
>>>> Duplicated "64 bytes".
>>>>
>>>> And this change looks like an improvement not a bug fix. Probably
>>>> submit a separate patch for this?
>>
>> Argh, I knew it was in there somewhere (since you pointed it out in
>> Dublin :-). Maybe it could be a separate patch. I think the
>> description is also a bit confusing. I'll work on rewording it.
>>
>>>>> consume three RX slots instead of two. This patch changes the
>>>>> "head" parameter to netbk_gop_frag_copy() to act as a flag. When
>>>>> set, start_new_rx_buffer() will always place as much data as
>>>>> possible into each RX slot.
>>>>>
>>>>> Signed-off-by: Xi Xiong <xixiong@xxxxxxxxxx>
>>>>> Reviewed-by: Matt Wilson <msw@xxxxxxxxxx>
>>>>> [ msw: minor code cleanups, rewrote commit message, adjusted
>>>>>   code to count RX slots instead of meta structures ]
>>>>> Signed-off-by: Matt Wilson <msw@xxxxxxxxxx>
>>>>> Cc: Annie Li <annie.li@xxxxxxxxxx>
>>>>> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
>>>>> Cc: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
>>>>> Cc: netdev@xxxxxxxxxxxxxxx
>>>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
>>>>> ---
>>>>>  drivers/net/xen-netback/netback.c |   51 ++++++++++++++-----------
>>>>>  1 files changed, 31 insertions(+), 20 deletions(-)
>>>>>
>>>>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c

Sorry, I forgot the offset == MAX_BUFFER_OFFSET case and misunderstood
your patch, please ignore my last comments. Your patch keeps the
original DIV_ROUND_UP and changes the mechanism in netbk_gop_frag_copy
to make the slot count match xen_netbk_count_skb_slots. All roads lead
to Rome! :-)

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
 
 