
Re: [Xen-devel] [PATCH v2 1/1] xen/netback: correctly calculate required slots of skb.



From: Annie Li <annie.li@xxxxxxxxxx>
Date: Wed, 10 Jul 2013 17:15:11 +0800

> When counting required slots for an skb, netback uses DIV_ROUND_UP directly to
> get the slots required by the header data. This is wrong when the offset of the
> header data within its page is not zero, and is also inconsistent with the
> subsequent calculation of required slots in netbk_gop_skb.
> 
> In netbk_gop_skb, required slots are calculated based on the offset and length
> of the header data within its page. The slot count there can be larger than the
> one calculated earlier in netbk_count_requests. This inconsistency directly
> makes the rx_req_cons_peek and xen_netbk_rx_ring_full judgements wrong.
> 
> This leads to a situation where the ring is actually full, but netback thinks
> it is not and continues to create responses. A response then overlaps a request
> in the ring, so grantcopy picks up a wrong grant reference and reports an
> error, for example "(XEN) grant_table.c:1763:d0 Bad grant reference 2949120";
> the grant reference is an invalid value here. Netback returns
> XEN_NETIF_RSP_ERROR(-1) to netfront when the grant copy status is an error,
> netfront then reads rx->status (which is now -1, not a real data size) and
> reports "kernel: net eth1: rx->offset: 0, size: 4294967295". The issue can be
> reproduced by doing gzip/gunzip on an nfs share with mtu = 9000; the guest
> panics after running such a test for a while.
> 
> This patch is based on 3.10-rc7.
> 
> Signed-off-by: Annie Li <annie.li@xxxxxxxxxx>

This patch looks good to me, but I'd like to see some reviews from other
experts in this area.

In the future I'd really like to see this code either use PAGE_SIZE
everywhere or MAX_BUFFER_OFFSET everywhere, in the buffer chopping
code.

I think using both leads to confusion and makes this code harder to
read.  I prefer MAX_BUFFER_OFFSET because it gives the indication that
what this value represents is the modulus upon which we must chop up
RX buffers in this driver.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel