Re: [Xen-devel] [PATCH V2] xen/netfront: handle compound page fragments on transmit
On Wed, 2012-11-21 at 12:08 +0000, Ian Campbell wrote:
> The max-frag related limitation comes from the "wire" protocol used
> between front and back. As it stands either the frontend or the backend
> is more than likely going to drop the sort of pathological skbs you are
> worried about.
>
> I agree that this absolutely needs to be fixed in the protocol (and I've
> posted a call to arms on this topic on xen-devel) but I'd like to do it
> in a coordinated manner as part of a protocol extension (where the front
> and backend negotiate the maximum number of order-0 pages per Ethernet
> frame they are willing to handle) rather than as a side effect of this
> patch.
>
> So right now I don't want to introduce frontends which default to
> sending increased numbers of pages into the wild, since that makes
> things more complex when we come to extend the protocol.
>
> Perhaps in the short term doing an skb_linearize when we hit this case
> would help; that will turn the pathological skb into a much more normal
> one. It'll be expensive but it should be rare. That assumes you can
> linearize such a large skb, which depends on the ability to allocate
> large order pages, which isn't a given. Herm, maybe that doesn't work
> then.
>

First of all, thanks a lot for all this detailed information. This now
makes sense!

> AFAIK we don't have an existing skb_foo operation which copies an skb,
> including (or only) the frags, with the side effect of aligning and
> coalescing them. Do we?
>

No, we only have the full linearize helper and the skb_try_coalesce()
helper. The TCP stack uses an internal function to collapse several skbs
so that the resulting skbs use a single page; I guess we could generalize
this and make it available to other users.

Thanks!
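To make the slot-accounting issue concrete, here is a minimal sketch in
kernel-style C of the kind of counting the thread is talking about: a frag
that lives inside a compound page can cross several order-0 page boundaries,
so the frontend has to count how many order-0 slots the skb would consume
and fall back (e.g. to skb_linearize()) when that exceeds what the wire
protocol allows. This is only an illustration, not the code under review;
the helper name example_count_frag_slots() and the use of MAX_SKB_FRAGS as
the slot limit are assumptions, and frag->page_offset is the field name of
the 3.x-era skb_frag_t this thread refers to.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

/*
 * Hedged sketch only -- not the actual patch.  Count how many order-0
 * page "slots" the paged frags of an skb would occupy, given that a
 * single frag may describe a range inside a compound (multi-page)
 * allocation.
 */
static int example_count_frag_slots(struct sk_buff *skb)
{
	int i, slots = 0;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
		unsigned long off  = frag->page_offset;
		unsigned long size = skb_frag_size(frag);

		/* Pages touched = last page index - first page index + 1,
		 * i.e. ceil((off + size) / PAGE_SIZE) - floor(off / PAGE_SIZE). */
		slots += DIV_ROUND_UP(off + size, PAGE_SIZE) - off / PAGE_SIZE;
	}
	return slots;
}

A possible use in a transmit path (again illustrative, with MAX_SKB_FRAGS
standing in for whatever limit the front and back ends would negotiate):

	if (example_count_frag_slots(skb) > MAX_SKB_FRAGS &&
	    skb_linearize(skb))	/* may fail: needs a large-order allocation */
		goto drop;

As discussed above, the linearize fallback is expensive and can fail for
very large skbs, which is why a frag-coalescing helper along the lines of
the TCP collapse code would be the nicer long-term answer.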