
Re: [Xen-devel] Is: SKB_MAX_LEN bites again. Was: Re: bug disabling guest interface



On Tue, Mar 12, 2013 at 08:13:39PM +0000, Wei Liu wrote:
> 
> Actually the copy is done by the hypervisor; I don't know how it is
> possible to coalesce while copying.
> 
> FWIW I came up with an idea. Netback maintains a ring of skb_frag_t
> groups. Each group has NETBK_SKB_MAX_FRAGS frags, so overall the size of
> this ring is MAX_PENDING_REQS * NETBK_SKB_MAX_FRAGS *
> sizeof(skb_frag_t). With MAX_PENDING_REQS == 256, NETBK_SKB_MAX_FRAGS ==
> 20 and sizeof(skb_frag_t) == 16, this ring is 256 * 20 * 16 = 80K large.
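
A minimal sketch of the ring described above, purely for illustration
(netbk_frag_group and frag_ring are made-up names; MAX_PENDING_REQS is
256 in current netback):

#include <linux/skbuff.h>

#define MAX_PENDING_REQS    256
#define NETBK_SKB_MAX_FRAGS 20

/* One group of frags per pending skb; the whole ring is
 * 256 * 20 * sizeof(skb_frag_t) bytes. */
struct netbk_frag_group {
	skb_frag_t frags[NETBK_SKB_MAX_FRAGS];
};

static struct netbk_frag_group frag_ring[MAX_PENDING_REQS];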

Just realized two things: 1. the ring size estimation is wrong; 2. we
don't need an extra ring for this. :-)

We can alter how frags are handled inside netback to solve this issue.
We can store the start pending_idx in skb->frags[0] and the end
pending_idx in skb->frags[1] if the frontend's MAX_SKB_FRAGS is larger
than the backend's.
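
A rough sketch of what that could look like, assuming netback's
existing frag_set_pending_idx() helper, which already stashes a
pending_idx in frag->page.p on the normal path
(netbk_set_pending_range is a hypothetical name):

/* Hypothetical helper: record the [start, end] pending_idx range of an
 * oversized skb in its first two frags; the real frags are assembled
 * later from that range. */
static void netbk_set_pending_range(struct sk_buff *skb,
				    u16 start_idx, u16 end_idx)
{
	struct skb_shared_info *shinfo = skb_shinfo(skb);

	frag_set_pending_idx(&shinfo->frags[0], start_idx);
	frag_set_pending_idx(&shinfo->frags[1], end_idx);
	shinfo->nr_frags = 2;	/* placeholder count, not real frags */
}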


Wei.

> 
> If we detect that the frontend's frags > backend's MAX_SKB_FRAGS, we
> switch to using the new ring, set skb->nr_frags to some specific number
> (say -1), and set frag[0].page.p to the ring index. Then we copy data
> as usual. Later, before xen_netbk_fill_frags, if we detect that this
> skb should be constructed via the frag ring, we accommodate those
> frags into that skb.
> 
> if (skb should be constructed via frag ring)
>     construct_skb();
> else /* normal path */
>     xen_netbk_fill_frags();
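
Concretely, the check in that pseudocode could key off a sentinel, as
in the following sketch (NETBK_FRAGS_IN_RING and both helpers are
made-up names; note nr_frags is a u8, so "-1" would be stored as 0xff):

#define NETBK_FRAGS_IN_RING 0xff	/* assumed sentinel; nr_frags is
					 * a u8, so -1 stores as 0xff */

static void netbk_set_frag_ring_idx(struct sk_buff *skb, u16 ring_idx)
{
	skb_shinfo(skb)->nr_frags = NETBK_FRAGS_IN_RING;
	/* Reuse frag[0].page.p to carry the ring index, the same trick
	 * frag_set_pending_idx() uses for a pending_idx. */
	skb_shinfo(skb)->frags[0].page.p =
		(struct page *)(unsigned long)ring_idx;
}

static bool netbk_skb_uses_frag_ring(struct sk_buff *skb)
{
	return skb_shinfo(skb)->nr_frags == NETBK_FRAGS_IN_RING;
}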
> 
> This idea essentially adds a slow path, though, because
> construct_skb() potentially needs to copy / move data around.
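
To make that copying concrete, here is a small standalone C
illustration of the kind of coalescing construct_skb() would have to
do: packing many small fragments into the fewest page-sized buffers,
which is where the extra data movement comes from (nothing here is
netback code; all names are made up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct frag { const unsigned char *data; size_t len; };

/* Pack 'nfrags' source fragments into page-sized buffers, copying
 * across page boundaries as needed. Returns the number of pages used;
 * 'pages' must have room for ceil(total_len / PAGE_SIZE) entries.
 * Error handling omitted for brevity. */
static size_t coalesce_frags(const struct frag *frags, size_t nfrags,
			     unsigned char **pages)
{
	size_t npages = 0, off = PAGE_SIZE; /* force a new page first time */

	for (size_t i = 0; i < nfrags; i++) {
		const unsigned char *src = frags[i].data;
		size_t left = frags[i].len;

		while (left) {
			if (off == PAGE_SIZE) {	/* current page full */
				pages[npages++] = malloc(PAGE_SIZE);
				off = 0;
			}
			size_t chunk = left < PAGE_SIZE - off ?
				       left : PAGE_SIZE - off;
			memcpy(pages[npages - 1] + off, src, chunk);
			src += chunk;
			off += chunk;
			left -= chunk;
		}
	}
	return npages;
}

int main(void)
{
	static unsigned char a[3000], b[3000]; /* two small fragments */
	struct frag f[] = { { a, sizeof(a) }, { b, sizeof(b) } };
	unsigned char *pages[2];
	size_t n = coalesce_frags(f, 2, pages);

	printf("2 fragments coalesced into %zu page(s)\n", n); /* -> 2 */
	while (n)
		free(pages[--n]);
	return 0;
}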
> 
> 
> Wei.
> 
