Re: [Xen-devel] xennet: skb rides the rocket: 20 slots
> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Ian Campbell
> Sent: 08 January 2013 10:06
> To: ANNIE LI
> Cc: Sander Eikelenboom; Konrad Rzeszutek Wilk; xen-devel
> Subject: Re: [Xen-devel] xennet: skb rides the rocket: 20 slots
>
> On Tue, 2013-01-08 at 02:12 +0000, ANNIE LI wrote:
> >
> > >>
> > >> I have added some extra info, but I don't have enough knowledge of
> > >> whether this could/should be prevented from happening?
> > >
> > > MAX_SKB_FRAGS has never, AFAIK, been as big as 19 or 20, so I'm not
> > > sure this can ever have worked.
> > >
> > > These SKBs seem to be pretty big (not quite 64KB), and seemingly most of
> > > the data is contained in a smallish number of frags, which suggests
> > > compound pages and therefore a reasonably modern kernel?
> > >
> > > This probably relates somewhat to the issues described in the
> > > "netchannel vs MAX_SKB_FRAGS" thread last year, or at least the
> > > solution to that would necessarily involve fixing this issue.
> >
> > If netback complains about "Too many frags", then it should be the
> > MAX_SKB_FRAGS limitation in netback that results in dropping packets in
> > netfront. It is possible that other netfronts (Windows?) also hit this.
>
> It's very possible. I rather suspect that non-Linux frontends have
> workarounds (e.g. manual resegmenting etc.) for this case.

Citrix Windows PV drivers certainly do; packets will be copied if they
have more than MAX_SKB_FRAGS fragments. We also choose a maximal TSO
size carefully to try to avoid Windows' network stack sending down more
than MAX_SKB_FRAGS fragments in the first place.

  Paul

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel