
Re: [Xen-devel] netchannel vs MAX_SKB_FRAGS (Was: Re: [PATCH] xen/netfront: handle compound page fragments on transmit)



> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
> bounces@xxxxxxxxxxxxx] On Behalf Of James Harper
> Sent: 23 November 2012 02:11
> To: ANNIE LI; Ian Campbell
> Cc: Konrad Rzeszutek Wilk; xen-devel
> Subject: Re: [Xen-devel] netchannel vs MAX_SKB_FRAGS (Was: Re: [PATCH]
> xen/netfront: handle compound page fragments on transmit)
> 
> >
> > On 2012-11-21 0:04, Ian Campbell wrote:
> > > Limiting to just xen-devel folks.
> > >
> > > On Tue, 2012-11-20 at 15:54 +0000, Ian Campbell wrote:
> > >
> > > This highlights a couple of issues, the main one is that implicitly
> > > including MAX_SKB_FRAGS in the PV net protocol is just madness. It
> > > can and has changed in the past. Someone really needs to
> > > (retroactively) figure out what a sensible default minimum which
> > > front and back must support is and use it in the front and backends
> > > instead of MAX_SKB_FRAGS. Probably something derived from the
> > > existing 64K limit (Linux has used 17 and 18 as MAX_SKB_FRAGS in the
> past).
> > >
> > > Then perhaps implement a feature-flag to allow front and backends to
> > > negotiate something bigger if they like.
> > >
> > > It might also be interesting for front and backends to coalesce
> > > multiple ring slots into compound pages.
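[For concreteness, the arithmetic behind such a minimum, assuming 4K
pages; the constant name below is illustrative, not an existing
protocol definition:

  /*
   * A 64K GSO packet spans 65536/4096 = 16 pages of payload. One
   * extra slot covers a non-page-aligned start (giving 17), and
   * Linux has at times allowed one more on top of that (18), which
   * matches the 17 and 18 values MAX_SKB_FRAGS has taken in the
   * past.
   */
  #define NETIF_NR_SLOTS_MIN (65536 / 4096 + 2)   /* == 18 */
]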
> >
> > Yes, MAX_SKB_FRAGS is the maximum number of frags, not the maximum
> > number of slots required. But I am not clear about the whole
> > implementation here. Does it mean netfront/netback would use
> > something like MAX_SKB_FRAGS * SLOTS_PER_FRAG, with SLOTS_PER_FRAG
> > adjustable for different situations? If netfront and netback
> > negotiate this value, then a xenstore feature key is required,
> > right?
> > As mentioned in the other compound page fragments patch, for
> > non-compound pages this value could be 64K/PAGE_SIZE + 1, but
> > taking compound pages into account the worst case is around 48
> > slots (in the 1+4096+1 case). It is hard to negotiate this between
> > netfront and netback for different packets.
> >
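[To illustrate that compound page worst case: a frag carrying only
~4K of data can still straddle three pages (1 + 4096 + 1 bytes), so
16 such frags would need around 48 slots. A rough sketch of the
per-frag slot count (not the actual netfront code):

  /* Slots needed for one frag: reduce the offset to within its
   * first page, then count every page the data touches. */
  static unsigned int slots_for_frag(unsigned long offset,
                                     unsigned long size)
  {
          offset &= PAGE_SIZE - 1;
          return (offset + size + PAGE_SIZE - 1) / PAGE_SIZE;
  }

  /* e.g. slots_for_frag(4095, 4098) == 3: the 1+4096+1 layout. */
]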
> 
> FWIW, Windows appears to have no real limit on its SG list, and if it's >18
> then netback (or Linux?) just drops such packets. For a 64KB "large" packet,
> the buffer arrangement would likely be eth hdr + ip hdr + tcp hdr +
> (64K - total header size)/PAGE_SIZE + 1 frags. It would be good if the
> documentation of the protocol defined this limitation (which may not even
> exist if dom0 is something other than Linux anyway).
> 
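[Taking a minimal Ethernet + IPv4 + TCP header (14 + 20 + 20 = 54
bytes; my assumption, real headers may be larger with options), that
arrangement works out comfortably under an 18-slot limit:

  /* (64K - total header size)/PAGE_SIZE + 1, with 4K pages */
  unsigned int hdr_len = 14 + 20 + 20;               /* 54 bytes    */
  unsigned int frags = (65536 - hdr_len) / 4096 + 1; /* 15 + 1 = 16 */
  /* 16 data frags + 1 slot for the headers = 17 slots in total */
]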

Given that it's a backend limitation and not a protocol limitation, it's 
probably not for the documentation to define. Instead it may be more useful if 
the protocol itself allowed the backend to notify the frontend of its 
fragmentation constraints (and probably vice versa).
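As a sketch of what that could look like (the key name and fallback
value here are hypothetical, not part of the existing protocol), the
backend could advertise its limit in xenstore during setup:

  /* backend side */
  err = xenbus_printf(XBT_NIL, dev->nodename,
                      "max-tx-slots", "%u", max_tx_slots);

and the frontend could read it, falling back to a conservative
minimum when the key is absent:

  /* frontend side */
  unsigned int max_slots;
  if (xenbus_scanf(XBT_NIL, dev->otherend,
                   "max-tx-slots", "%u", &max_slots) != 1)
          max_slots = 18;     /* assume a protocol minimum */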

  Paul


