
Re: [Xen-devel] netchannel vs MAX_SKB_FRAGS (Was: Re: [PATCH] xen/netfront: handle compound page fragments on transmit)





On 2012-11-21 0:04, Ian Campbell wrote:
> Limiting to just xen-devel folks.
>
> On Tue, 2012-11-20 at 15:54 +0000, Ian Campbell wrote:

>> This highlights a couple of issues; the main one is that implicitly
>> including MAX_SKB_FRAGS in the PV net protocol is just madness. It can
>> and has changed in the past. Someone really needs to (retroactively)
>> figure out a sensible default minimum which front and back must
>> support, and use it in the front and backends instead of
>> MAX_SKB_FRAGS. Probably something derived from the existing 64K limit
>> (Linux has used 17 and 18 as MAX_SKB_FRAGS in the past).
>>
>> Then perhaps implement a feature flag to allow front and backends to
>> negotiate something bigger if they like.
>>
>> It might also be interesting for front and backends to coalesce multiple
>> ring slots into compound pages.
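
For illustration, decoupling the protocol from MAX_SKB_FRAGS could look
something like the sketch below, assuming 4K pages. The constant value,
the "feature-max-slots" xenstore key, and the helper function are all
hypothetical, not the actual protocol or any posted patch:

    #include <xen/xenbus.h>

    /*
     * Illustrative protocol-level minimum, derived from the 64K packet
     * limit (64K/4096 + 2 == 18), which also covers the 17 and 18 that
     * MAX_SKB_FRAGS has been in past kernels.
     */
    #define XEN_NETIF_NR_SLOTS_MIN 18

    /* Hypothetical negotiation: the backend may advertise a larger limit. */
    static unsigned int negotiate_max_slots(struct xenbus_device *dev)
    {
            unsigned int slots;

            if (xenbus_scanf(XBT_NIL, dev->otherend,
                             "feature-max-slots", "%u", &slots) != 1)
                    slots = XEN_NETIF_NR_SLOTS_MIN; /* key absent: assume minimum */

            return max_t(unsigned int, slots, XEN_NETIF_NR_SLOTS_MIN);
    }

Both ends could then size their ring handling against the negotiated
value instead of whatever MAX_SKB_FRAGS happens to be in the running
kernel.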

Yes, MAX_SKB_FRAGS is the maximum number of frags, not the maximum
number of slots required.

But I am not so clear about the whole implementation here. Does it mean
netfront/netback would use something like MAX_SKB_FRAGS * SLOTS_PER_FRAG,
where SLOTS_PER_FRAG can be adjusted for different situations? If
netfront and netback negotiate this value, then a xenstore feature key
is required, right? As mentioned in the other compound page fragments
patch, for non-compound pages this value could be 64K/PAGE_SIZE + 1, but
taking compound pages into account the worst case is around 48 slots (in
the 1+4096+1 case). It is hard to negotiate this between netfront and
netback when it varies per packet.
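
To make that worst case concrete: a frag costs one ring slot per 4K page
its data touches, so a rough per-frag count (illustrative only, assuming
4K pages; slots_for_frag is not a real function in either driver) is:

    /*
     * A frag of `size` bytes starting at byte `offset` within a
     * (possibly compound) page touches this many 4K pages, and each
     * page needs its own ring slot.
     */
    static unsigned int slots_for_frag(unsigned int offset, unsigned int size)
    {
            return (offset % 4096 + size + 4095) / 4096;
    }

In the 1+4096+1 layout each frag is 4098 bytes starting one byte before
a page boundary, so it spans three pages and costs three slots; 16 such
frags carry roughly 64K and need 16 * 3 = 48 slots, versus
64K/4096 + 1 = 17 for the non-compound case.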

Thanks
Annie
> I don't have time for either of those unfortunately.
>
> Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel



 

