
Re: [Xen-devel] XEN_NETIF_NR_SLOTS_MIN



On Mon, Sep 07, 2015 at 09:15:40AM -0600, Jan Beulich wrote:
> >>> On 07.09.15 at 16:56, <wei.liu2@xxxxxxxxxx> wrote:
> > On Fri, Sep 04, 2015 at 06:50:45AM -0600, Jan Beulich wrote:
> >> Wei,
> >> 
> >> coming back to commit 723a375f4e introducing this constant along
> >> with some commentary: First of all - why 18 when in old Linux kernels
> >> MAX_SKB_FRAGS is 18 and the head usually requires another slot?
> > 
> > That indeed didn't count the head slot, which is handled separately. It
> > also doesn't include meta slots.
> > 
> > Maybe the comment needs to be updated to "slots that contain actual
> > data"?
> 
> That would take care of the meta slots, but the head to me also is
> "actual data".
> 

Right, I get what you mean now. Bumping that to 19 seems appropriate.
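
To spell out the accounting (illustrative only -- apart from
MAX_SKB_FRAGS and XEN_NETIF_NR_SLOTS_MIN none of these names exist in
the real headers):

    /* Old Linux kernels define MAX_SKB_FRAGS as 18. */
    #define OLD_MAX_SKB_FRAGS   18

    /*
     * The skb head occupies its own slot on the ring, so a maximally
     * fragmented packet spans 18 + 1 = 19 slots of actual data.
     * Extra (meta) slots, e.g. for GSO information, come on top of
     * that and are not counted against XEN_NETIF_NR_SLOTS_MIN.
     */
    #define SLOTS_WITH_HEAD     (OLD_MAX_SKB_FRAGS + 1)   /* 19 */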

> >> And then - why mostly/only for the (guest) TX path? Don't we have
> >> two compatibility issues:
> >> 1) old netfront (max 18 frags) sending via new netback (max 17
> >> frags)
> > 
> > That's handled by netback by consolidating frags.
> 
> Right - that's the TX path that the description in netif.h refers to.
> 
> >> 2) old netback (max 18 frags) handing received packets to new
> >> frontend (max 17 frags)
> > 
> > That's fine because 18 > 17.
> 
> Exactly because 18 > 17, this is not fine: how would netfront fit
> 18 slots handed to it by netback into 17 frags without special
> adjustments?
> 

I misread it as guest tx while you said guest rx, sorry.
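
To put numbers on the guest RX problem (purely illustrative names):

    #define OLD_NETBACK_MAX_SLOTS  18  /* what an old backend may push */
    #define NEW_MAX_SKB_FRAGS      17  /* frags a newer frontend has   */

    /*
     * An 18-slot packet handed over by an old backend cannot be
     * mapped 1:1 onto a 17-frag skb; the frontend must coalesce at
     * least two slots into one frag, or reject the packet.
     */
    static inline bool rx_needs_coalescing(unsigned int slots)
    {
            return slots > NEW_MAX_SKB_FRAGS;   /* true for 18 */
    }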

> >> I.e. shouldn't modern netback be capable of (guest) sending at least
> >> 19 slots despite there only being 17 frags, and shouldn't similarly
> > 
> > It does -- as said above, the head and extra slots are not counted
> > against that limit.
> > 
> >> modern netfront be able to receive at least 19 slots? In the latter
> >> case - if not, how should netback guess at what number of slots it
> >> can safely forward? (Yes, I am aware of David's 1650d5455b
> > 
> > Note that slot != frag. I admit netif.h conflates the two concepts in a
> > bad way. I think that's because netback has evolved a lot over time.
> > 
> > I'm pretty sure there is a check in netback to not overrun the ring by
> > using too many slots.
> > 
> > If you're talking about the number of frags, that's not at the protocol
> > level. Linux netfront is able to consolidate several slots into a
> > reasonable number of frags and makes sure the number of frags won't
> > exceed the limit.
> > See xennet_fill_frags.
> 
> That's too late - xennet_get_responses() would have rejected such
> a packet already afaict (error message "Too many slots").
> 
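
For reference, that rejection looks roughly like this (paraphrased from
memory of drivers/net/xen-netfront.c, so treat the details as
approximate):

    /*
     * In xennet_get_responses(): "slots" counts the responses
     * consumed for one packet, and "max" is derived from
     * MAX_SKB_FRAGS.  The packet is dropped here, before
     * xennet_fill_frags() gets a chance to coalesce anything.
     */
    if (unlikely(slots > max)) {
            if (net_ratelimit())
                    dev_warn(dev, "Too many slots\n");
            err = -E2BIG;
    }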

Right, that's a bit unfortunate.  We might need to have that at the
protocol level, say, the backend should not push more than X slots to
the frontend -- something like the sketch below.
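
A minimal sketch of what such a negotiation could look like -- note
that "max-rx-slots" is a made-up xenstore key, not an existing
protocol feature:

    /* Frontend advertises how many slots per packet it can accept. */
    xenbus_printf(XBT_NIL, dev->nodename, "max-rx-slots",
                  "%u", MAX_SKB_FRAGS + 1);

    /* The backend would read this key and cap the number of slots it
     * pushes for any single packet accordingly, falling back to the
     * historical 18 when the key is absent. */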

OOI, are you seeing this in the real world? If you have a list of bugs
you've found, that would be easier to check against the source code. I
admit I often find it quite surprising that netback does things in
certain ways -- that's mostly because many assumptions were broken over
the years.

Wei.

> >> relying on hypervisor side improvements available only in 4.6, i.e.
> >> while in general considering this a reasonable approach, I'm not
> >> sure this should be done unconditionally, and hence I'm trying to
> >> figure out reasonable criteria for when to do this on older hypervisors.)
> > 
> > I think with my above explanation it should be safe for you to not
> > include 1650d5455b.
> 
> Without an equivalent of David's change I don't see how netfront
> would cope with what _old_ netback may hand it.
> 
> Jan
