
Re: [Xen-devel] "tcp: refine TSO autosizing" causes performance regression on Xen



On 04/15/2015 07:17 PM, Eric Dumazet wrote:
> Do not expect me to fight bufferbloat alone. Be part of the challenge,
> instead of trying to get back to proven bad solutions.

I tried that.  I wrote a description of what I thought the situation
was, so that you could correct me if my understanding was wrong, and
then what I thought we could do about it.  You apparently didn't even
read it, but just pointed me to a single cryptic comment that doesn't
give me enough information to actually figure out what the situation is.

We all agree that bufferbloat is a problem for everybody, and I can
definitely understand the desire to actually make the situation better
rather than dying the death of a thousand exceptions.

If you want help fighting bufferbloat, you have to educate people to
help you; or alternately, if you don't want to bother educating people,
you have to fight it alone -- or lose the battle due to having a
thousand exceptions.

So, back to TSQ limits.  What's so magical about 2 packets being *in the
device itself*?  And what do 1ms, or 2*64k packets (the default for
tcp_limit_output_bytes), have to do with it?
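
(For concreteness, here is my reading of how tcp_write_xmit() computes
that limit after "tcp: refine TSO autosizing" -- a userspace sketch of
the arithmetic as I understand it, not a verbatim copy of the kernel
code, and the truesize and rate numbers below are my own:

    /* Sketch of the TSQ byte-limit arithmetic as I read it:
     * sk_pacing_rate is in bytes/sec, so rate >> 10 is roughly the
     * bytes sent in ~1ms (1/1024 s), floored at 2 packets' truesize
     * and capped by tcp_limit_output_bytes (default 131072 = 2*64K).
     */
    #include <stdio.h>
    #include <stdint.h>

    #define TCP_LIMIT_OUTPUT_BYTES 131072   /* sysctl default, 2*64K */

    static uint64_t tsq_limit(uint64_t rate_bytes_per_sec, uint64_t truesize)
    {
        uint64_t limit = rate_bytes_per_sec >> 10;  /* ~1ms of data */
        if (limit < 2 * truesize)
            limit = 2 * truesize;                   /* floor: 2 packets */
        if (limit > TCP_LIMIT_OUTPUT_BYTES)
            limit = TCP_LIMIT_OUTPUT_BYTES;         /* sysctl cap */
        return limit;
    }

    int main(void)
    {
        /* ~1500-byte frames (truesize ~2KB) at 100Mb/s, 1Gb/s, 10Gb/s */
        uint64_t rates[] = { 12500000, 125000000, 1250000000 };
        for (int i = 0; i < 3; i++)
            printf("rate %10llu B/s -> limit %7llu bytes\n",
                   (unsigned long long)rates[i],
                   (unsigned long long)tsq_limit(rates[i], 2048));
        return 0;
    }

So as far as I can tell, the "1ms" is just rate >> 10, and the 2*64k
cap only bites above roughly gigabit rates.)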

Your comment lists three benefits:
1. better RTT estimation
2. faster recovery
3. high rates

#3 is just marketing fluff; it's also contradicted by the statement that
immediately follows it -- i.e., there are drivers for which the
limitation does *not* give high rates.

#1, as far as I can tell, has to do with measuring the *actual* minimal
round trip time of an empty pipe, rather than the round trip time you
get when there's 512MB of packets in the device buffer.  If a device has
a large internal buffer, then having a large number of packets
outstanding means that the measured RTT is skewed.
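
(Back-of-the-envelope, with my own numbers rather than anything from
the thread: the skew is simply queued_bytes / drain_rate.

    /* Extra delay a standing queue adds to every RTT sample:
     * queuing_delay = queued_bytes / drain_rate.  Illustrative
     * numbers only.
     */
    #include <stdio.h>

    int main(void)
    {
        double rate = 125000000.0;                  /* 1Gb/s = 125MB/s */
        double queued[] = { 131072, 1e6, 512e6 };   /* 128KB, 1MB, 512MB */
        for (int i = 0; i < 3; i++)
            printf("%9.0f bytes queued -> +%.3f ms per RTT sample\n",
                   queued[i], queued[i] / rate * 1e3);
        return 0;
    }

At 1Gb/s, 512MB of buffered packets inflates every RTT sample by about
4 seconds, which makes the estimate useless.)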

The goal here, I take it, is to have this "pipe" *exactly* full; having
it significantly more than "full" is what leads to bufferbloat.
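
(To put a number on "exactly full": the pipe holds rate * RTT.  At
1Gb/s with a 100us RTT -- my illustrative numbers again -- that is
125MB/s * 100us, or roughly 12.5KB in flight; anything beyond that is
sitting in a queue somewhere, not in the pipe.)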

#2 sounds like you're saying that if there are too many packets
outstanding when you discover that you need to adjust things, it
takes a long time for your changes to have an effect; i.e., if you have
5ms of data in the pipe, it will take at least 5ms for your reduced
transmission rate to actually have an effect.

Is that accurate, or have I misunderstood something?

 -George



 

