
Re: [Xen-devel] [RFC PATCHv1 net-next] xen-netback: always fully coalesce guest Rx packets

On 13/01/15 14:30, Wei Liu wrote:
> On Tue, Jan 13, 2015 at 02:05:17PM +0000, David Vrabel wrote:
>> Always fully coalesce guest Rx packets into the minimum number of ring
>> slots.  Reducing the number of slots per packet has significant
>> performance benefits (e.g., 7.2 Gbit/s to 11 Gbit/s in an off-host
>> receive test).
> Good number.
>> However, this does increase the number of grant ops per packet which
>> decreases performance with some workloads (intrahost VM to VM)
> Do you have figures before and after this change?

Some better (more rigorous) results from Jonathan Davies show no
regressions with full coalescing, even without the grant copy
optimization, and a big improvement to single-stream receive:

                         baseline     Full coalesce
Interhost aggregate      24 Gb/s      24 Gb/s
Interhost VM receive     7.2 Gb/s     11 Gb/s
Intrahost single stream  14 Gb/s      14 Gb/s
Intrahost aggregate      34 Gb/s      34 Gb/s

We did not measure dom0-to-guest traffic, but my ad-hoc measurements
suggest it may be 5-10% slower.  I don't think this is a particularly
important use case, though.


>> /unless/ grant copy has been optimized for adjacent ops with the same
>> source or destination (see "grant-table: defer releasing pages
>> acquired in a grant copy"[1]).
>> Do we need to retain the existing path and make the always coalesce
>> path conditional on a suitable version of Xen?

...I think the answer to this is no.

>> ---
>>  drivers/net/xen-netback/common.h  |    1 -
>>  drivers/net/xen-netback/netback.c |  106 ++-----------------------------------
>>  2 files changed, 3 insertions(+), 104 deletions(-)
> Love the diffstat!

Yes, it's always nice when you delete code and it goes faster... :)


Xen-devel mailing list
