
Re: [Xen-devel] Poor network performance between DomU with multiqueue support

On Tue, Dec 02, 2014 at 11:50:59AM +0000, Zhangleiqiang (Trump) wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@xxxxxxxxxxxxx
> > [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Wei Liu
> > Sent: Tuesday, December 02, 2014 7:02 PM
> > To: zhangleiqiang
> > Cc: wei.liu2@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxx
> > Subject: Re: [Xen-devel] Poor network performance between DomU with
> > multiqueue support
> > 
> > On Tue, Dec 02, 2014 at 04:30:49PM +0800, zhangleiqiang wrote:
> > > Hi, all
> > >     I am testing the performance of the Xen netfront/netback driver
> > > with multi-queue support. The throughput from DomU to remote Dom0 is
> > > 9.2 Gb/s, but the throughput from DomU to remote DomU is only
> > > 3.6 Gb/s, so I think the bottleneck is the path from Dom0 to the
> > > local DomU. However, we have done some testing and found that the
> > > throughput from Dom0 to the local DomU is 5.8 Gb/s.
> > >     And if we send packets from one DomU to three other DomUs on
> > > different hosts simultaneously, the sum of the throughput can reach
> > > 9 Gb/s. It seems the bottleneck is the receiver?
> > >     After some analysis, I found that even when max_queues for
> > > netfront/netback is set to 4, there are some strange results:
> > >     1. In DomU, only one rx queue handles the softirq.
> > 
> > Have you tried binding the irqs to different vcpus?
> Do you mean binding the irqs to different vcpus in DomU? I will try it now.

Yes. Given that you have two backend threads running while only one DomU
vcpu is busy, it smells like a misconfiguration in DomU.

If the phenomenon persists after correctly binding the irqs, you might
want to check that traffic is being steered to the different queues
correctly.
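Not part of the original thread, but a rough sketch of the irq-binding
suggestion above. It assumes the per-queue interrupts show up in
/proc/interrupts with labels like "eth0-q<N>-rx" (the exact names depend
on the kernel and netfront version, so adjust the pattern to what the
guest actually shows):

```shell
#!/bin/sh
# Sketch: pin each rx-queue irq of the netfront device to its own vcpu,
# so the rx softirq work spreads across DomU's vcpus instead of one.
# ASSUMPTION: queue irqs are labelled "eth0-q<N>-rx" in /proc/interrupts.
cpu=0
for irq in $(awk -F: '/eth0-q[0-9]+-rx/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
    # smp_affinity takes a hex cpu bitmask; 1 << cpu selects one vcpu.
    mask=$(printf '%x' $((1 << cpu)))
    echo "$mask" > "/proc/irq/$irq/smp_affinity"
    cpu=$((cpu + 1))
done
```

After running it, `grep eth0-q /proc/interrupts` a few seconds apart
should show each queue's counter advancing on its own vcpu column.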

> > 
> > >     2. In Dom0, only two netback queue processes are scheduled; the
> > > other two aren't.
> > 
> > How many Dom0 vcpus do you have? If it only has two, then only two
> > processes will run at a time.
> Dom0 has 6 vcpus and 6 GB of memory. There is only one DomU running in
> Dom0, so four netback processes are running there (because the
> max_queues param of the netback kernel module is set to 4).
> The phenomenon is that only two of these four netback processes were
> running, with about 70% CPU usage each, while the other two used little
> CPU.
> Is there a hash algorithm that determines which netback process handles
> an incoming packet?

I think that's whatever default algorithm the Linux kernel uses.

We don't currently support other algorithms.
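Again outside the original thread, but a sketch of how to observe that
default steering in practice: sample the per-queue interrupt counts twice
and compare. It assumes the vif's queue irqs are labelled like
"vif1.0-q<N>" in Dom0's /proc/interrupts; the real labels depend on the
domain id and kernel version:

```shell
#!/bin/sh
# Sketch: show which netback queues actually carry traffic by summing
# each queue irq's per-cpu counters. Idle queues barely move between
# two samples; with a skewed flow hash, only some queues advance.
# ASSUMPTION: queue irqs match "vif1.0-q<N>" in /proc/interrupts.
sample() {
    awk '/vif1\.0-q[0-9]/ {
        sum = 0
        for (i = 2; i <= NF; i++) sum += $i + 0  # non-numeric trailing labels add 0
        print $1, sum
    }' /proc/interrupts
}
sample; sleep 5; sample   # compare the two snapshots by hand
```

Since the steering is per-flow, a single iperf stream will only ever
exercise one queue; use several parallel streams (e.g. `iperf -P 4`)
when checking the distribution.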


Xen-devel mailing list