
Re: [Xen-devel] Significant network performance difference when different NIC MQ handles the packet





On 04/01/15 06:15, Zhangleiqiang (Trump) wrote:
On 31/12/14 09:06, Zhangleiqiang (Trump) wrote:
Hi, all:

        I have used pkt-gen to send UDP packets (1400 bytes) from one Dom0 (A) to
another Dom0 (B); the two hosts are connected by a 10GE network. On the receive
side (B), a different "ksoftirqd/x" process handles the packets during each
test run because the NIC has multiple queues. I find that if "ksoftirqd/0" handles
the packets, the throughput can reach about 7 Gbps. However, when
"ksoftirqd/2" handles the packets, the throughput can only reach about
2 Gbps, and with "ksoftirqd/3" it can even be as low as 1 Gbps.

        I am wondering what the reason is. Could anyone give me a hint
about it?
NUMA topology? Your card might use memory "close" to CPU core 0, but not
to the others?

Thanks for your hint.

I am not sure whether the NUMA topology is the reason, because my server has two nodes,
each of which has 8 CPUs. The output of "numactl --hardware" is as follows:
Can you send the output of "xl info -n", so we can see how Xen sees it?


available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 32090 MB
node 0 free: 31640 MB
node 1 cpus: 8 9 10 11 12 13 14 15
node 1 size: 32311 MB
node 1 free: 31643 MB
node distances:
node   0   1
   0:  10  20
   1:  20  10

I booted dom0 with the "dom0_max_vcpus=6 dom0_vcpu_pin" command line, so I think "ksoftirqd/x" will 
run on CPU "x", e.g. ksoftirqd/0 and ksoftirqd/2 run on cpu0 and cpu2 respectively. If that is so, 
ksoftirqd/0 and ksoftirqd/2 should be the "same distance" from the memory the NIC card uses.
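One way to check that mapping is to look at which CPU column accumulates the interrupts for each NIC queue in /proc/interrupts, since ksoftirqd/N is a per-CPU thread on CPU N. A minimal sketch; the interface name "eth2", the IRQ numbers, and the interrupt counts below are made up for illustration (on a live system, use `grep eth2 /proc/interrupts` instead of the sample text):

```shell
# Made-up excerpt of /proc/interrupts for a multi-queue NIC:
# IRQ:  CPU0     CPU1    CPU2   CPU3   type          name
sample='64:  900000       0       0       0  PCI-MSI-edge  eth2-TxRx-0
65:       0  120000       0       0  PCI-MSI-edge  eth2-TxRx-1
66:       0       0   80000       0  PCI-MSI-edge  eth2-TxRx-2'

# Columns 2..5 are per-CPU interrupt counts; report, for each queue,
# the CPU that serviced most of its interrupts (and hence which
# ksoftirqd/N did the softirq work for that queue).
echo "$sample" | awk '{
    max = $2; cpu = 0
    for (i = 3; i <= 5; i++) if ($i > max) { max = $i; cpu = i - 2 }
    printf "%s -> CPU%d (ksoftirqd/%d)\n", $NF, cpu, cpu
}'
```

If the queue that RSS happens to pick for the flow maps to a CPU that is remote from the NIC's node, that alone could explain a large throughput gap between runs.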

dom0_max_vcpus sets the number of vCPUs for Dom0, but it doesn't necessarily determine on which node they run, although AFAIK Xen tries to keep the vCPUs on the same node.
Also, you should check the NUMA assignment of your NIC:

http://superuser.com/questions/701090/determine-pcie-device-numa-node
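That link boils down to reading the numa_node attribute from the device's sysfs directory. A sketch; the PCI address "0000:04:00.0" and interface name "eth2" are placeholders, and the runnable part below only simulates the sysfs file so the lookup logic can be demonstrated without the hardware:

```shell
# On a live system (PCI address is a placeholder; find yours with lspci):
#   cat /sys/bus/pci/devices/0000:04:00.0/numa_node
# or, via the interface name:
#   cat /sys/class/net/eth2/device/numa_node
# A value of -1 means the platform reported no NUMA node for the device.

# Simulated lookup so the logic can be run anywhere:
dev=$(mktemp -d)              # stands in for the device's sysfs directory
echo 1 > "$dev/numa_node"     # pretend the NIC is attached to node 1

node=$(cat "$dev/numa_node")
if [ "$node" -ge 0 ]; then
    echo "NIC is local to NUMA node $node"
else
    echo "platform reported no NUMA node for this NIC"
fi
rm -r "$dev"
```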

And check this as well if you haven't done so:

http://wiki.xen.org/wiki/Xen_on_NUMA_Machines
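As that page discusses, if the NIC turns out to be local to one node, dom0 can be confined to that node at boot. A sketch of the Xen command line, assuming the NIC sits on node 0 and a Xen version recent enough to support the dom0_nodes option:

```
dom0_max_vcpus=6 dom0_vcpus_pin dom0_nodes=0
```

With dom0's vCPUs and memory kept on the NIC's node, every ksoftirqd/N should then be equally "close" to the receive buffers.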



Zoltan


----------
zhangleiqiang (Trump)

Best Regards


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel



 

