
[Xen-devel] Question about significant network performance difference after pinning RX netback to vcpu0


  • To: xen-devel@xxxxxxxxxxxxx
  • From: trump_zhang <trump_zhang@xxxxxxx>
  • Date: Thu, 8 Jan 2015 10:33:17 +0800 (CST)
  • Delivery-date: Thu, 08 Jan 2015 02:33:57 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

Hi, 
    I am trying to test single-queue networking performance for the upstream netback/netfront, and my test environment is as follows:
    1. Using pkt-gen to send a single UDP flow from one host to a VM running on another Xen host. The two hosts are connected by a 10GbE network (Intel 82599 NIC, which has multiqueue support).
    2. The receiver Xen host runs Xen 4.4, and Dom0's OS is Linux 3.17.4, which already has multiqueue netback support.
    3. The receiver Xen host is a NUMA machine, and cpu0-cpu7 belong to the same node.
    4. The receiver Dom0 has 6 vCPUs and 6G of memory, and its vCPUs/memory are "pinned" to the host by passing the "dom0_vcpu_num=4 dom0_vcpus_pin dom0_mem=6144M" arguments at boot.
    5. The receiver VM has 4 vCPUs and 4G of memory, and its vcpu0 is pinned to pCPU 6, which means vcpu0 runs on the same NUMA node as Dom0 (see the command sketch after this list).
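
    For reference, a rough sketch of how the pinning in steps 4 and 5 can be set up and verified with the xl toolstack (the domain name "domU" below is just a placeholder for my guest):

        # show which pCPU each Dom0 vCPU is pinned to
        # (dom0_vcpus_pin pins Dom0 vCPU N to pCPU N)
        xl vcpu-list Domain-0

        # pin the guest's vcpu0 to pCPU 6, on the same NUMA node as Dom0
        xl vcpu-pin domU 0 6

        # show the host's CPU/NUMA topology to confirm the node boundaries
        xl info -n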

    During testing, I modified the sender host's IP address so that the receiver host and VM behave as follows:
    1. Queue 5 of the NIC in Dom0 handles the flow, which can be confirmed from "/proc/interrupts". As a result, the "ksoftirqd/5" process runs with some CPU usage during testing, which can be confirmed from the output of "top" (see the command sketch after this list).
    2. Queue 2 of the netback vif handles the flow, which can be confirmed from the output of "top" in Dom0 (it shows that the "vif1.0-guest-q2" process runs with some CPU usage).
    3. "ksoftirqd/0" (which runs on vcpu0) in DomU runs with fairly high CPU usage, and it seems that this process handles the RX soft interrupts.

    However, I observe some strange phenomena:
    1. All RX interrupts of the netback vif are delivered only to queue 0, which can be confirmed from "/proc/interrupts" in Dom0. But rather than "ksoftirqd/0", it seems that "ksoftirqd/2" handles the soft interrupts and runs with high CPU usage when the "vif1.0-guest-q2" process runs on vcpu2. Why doesn't ksoftirqd/0 handle the soft interrupts, given that the RX interrupts are delivered to queue 0? I have also tried making another queue of the netback vif (e.g. vif1.0-guest-q1, vif1.0-guest-q3, etc.) handle the flow, but all RX interrupts are still delivered only to vcpu0; I am wondering why.
    2. The RX interrupts are delivered to queue 2 of the vNIC in DomU, but it seems that it is "ksoftirqd/0" rather than "ksoftirqd/2" that handles the soft interrupts.
    3. The throughput of the flow is higher (by as much as 2x) if and only if I "pin" the "vif1.0-guest-q2" process to vcpu0 of Dom0. This is the strangest phenomenon I have found, and I am wondering why (see the command sketch after this list).
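
    Roughly, the checks and pinning in phenomena 1 and 3 can be done with commands like these (the IRQ number <irq> is a placeholder taken from /proc/interrupts, and the affinity mask is just an example):

        # in Dom0: the vif's per-queue IRQ counters, per vCPU
        grep vif /proc/interrupts

        # try steering one vif IRQ away from vcpu0 (mask 0x4 = vcpu2)
        echo 4 > /proc/irq/<irq>/smp_affinity

        # pin the vif1.0-guest-q2 kthread to Dom0's vcpu0 (phenomenon 3)
        taskset -pc 0 $(pgrep -f vif1.0-guest-q2)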

   Could anyone give me a hint about this? If anything is unclear, please tell me and I will provide a more detailed description. Thanks.

-------------------
Best Regards
Trump


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

