Re: [Xen-users] XEN 4.3.1 network performance issue


  • To: xen-users@xxxxxxxxxxxxx
  • From: "NiX" <nix@xxxxxxxxxxxxxxxx>
  • Date: Sat, 28 Dec 2013 20:34:09 +0200
  • Delivery-date: Sat, 28 Dec 2013 18:37:07 +0000
  • Importance: Normal
  • List-id: Xen user discussion <xen-users.lists.xen.org>

> On Fri, Dec 27, 2013 at 05:00:16PM +0200, NiX wrote:
>> > On Fri, Dec 27, 2013 at 02:17:19PM +0200, NiX wrote:
>> >> > On Tue, Dec 24, 2013 at 05:06:37PM +0200, NiX wrote:
>> >> >> Hi. I am running a Xen VM in PV mode. This VM has four cores
>> >> >> enabled. The VM network config is as follows:
>> >> >>
>> >> >> vif = ["ip=127.0.0.1,mac=XX:XX:XX:XX:XX:XX,bridge=br0,rate=100Mb/s"]
>> >> >>
>> >> >> Dom0 is a dual Xeon X5450 with 16GB of RAM.
>> >> >>
>> >> >> I can barely zmap at 10 Mbps, ~12k packets/second, on this VM.
>> >> >>
>> >> >
>> >> > My google-fu tells me that zmap sends out a gigantic amount of TCP
>> >> > SYN packets in its default configuration. It is possible the ring
>> >> > size between frontend and backend is the bottleneck. Data copying
>> >> > from frontend to backend might also be a problem.
>> >> >
>> >> > You can probably patch your Dom0 and DomU kernels with multi-page
>> >> > ring support to see if it makes things better.
>> >> >
>> >> > Wei.
>> >> >
>> >>
>> >> I was not able to exceed 10k packets/second even with a lightweight
>> >> equivalent tool, and netback/0 fully utilized one 3GHz core.
>> >>
>> >
>> > Hmm... You're using kthread-based netback. In your use case the
>> > frontend is likely to create an interrupt storm for the backend. Do
>> > you have a breakdown of CPU usage (sy, si, etc.)?
>>
>> I am using Linux nix 3.2.53-grsec #1 SMP, and the server is pretty much
>> idle except when I attempt to zmap. Load average: 0.09 0.06 0.13
>>
>> ps aux | grep netback
>> root      1411  2.6  0.0      0     0 ?        S    Dec09 696:17
>> [netback/0]
>> root      1412  0.0  0.0      0     0 ?        S    Dec09   2:44
>> [netback/1]
>> root      1413  1.1  0.0      0     0 ?        S    Dec09 312:39
>> [netback/2]
>> root      1414  0.0  0.0      0     0 ?        S    Dec09   0:00
>> [netback/3]
>> root      1415  0.0  0.0      0     0 ?        S    Dec09   8:22
>> [netback/4]
>> root      1416  0.0  0.0      0     0 ?        S    Dec09   2:54
>> [netback/5]
>> root      1417  0.0  0.0      0     0 ?        S    Dec09   0:00
>> [netback/6]
>> root      1418  0.0  0.0      0     0 ?        S    Dec09   0:00
>> [netback/7]
>>
>> It seems to fully utilize only netback/0 when I attempt to zmap, and it
>> takes one 3GHz core... Maybe there's something to look at in terms of
>> optimizing?
>>
>
> That's expected, because only one netback thread is necessary to serve
> that VIF.
>
> Could you run 'top' then press 1 and copy the summary area in your
> email?
>
> It should look like this:
>
> top - 15:49:09 up 27 days,  6:14, 16 users,  load average: 5.00, 5.02, 5.08
> Tasks: 259 total,   2 running, 257 sleeping,   0 stopped,   0 zombie
> %Cpu0  :  1.0 us,  1.3 sy, 97.7 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu1  :  0.0 us,  0.3 sy, 99.7 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu2  :  4.0 us, 53.5 sy, 42.5 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu3  :  1.7 us, 40.2 sy, 58.1 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> KiB Mem:   8132308 total,  7385188 used,   747120 free,   160348 buffers
> KiB Swap:        0 total,        0 used,        0 free,  3283276 cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>  3378 colord    20   0  284m 110m 2484 R  98.0  1.4  15502:29 colord-sane
> 25531 weil      20   0 24788 3268 1436 S   2.0  0.0 165:58.78 htop
> 22448 weil      20   0 1010m  97m  24m S   0.7  1.2   4:57.67 chrome
>
> Wei.
>
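For reference, the per-CPU breakdown requested above can also be captured
non-interactively from /proc/stat. This is a sketch, not part of the
original exchange: it reads the cumulative counters, so the percentages
are averages since boot rather than the instantaneous figures top shows.

```python
def cpu_breakdown():
    """Per-CPU time shares (%) since boot, read from /proc/stat."""
    # proc(5) field order: user nice system idle iowait irq softirq steal,
    # which top labels us, ni, sy, id, wa, hi, si, st.
    labels = ("us", "ni", "sy", "id", "wa", "hi", "si", "st")
    stats = {}
    with open("/proc/stat") as f:
        for line in f:
            parts = line.split()
            # Per-CPU rows are "cpu0", "cpu1", ...; skip the aggregate "cpu" row.
            if parts[0].startswith("cpu") and parts[0] != "cpu":
                ticks = [int(x) for x in parts[1:9]]
                total = sum(ticks) or 1
                stats[parts[0]] = {k: 100.0 * v / total
                                   for k, v in zip(labels, ticks)}
    return stats

for cpu, pct in sorted(cpu_breakdown().items()):
    print(cpu, " ".join(f"{k} {pct[k]:.1f}" for k in ("us", "sy", "si", "id")))
```

A sustained high si (softirq) or sy (system) share on the CPU running
netback/0 would support the interrupt-storm theory raised earlier.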

I ran 'zmap -p 80 -B 10M -o results.txt --interface=eth0' and took a
screenshot of top.

0:05 0% (1d11h left); send: 74425 14.9 Kp/s (14.8 Kp/s avg); recv: 2196 492 p/s (436 p/s avg); drops: 0 p/s (0 p/s avg); hits: 2.95

10 Mbps ~ 15k packets/second
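That figure matches the line-rate math for minimum-size frames: each SYN
probe occupies at least 84 bytes on the wire (the 64-byte minimum
Ethernet frame plus the 8-byte preamble and 12-byte inter-frame gap), so
a 10 Mbps cap works out to roughly 14.9k packets/second:

```python
# Minimum on-wire footprint of one SYN probe, in bytes:
# 64-byte minimum Ethernet frame + 8-byte preamble + 12-byte inter-frame gap.
WIRE_BYTES = 64 + 8 + 12  # = 84

rate_bps = 10_000_000  # zmap -B 10M
pps = rate_bps / (WIRE_BYTES * 8)
print(f"{pps:.0f} packets/second")  # → 14881 packets/second
```

So the 14.9 Kp/s send rate zmap reports is simply the configured 10 Mbps
budget divided across minimum-size packets, not a Xen bottleneck.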

The server is a dual Xeon X5450.

Attachment: top_zmap.png
Description: PNG image

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
