[Xen-users] Xen version 4.3.2-rc1 netback CPU usage is too high
Hi. I posted about this many months ago. I wanted to test the latest 4.3.x series of Xen, but unfortunately the issue is still there: netback CPU usage is excessive when I run zmap in a PV guest. I capped the scan at only 10 Mbps:

    zmap -p 80 -B 10M -o results.txt --interface=eth0 --blacklist-file=/etc/zmap/blacklist_non_us.conf

    Mar 04 22:15:20.108 [INFO] zmap: started
    Mar 04 22:15:20.108 [INFO] zmap: zmap selected output module: csv
    0:01 0%; send: 14914 14.8 Kp/s (14.3 Kp/s avg); recv: 375 374 p/s (359 p/s avg); drops: 0 p/s (0 p/s avg); hits: 2.51%
    0:02 0%; send: 29775 14.9 Kp/s (14.6 Kp/s avg); recv: 826 450 p/s (404 p/s avg); drops: 0 p/s (0 p/s avg); hits: 2.77%
    0:03 0%; send: 44664 14.9 Kp/s (14.7 Kp/s avg); recv: 1279 452 p/s (420 p/s avg); drops: 0 p/s (0 p/s avg); hits: 2.86%
    0:04 0%; send: 59563 14.9 Kp/s (14.7 Kp/s avg); recv: 1718 438 p/s (424 p/s avg); drops: 0 p/s (0 p/s avg); hits: 2.88%

So 10 Mbps at roughly 15k packets/second makes netback use 90% of a CPU; see the attachment for more details. Dom0 is a dual Xeon X5450 box on a 1 Gbps network.

On bare metal (a Xeon E3-1230 V2) I have verified that zmap can easily sustain 450k packets/second (300 Mbps upload) using about 150% CPU (1.5 cores).

My problem is that I would like to virtualize a few VMs that run zmap on that Xeon E3-1230 V2 machine, but I cannot, because the issue above limits me to barely 10 Mbps.

The Dom0 kernel is:

    Linux nix 3.2.55-grsec #2 SMP Tue Mar 4 14:49:29 EET 2014 x86_64 GNU/Linux

PS. I have of course tweaked the sysctl settings to support such high packet rates, so the kernel on bare metal is not the issue. I am not saying Xen is bad, but the CPU usage is far too high relative to the packet rate. Has anyone tested something similar on VMware?

Attachment:
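To put a number on the overhead being described, here is a back-of-the-envelope calculation from the figures quoted above (the ~18x ratio is my own arithmetic, not a measurement from the original report):

```python
# Approximate CPU cost per packet, from the numbers in this post.
# Through netback: 0.90 cores at ~15,000 packets/second.
# Bare metal:      1.5 cores at ~450,000 packets/second.
netback_us = 0.90 / 15_000 * 1e6   # CPU-microseconds per packet via netback
bare_us    = 1.5 / 450_000 * 1e6   # CPU-microseconds per packet on bare metal
ratio      = netback_us / bare_us  # per-packet overhead factor

print(round(netback_us, 1), round(bare_us, 1), round(ratio))  # → 60.0 3.3 18
```

That is roughly 60 µs of CPU per packet through netback versus about 3.3 µs on bare metal, an ~18x per-packet cost, which is why the sender cannot get anywhere near line rate inside the guest.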
netback.png

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users