RE: [Xen-devel] More network tests with xenoprofile this time
Andrew,

You may want to take a look at the following paper, which is being
presented at VEE'05 (June 11 and 12, 2005):

  http://www.hpl.hp.com/research/dca/system/papers/xenoprof-vee05.pdf

It presents network performance results obtained with xenoprof on Xen
2.0.3. The profile you reported has some similarities with our results,
although the exact numbers differ. That is expected, since you are
running a different version of Xen on different hardware.

We, too, have seen that a significant amount of time is spent handling
interrupts in Xen. We have also seen that a significant amount of time
(+/- 40%) is spent in the hypervisor for the dom1 <-> external case,
measured both at dom1 and at dom0 (in our case we instrumented the
receive side). When we run the benchmark on dom0, the time spent in Xen
drops to +/- 20%. Most of this extra Xen overhead when running a guest
seems to come from the page transfer between domain 0 and the guest
(see table 6 and the discussion in the paper).

The paper omits the complete oprofile reports for brevity. I will be
happy to send you any detailed oprofile report we generated for the
paper if you want to compare it with your results. Just let me know ...

Renato

>> -----Original Message-----
>> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Ian Pratt
>> Sent: Tuesday, May 31, 2005 3:16 PM
>> To: Andrew Theurer; xen-devel@xxxxxxxxxxxxxxxxxxx
>> Subject: RE: [Xen-devel] More network tests with xenoprofile this time
>>
>> > I had a chance to run a couple of the netperf tests with
>> > xenoprofile. I am still having some trouble with
>> > multi-domain profiles (probably user error), but I have been
>> > able to profile dom0 while running 2 types of tests. I was
>> > surprised to see as much as 50% CPU in the hypervisor on these tests:
>> >
>> > netperf tcp_stream, 16k msg size, dom1 -> dom2
>> > dom0 is on cpu0, HT thread 0; dom1 is on cpu1, HT thread 0;
>> > dom2 is on cpu1, HT thread 1.
>>
>> Let's ignore the domU <-> domU results for the moment, as we
>> know about the problem with the lack of batching in this
>> scenario. Let's dig into the dom1 -> external case.
>>
>> First off, are these figures just for CPU 0 HT 0, i.e. just
>> dom0, so we don't see where the time goes in the domU? How is
>> idle time on the CPU reported?
>>
>> Spending 18% of the time handling interrupts in Xen is
>> surprising (at least to me).
>>
>> What interrupt rate are you observing? What are the default
>> tg3 interrupt coalescing settings? What interrupt rate do
>> you get on native? Also, what hypercall rate are you seeing?
>>
>> (It would be good to put this in the context of the rx/tx packet
>> rates.)
>>
>> Is the Ethernet NIC sharing an interrupt with the USB
>> controller, per chance?
>>
>> Seeing find_domain_by_id and copy_from_user so high up the
>> list is pretty surprising.
>>
>> Cheers,
>> Ian
>>
>> > netperf tcp_stream, 16k msg size, dom1 -> external host
>> > dom0 is on cpu0, HT thread 0; dom1 is on cpu1, HT thread 1.
>> >
>> > Throughput is ~940 Mbps, wire speed.
>> >
>> > xenoprofile opreport:
>> >
>> > 4244562 49.9375 xen-unstable-syms
>> > 4110594 48.3614 vmlinux-2.6.11-xen0-up
>> >  132643  1.5606 oprofiled
>> >    4212  0.0496 libc-2.3.3.so
>> >    2892  0.0340 libpython2.3.so.1.0
>> >
>> > xenoprofile opreport -l:
>> >
>> > 828587 9.75 xen-unstable-syms      end_level_ioapic_irq
>> > 712035 8.38 xen-unstable-syms      mask_and_ack_level_ioapic_irq
>> > 370265 4.36 vmlinux-2.6.11-xen0-up net_tx_action
>> > 323797 3.81 vmlinux-2.6.11-xen0-up ohci_irq
>> > 282005 3.32 vmlinux-2.6.11-xen0-up tg3_interrupt
>> > 273161 3.21 xen-unstable-syms      find_domain_by_id
>> > 234726 2.76 xen-unstable-syms      hypercall
>> > 206693 2.43 xen-unstable-syms      do_update_va_mapping
>> > 203758 2.40 xen-unstable-syms      __copy_from_user_ll
>> > 201665 2.37 xen-unstable-syms      do_mmuext_op
>> > 195020 2.29 vmlinux-2.6.11-xen0-up nf_iterate
>> > 184295 2.17 vmlinux-2.6.11-xen0-up nf_hook_slow
>> > 172110 2.02 vmlinux-2.6.11-xen0-up tg3_rx
>> > 164337 1.93 vmlinux-2.6.11-xen0-up net_rx_action
>> > 141999 1.67 xen-unstable-syms      do_mmu_update
>> > 139120 1.64 vmlinux-2.6.11-xen0-up fdb_insert
>> > 122483 1.44 xen-unstable-syms      mod_l1_entry
>> > 122017 1.44 xen-unstable-syms      put_page_from_l1e
>> > 111159 1.31 xen-unstable-syms      get_page_from_l1e
>> > 109921 1.29 xen-unstable-syms      do_IRQ
>> >  99847 1.17 vmlinux-2.6.11-xen0-up br_handle_frame
>> >  99709 1.17 xen-unstable-syms      get_page_type
>> >  93613 1.10 vmlinux-2.6.11-xen0-up kfree
>> >  90885 1.07 vmlinux-2.6.11-xen0-up end_pirq
>> >
>> > -Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
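A note on checking the numbers Ian asks about: the per-IRQ interrupt rate,
and whether the tg3 NIC shares an IRQ line with the USB (OHCI) controller,
can both be read from /proc/interrupts in dom0. The sketch below is purely
illustrative and not from the thread (the 5-second sampling interval is an
arbitrary assumption): it samples /proc/interrupts twice and prints the rate
for each IRQ together with the device names on that line, so a shared
tg3/ohci_hcd interrupt shows up directly.

#!/usr/bin/env python
# Hypothetical helper (not from the original thread): sample /proc/interrupts
# twice in dom0 and print per-IRQ interrupt rates. The device column also
# shows whether the tg3 NIC shares an IRQ line with the OHCI USB controller.
import time

def read_irq_counts():
    # Returns {irq: (total_count, device_names)} parsed from /proc/interrupts.
    f = open('/proc/interrupts')
    lines = f.readlines()
    f.close()
    ncpus = len(lines[0].split())             # header row: CPU0 CPU1 ...
    counts = {}
    for line in lines[1:]:
        fields = line.split()
        if not fields or not fields[0].endswith(':'):
            continue
        irq = fields[0].rstrip(':')
        try:
            total = sum([int(c) for c in fields[1:1 + ncpus]])
        except ValueError:
            continue                           # skip rows such as ERR:
        devices = ' '.join(fields[1 + ncpus:])
        counts[irq] = (total, devices)
    return counts

if __name__ == '__main__':
    interval = 5.0                             # seconds between samples
    before = read_irq_counts()
    time.sleep(interval)
    after = read_irq_counts()
    for irq in sorted(after.keys()):
        total, devices = after[irq]
        delta = total - before.get(irq, (0, ''))[0]
        if delta:
            print('IRQ %s: %.0f interrupts/s  %s' % (irq, delta / interval, devices))

The default tg3 interrupt coalescing parameters should be visible with
"ethtool -c <interface>" (e.g. eth0), assuming the driver exposes its
coalescing settings through ethtool.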