Re: [Xen-devel] Direct I/O to domU seeing a 30% performance hit
Ian,

(Sorry. I sent things from the wrong e-mail address, so you've probably
been getting bounces. This one should work.)

My config is attached. Both dom0 and the domU are SLES 10, so I don't
know why the "idle" performance of the two should be different. The
obvious asymmetry is the disk. Since the disk isn't direct, any disk I/O
by the domU would certainly impact dom0, but I don't think there should
be much, if any. I did run a dom0 test with the domU started but idle,
and there was no real change in dom0's numbers.

What's the best way to gather information about what is going on with
the domains without perturbing them? (Or, at least, perturbing everyone
equally.)

As to the test, I am running netperf 2.4.1 on an outside machine against
both dom0 and the domU. (So the doms are running the netserver portion.)
I was originally running it in the doms against the outside machine, but
when the bad numbers showed up I moved it to the outside machine because
I wondered if the bad numbers were due to something happening to the
system time in domU. The numbers in the "outside" test to domU look
worse.

Thanks,

John Byrne

Ian Pratt wrote:
>> There have been a couple of network receive throughput performance
>> regressions to domUs over time that were subsequently fixed. I think
>> one may have crept in to 3.0.3.
>
> The report was (I believe) with a NIC directly assigned to the domU,
> so not using netfront/back at all.
>
> John: please can you give more details on your config.
>
> Ian
>
>> Are you seeing any dropped packets on the vif associated with your
>> domU in your dom0?
>>
>> If so, propagating changeset 11861 from unstable may help:
>>
>> changeset:   11861:637eace6d5c6
>> user:        kfraser@xxxxxxxxxxxxxxxxxxxxx
>> date:        Mon Oct 23 11:20:37 2006 +0100
>> summary:     [NET] back: Fix packet queuing so that packets are
>>              drained if the
>>
>> In the past, we also had receive throughput issues to domUs that
>> were due to socket buffer size logic, but those were fixed a while
>> ago.
>>
>> Can you send netstat -i output from dom0?
>>
>> Emmanuel.
>>
>> On Mon, Nov 06, 2006 at 09:55:17PM -0800, John Byrne wrote:
>>> I was asked to test direct I/O to a PV domU. Since I had a system
>>> with two NICs, I gave one to a domU and one to dom0. (Each is
>>> running the same kernel: xen 3.0.3 x86_64.)
>>>
>>> I'm running netperf from an outside system to the domU and dom0,
>>> and I am seeing 30% less throughput for the domU vs dom0.
>>>
>>> Is this to be expected? If so, why? If not, does anyone have a
>>> guess as to what I might be doing wrong or what the issue might be?
>>>
>>> Thanks,
>>>
>>> John Byrne

disk = [ 'file:/disk2/vm1/hda,hda,w' ]
memory = 3072
vcpus = 1
builder = 'linux'
name = 'vm1'
#vif = [ 'mac=00:16:3e:55:0a:86' ]
localtime = 0
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
extra = ' TERM=xterm'
#bootloader = '/usr/lib/xen/boot/domUloader.py'
#bootentry = 'hda2:/boot/vmlinuz-xen,/boot/initrd-xen'
kernel = '/disk2/vm1/vmlinuz-xen'
ramdisk = '/disk2/vm1/initrd-xen'
root = '/dev/hda2'
pci = [ '05:00.0' ]
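For concreteness, the test setup John describes (netserver running in
dom0 and the domU, netperf driven from an outside box) would be invoked
roughly as follows. This is a sketch: the target addresses and the
60-second duration are illustrative assumptions, not details from the
thread.

    # On dom0 and on the domU: start the netperf 2.4.1 server side.
    netserver

    # On the outside machine: TCP receive-throughput test against each.
    netperf -H <dom0-addr> -t TCP_STREAM -l 60
    netperf -H <domU-addr> -t TCP_STREAM -l 60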
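Emmanuel's two checks can be run in dom0 roughly like this. The vif
name is an assumption: for a single-interface guest with domain ID 1,
the backend device is typically vif1.0.

    # Per-interface counters; the RX-DRP and TX-DRP columns show drops.
    netstat -i

    # Counters for the guest's backend vif; the "dropped" figures in
    # the ifconfig output correspond to the DRP columns above.
    ifconfig vif1.0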
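Propagating the cited changeset into a 3.0.3 tree would look roughly
like this with Mercurial, which the Xen tree used at the time. A sketch,
assuming a local xen-unstable clone; the patch file name is arbitrary.

    # In the xen-unstable clone: export the fix as a patch.
    hg export 637eace6d5c6 > netback-drain.patch

    # In the 3.0.3 source tree: apply it, then rebuild the dom0 kernel.
    patch -p1 < netback-drain.patch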
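One note on the attached config: the pci = [ '05:00.0' ] line only
assigns the NIC to the domU if dom0 has first hidden the device and
bound it to the PCI backend. With a 3.0.3-era kernel that was typically
done on the dom0 kernel command line, for example (a sketch; it assumes
pciback is compiled into the kernel rather than built as a module):

    # GRUB entry, dom0 kernel line: hide PCI device 05:00.0 from dom0's
    # drivers so it can be handed to the guest.
    module /boot/vmlinuz-xen ro root=... pciback.hide=(05:00.0)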