Re: [Xen-users] Poor Windows 2003 + GPLPV performance compared to VMWare
On Tue, 2012-09-18 at 14:06 +0100, Adam Goryachev wrote:
> On 15/09/12 00:53, Adam Goryachev wrote:
> > On 14/09/12 23:30, Ian Campbell wrote:
> >
> >>>>> device_model = '/usr/lib/xen-default/bin/qemu-dm'
> >>>>> localtime = 1
> >>>>> name = "vm1"
> >>>>> cpus = "2,3,4,5" # Which physical CPU's to allow
> >>>> Have you pinned dom0 to use pCPU 1 and/or pCPUs > 6?
> >>> No, how should I pin dom0 to cpu0?
> >> dom0_vcpus_pin as described in
> >> http://xenbits.xen.org/docs/4.2-testing/misc/xen-command-line.html
> > Thanks, I'll need to reboot the dom0 to apply this, will do as soon as
> > this current scheduled task is complete.
> OK, I have pinned dom0 to cpu0, and this had no effect on performance.

> >> You have:
> >>     cpus = "2,3,4,5"
> >> which means "let all the guest's VCPUs run on any of pCPUs 2-5".
> >>
> >> It sounds like what you are asking for above is:
> >>     cpus = [2,3,4,5]
> >> which forces guest vcpu0=>pcpu2, 1=>3, 2=>4 and 3=>5.
> >>
> >> Subtle, I agree.
> > Ugh... OK, I'll give that a try. BTW, it would seem this is different
> > from Xen 4.0 (from Debian stable), where it seems to magically do what
> > I meant to say, or I'm just lucky on those machines :)
> Actually, the above syntax doesn't work:
>     cpus = [2,3,4,5] # Which physical CPU's to allow
>     Error: 'int' object has no attribute 'split'

Is it ["2","3",...] then, I wonder?

> Once I reverted to:
>     cpus = "2,3,4,5"
> I can then boot again, but on reboot I get this:
>
>     xm vcpu-list
>     Name        ID  VCPU  CPU  State  Time(s)  CPU Affinity
>     Domain-0     0     0    0  r--      148.9  0
>     cobweb       6     0    5  ---        0.5  2-5
>     cobweb       6     1    -  --p        0.0  2-5
>     cobweb       6     2    -  --p        0.0  2-5
>     cobweb       6     3    -  --p        0.0  2-5
>
> So it isn't pinning each vcpu to a specific cpu... but I suppose it
> should be smart enough to do it well anyway...
> Performance is still at the same level.

To be honest I wouldn't expect it to make much difference at this stage.

I notice the state is "--p" for all but VCPU0, which means they are
paused. It's probably just that you ran xm vcpu-list before Windows had
booted far enough to bring up the secondary CPUs, but it would be worth
checking that Windows is actually bringing up / using all the CPUs!
Once the VM is fully booted, the state for each vcpu should be either
"r--" (running) or "-b-" (currently blocked). I presume something like
Windows Task Manager will also confirm that all the CPUs are in use.

> I'm really at a bit of a loss on where to go from here... The standard
> performance improvements don't seem to make any difference at all, and
> I'm running out of ideas...
>
> Could you suggest a "standard" tool which would allow me to test disk IO
> performance (this is my initial suspicion for slow performance), and
> also CPU performance (I'm starting to suspect this too now) in both
> Windows (domU), Linux (a domU I can create for testing) and Linux
> (dom0)? Then I can see where performance is lost (CPU/disk) and at what
> layer (dom0/domU) etc.

Fajar's suggestion of fio is a good one.

Ian.
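The dom0_vcpus_pin option discussed above is a hypervisor boot parameter,
not a domU config option, so it goes on Xen's command line in the
bootloader. A minimal sketch for a Debian-style GRUB2 install; the file
location, the GRUB_CMDLINE_XEN variable and the dom0_max_vcpus=1 choice
are assumptions to check against the xen-command-line document linked
above, not something stated in the thread:

    # /etc/default/grub -- give dom0 a single VCPU and pin it to pCPU 0,
    # leaving pCPUs 2-5 free for the guest's "cpus" setting.
    GRUB_CMDLINE_XEN="dom0_max_vcpus=1 dom0_vcpus_pin"

    # Regenerate the GRUB configuration and reboot for it to take effect:
    update-grub
    reboot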
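On the cpus syntax: the "'int' object has no attribute 'split'" error
suggests xend expects each list element to be a string it can parse, so
the per-VCPU form is probably a list of strings rather than bare
integers. A sketch of both approaches, untested here; the domain name
cobweb is taken from the vcpu-list output above:

    # In the domU config file: one affinity spec per guest VCPU,
    # i.e. vcpu0->pcpu2, vcpu1->pcpu3, vcpu2->pcpu4, vcpu3->pcpu5.
    cpus = ["2", "3", "4", "5"]

    # Or pin at runtime from dom0 once the guest is up:
    xm vcpu-pin cobweb 0 2
    xm vcpu-pin cobweb 1 3
    xm vcpu-pin cobweb 2 4
    xm vcpu-pin cobweb 3 5

    # Check the result:
    xm vcpu-list cobweb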
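As a concrete starting point for the disk IO comparison, a minimal fio
run that can be repeated in dom0, in a Linux domU and (if a Windows
build of fio is available, with --ioengine=windowsaio) in the Windows
guest. The file name, size and runtime below are arbitrary choices, not
anything the thread specifies:

    # 4k random read/write against a 1G test file, bypassing the page cache.
    fio --name=randrw-test --filename=fio.test --size=1G \
        --rw=randrw --bs=4k --direct=1 --ioengine=libaio --iodepth=16 \
        --runtime=60 --time_based --group_reporting

    # Large sequential reads for raw throughput.
    fio --name=seqread-test --filename=fio.test --size=1G \
        --rw=read --bs=1M --direct=1 --runtime=60 --time_based

Running the same job at each layer and comparing IOPS and bandwidth
should show whether the loss is in dom0's storage stack or in the
dom0-to-domU path.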