
RE: [Xen-devel] Too many VCPUs cause high CPU utilization in domU




 
> From: kevin.tian@xxxxxxxxx
> To: tinnycloud@xxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx
> CC: george.dunlap@xxxxxxxxxxxxx
> Date: Fri, 20 May 2011 07:29:55 +0800
> Subject: RE: [Xen-devel] Too many VCPUs cause high CPU utilization in domU
>
> >From: MaoXiaoyun [mailto:tinnycloud@xxxxxxxxxxx]
> >Sent: Friday, May 20, 2011 12:24 AM
> >>
> >> does same thing happen if you launch B/C/D after A?
> >>
> > 
> >From the test results, not really; all domains' CPU utilization is low.
> >It looks like this only happens while domain A is booting, and it results in a quite long boot time.
>
> One possible reason is lock contention at boot time. If the lock holder
> is preempted unexpectedly and the lock waiter keeps consuming cycles, the
> boot progress could be slow.
>
> Which kernel version are you using? Does a different kernel version expose the
> same problem?
>
 
Early on, it was reported that a VM with a large number of VCPUs showed high CPU utilization and
a very slow response; the kernel version then was 2.6.31.13.
 
I am still investigating that, but the issue I report in this mail was found on 2.6.32.36.
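 
To check the lock-contention theory at least partially, a first step is to confirm that the cycles are
really burnt inside domain A itself and not in dom0 while A boots. A minimal sketch (assuming the stock
xentop that ships with Xen 4.0; "A" is a placeholder for the real domain name, and the flag spellings
should be double-checked against "xentop --help" on this box):
 
    # batch mode, one sample per second, 30 samples, taken while domain A is booting
    xentop -b -d 1 -i 30 > /tmp/xentop-during-boot.log
 
    # afterwards, pick out domain A's lines to follow its CPU(%) over time
    grep -w A /tmp/xentop-during-boot.log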
 

> > 
> >One thing to correct: today, even after I destroyed B/C/D, domain A has kept consuming 800% CPU for quite a long time, up to
> >the moment I am writing this mail.
>
> So this phenomenon is intermittent? In your earlier mail A is back to normal
> after you destroyed B/C/D, and this time the slowness continues.
>
> >Another strange thing is that all VCPUs of domain A seem to run only on even-numbered physical CPUs (that is, 0, 2, 4, ...),
> >which explains why the CPU utilization is 800%.  But I don't know whether this is by design.
>
> Are there any hard limitations you added in the configuration file? And do you
> observe this weird affinity all the time, or only occasionally? It looks strange to me
> that the scheduler would consistently keep such affinity if you don't assign it explicitly.
>
In fact I have two machines for testing, M1 and M2; for now, the issue only shows up on M1.
I don't think I have any special configuration. After comparing the "xm info" output of M1 and M2, I found
that M1 has nr_nodes of 2 while M2 only has 1.
This may explain why all of domain A's VCPUs end up on even-numbered physical CPUs,
and it might also be what makes the guest boot slowly.
What do you think of this?
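 
As a quick check (a sketch only, using the stock xm toolstack of Xen 4.0; "A" below is just a placeholder
for the real domain name), the VCPU-to-physical-CPU placement and the affinity mask can be listed, and
one VCPU can be re-pinned by hand to see whether the scheduler really keeps it on the even-numbered CPUs:
 
    # show which physical CPU each VCPU of domain A runs on, and its CPU affinity
    xm vcpu-list A
 
    # experimentally allow VCPU 0 of domain A to run on any physical CPU of both nodes
    xm vcpu-pin A 0 0-15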
 
thanks.
 
 
=================M1 output=====================
release                : 2.6.32.36xen
version                : #1 SMP Fri May 20 15:27:10 CST 2011
machine                : x86_64
nr_cpus                : 16
nr_nodes               : 2
cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 2400
hw_caps                : bfebfbff:28100800:00000000:00001b40:009ce3bd:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 24567
free_memory            : 11898
node_to_cpu            : node0:0,2,4,6,8,10,12,14
                         node1:1,3,5,7,9,11,13,15
node_to_memory         : node0:3997
                         node1:7901
node_to_dma32_mem      : node0:2995
                         node1:0
max_node_id            : 1
xen_major              : 4
xen_minor              : 0
xen_extra              : .1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : msi=1 iommu=off x2apic=off console=com1,vga com1=115200,8n1 noreboot dom0_mem=5630M dom0_max_vcpus=4 dom0_vcpus_pin cpuidle=0 cpufreq=none no-xsave
 
=================M2 output=====================
release                : 2.6.32.36xen
version                : #1 SMP Fri May 20 15:27:10 CST 2011
machine                : x86_64
nr_cpus                : 16
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 2400
hw_caps                : bfebfbff:28100800:00000000:00001b40:009ce3bd:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 24567
free_memory            : 11898
node_to_cpu            : node0:0-15
node_to_memory         : node0:11898
node_to_dma32_mem      : node0:2995
max_node_id            : 0
xen_major              : 4
xen_minor              : 0
xen_extra              : .1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : msi=1 iommu=off x2apic=off console=com1,vga com1=115200,8n1 noreboot dom0_mem=max:5630M dom0_max_vcpus=4 dom0_vcpus_pin cpuidle=0 cpufreq=none no-xsave
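 
If it is xend's NUMA placement that confines the new domain to node0 on M1 (I have not verified that this
is what 4.0.1 actually does, so treat it as an assumption), widening the allowed CPU set in the guest
configuration should make the difference visible. A minimal sketch, assuming an xm config file named
/etc/xen/domA.cfg (placeholder name and values):
 
    # /etc/xen/domA.cfg -- only the lines relevant here
    vcpus = 8            # whatever VCPU count domain A currently uses
    # let the VCPUs run on every physical CPU, i.e. on both NUMA nodes of M1
    cpus  = "0-15"
 
After recreating the domain with "xm create /etc/xen/domA.cfg", "xm vcpu-list" should show whether the
VCPUs still stay on the even-numbered CPUs only.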

> you may want to run xenoprof to sample domain A and see its hot spots, or
> use xentrace to trace VM-exit events and deduce the cause from another angle.
>
> Thanks
> Kevin
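 
For the xentrace suggestion, something along these lines is what I would try (a sketch only; 0x0008f000
is intended as the event mask for the HVM trace class, and the exact options should be checked against
the xentrace man page on this box):
 
    # collect HVM (VM-exit) trace records while domain A is busy; stop with ^C after ~10 seconds
    xentrace -e 0x0008f000 /tmp/domA.trace
 
    # convert the binary trace to text with the formats file shipped in the Xen tools
    xentrace_format <path-to-formats-file> < /tmp/domA.trace > /tmp/domA.txt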

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

