
Re: [Xen-devel] VM memory allocation speed with cs 26056



On 12/11/2012 15:01, "Zhigang Wang" <zhigang.x.wang@xxxxxxxxxx> wrote:

> Hi Keir/Jan,
> 
> Recently I got a chance to access a big machine (2T mem/160 cpus) and I tested
> your patch: http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56
> 
> Attached is the result.

The PVM result is weird: there is a small-ish slowdown for small domains,
which becomes a very large percentage slowdown as domain memory increases, and
then turns into a *speedup* as the memory size gets very large indeed.

What are the error bars like on these measurements, I wonder? One thing we
could do to allow PV guests doing 4k-at-a-time allocations through
alloc_heap_pages() to benefit from the TLB-flush improvements is to pull the
filtering-and-flush out into populate_physmap() and increase_reservation();
see the sketch below. This is listed as a todo in the original patch (26056).
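
For illustration, something along these lines (a rough sketch only, not the
actual patch; the helpers accumulate_needed_flushes() and
allocate_batch_for_pv_guest() are invented for the example, while
tlbflush_filter(), flush_tlb_mask(), cpu_online_map, alloc_domheap_pages()
and the page's tlbflush_timestamp are real Xen interfaces whose exact
signatures vary between versions):

    /* Record, in a caller-owned mask, the CPUs that may still hold stale
     * TLB entries for a freshly allocated page.  Start from all online
     * CPUs and filter out those that have already flushed since the page
     * was last freed. */
    static void accumulate_needed_flushes(cpumask_t *mask,
                                          const struct page_info *pg)
    {
        cpumask_t extra;

        cpumask_copy(&extra, &cpu_online_map);
        tlbflush_filter(extra, pg->tlbflush_timestamp);
        cpumask_or(mask, mask, &extra);
    }

    /* Hypothetical caller shape, roughly what populate_physmap() or
     * increase_reservation() would do per hypercall: allocate the batch
     * of 4k pages, accumulate the flush mask, then flush once at the end
     * instead of once per page inside alloc_heap_pages(). */
    static void allocate_batch_for_pv_guest(struct domain *d, unsigned int nr)
    {
        cpumask_t need_flush;
        unsigned int i;

        cpumask_clear(&need_flush);

        for ( i = 0; i < nr; i++ )
        {
            struct page_info *pg = alloc_domheap_pages(d, 0, 0);

            if ( pg == NULL )
                break;

            /* ... insert the page into the guest physmap as usual ... */

            accumulate_needed_flushes(&need_flush, pg);
        }

        /* One flush covers the whole batch. */
        if ( !cpumask_empty(&need_flush) )
            flush_tlb_mask(&need_flush);
    }

The point is that a PV guest populating memory a page at a time would then
pay for one flush per hypercall rather than one per page.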

To be honest I don't know why the original patch would make PV domain
creation slower, and certainly not by a varying %age depending on domain
memory size!

 -- Keir

> Test environment old:
> 
>       # xm info
>       host                   : ovs-3f-9e-04
>       release                : 2.6.39-300.17.1.el5uek
>       version                : #1 SMP Fri Oct 19 11:30:08 PDT 2012
>       machine                : x86_64
>       nr_cpus                : 160
>       nr_nodes               : 8
>       cores_per_socket       : 10
>       threads_per_core       : 2
>       cpu_mhz                : 2394
>       hw_caps                : bfebfbff:2c100800:00000000:00003f40:02bee3ff:00000000:00000001:00000000
>       virt_caps              : hvm hvm_directio
>       total_memory           : 2097142
>       free_memory            : 2040108
>       free_cpus              : 0
>       xen_major              : 4
>       xen_minor              : 1
>       xen_extra              : .3OVM
>       xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>       xen_scheduler          : credit
>       xen_pagesize           : 4096
>       platform_params        : virt_start=0xffff800000000000
>       xen_changeset          : unavailable
>       xen_commandline        : dom0_mem=31390M no-bootscrub
>       cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)
>       cc_compile_by          : mockbuild
>       cc_compile_domain      : us.oracle.com
>       cc_compile_date        : Fri Oct 19 21:34:08 PDT 2012
>       xend_config_format     : 4
> 
>       # uname -a
>       Linux ovs-3f-9e-04 2.6.39-300.17.1.el5uek #1 SMP Fri Oct 19 11:30:08 PDT
> 2012 x86_64 x86_64 x86_64 GNU/Linux
> 
>       # cat /boot/grub/grub.conf
>       ...
>       kernel /xen.gz dom0_mem=31390M no-bootscrub dom0_vcpus_pin dom0_max_vcpus=32
> 
> Test environment new: old env + cs 26056
> 
> Test script: test-vm-memory-allocation.sh (attached)
> 
> My conclusion from the test:
> 
>   - HVM create time is greatly reduced.
>   - PVM create time is increased dramatically for 4G, 8G, 16G, 32G, 64G, 128G.
>   - HVM/PVM destroy time is not affected.
>   - If most of our customers are using PVM, I think this patch is bad, because
> most VM memory should be under 128G.
>   - If they are using HVM, then this patch is great.
> 
> Questions for discussion:
> 
>   - Did you get the same result?
>   - It seems this result is not ideal. We may need to improve it.
> 
> Please note: I may not have access to the same machine for a while.
> 
> Thanks,
> 
> Zhigang
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

