
Re: [Xen-devel] VM memory allocation speed with cs 26056



On 11/12/2012 01:25 PM, Keir Fraser wrote:
> On 12/11/2012 15:01, "Zhigang Wang" <zhigang.x.wang@xxxxxxxxxx> wrote:
>
>> Hi Keir/Jan,
>>
>> Recently I got a chance to access a big machine (2 TB memory / 160 CPUs),
>> and I tested your patch:
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56
>>
>> Attached is the result.
> The PVM result is weird: there is a small-ish slowdown for small domains,
> which becomes a very large percentage slowdown as domain memory increases,
> and then turns into a *speedup* as the memory size gets very large indeed.
>
> What are the error bars like on these measurements, I wonder? One thing we
> could do to allow PV guests doing 4k-at-a-time allocations through
> alloc_heap_pages() to benefit from the TLB-flush improvements is to pull the
> filtering-and-flush out into populate_physmap() and increase_reservation().
> This is listed as a todo in the original patch (26056).
>
> To be honest, I don't know why the original patch would make PV domain
> creation slower, and certainly not by a varying percentage depending on
> domain memory size!
>
>  -- Keir
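
For readers unfamiliar with the refactoring Keir suggests above, the following
is a minimal, self-contained C sketch of the batching idea: instead of
filtering and flushing the TLB inside the allocator for every 4k page, the
caller accumulates a flush filter across the whole batch and flushes once at
the end. All names and the CPU-mask model here are hypothetical stand-ins,
not the actual Xen code in alloc_heap_pages()/populate_physmap().

/*
 * Illustrative sketch only (not Xen source): compares one filtered
 * TLB flush per 4k allocation against accumulating a CPU flush mask
 * across a whole batch and flushing once at the end.
 */
#include <stdio.h>
#include <stdint.h>

#define NR_CPUS 64

/* Pretend each page was last used on some CPU whose stale TLB
 * entries must be flushed before the page is handed out. */
static unsigned int page_last_cpu(unsigned int pfn)
{
    return pfn % NR_CPUS;
}

static unsigned int flush_count;

static void tlb_flush_mask(uint64_t cpu_mask)
{
    if (cpu_mask)
        flush_count++;          /* stand-in for a real IPI + flush */
}

/* Old shape: one filtered flush inside the allocator, per page. */
static void alloc_pages_flush_each(unsigned int nr_pages)
{
    for (unsigned int pfn = 0; pfn < nr_pages; pfn++)
        tlb_flush_mask(1ULL << page_last_cpu(pfn));
}

/* New shape: the caller accumulates the filter across the batch
 * and issues a single flush at the end. */
static void alloc_pages_flush_batched(unsigned int nr_pages)
{
    uint64_t mask = 0;

    for (unsigned int pfn = 0; pfn < nr_pages; pfn++)
        mask |= 1ULL << page_last_cpu(pfn);

    tlb_flush_mask(mask);
}

int main(void)
{
    flush_count = 0;
    alloc_pages_flush_each(1u << 20);       /* 4 GiB of 4k pages */
    printf("per-page flushes: %u\n", flush_count);

    flush_count = 0;
    alloc_pages_flush_batched(1u << 20);
    printf("batched flushes:  %u\n", flush_count);
    return 0;
}

A PV guest populating memory 4k at a time would otherwise pay one filtered
flush per page; hoisting the flush into populate_physmap() and
increase_reservation() reduces that to one flush per batch, which is
presumably where the speedup for large domains would come from.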
I ran the test a second time, and the result (attached) looks promising.

I think the strange result was due to the order of testing:
start_physical_machine -> test_hvm -> test_pvm.

This time, I did: start_physical_machine -> test_pvm -> test_hvm.

You can see that the PVM memory allocation speed is not affected by your
patch this time.

So I believe this patch is in good shape now.

Thanks,

Zhigang

Attachment: result.pdf
Description: Adobe PDF document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

