
Re: [Xen-devel] [PATCH v2] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}



On Fri, 2015-10-23 at 11:33 +0100, Julien Grall wrote:
> The last parameter of alloc_domheap_page{s,} contains the memory
> flags, not the order of the allocation.
> 
> Use 0 for the call in p2m_pod_set_cache_target, as it was before
> 1069d63c5ef2510d08b83b2171af660e5bb18c63 "x86/mm/p2m: use defines for
> page sizes". Note that PAGE_ORDER_4K is also equal to 0, so the
> behavior stays the same.
> 
> For the call in p2m_pod_offline_or_broken_replace we want to allocate
> the new page on the same NUMA node as the previous page, so retrieve
> the NUMA node and pass it in the memory flags.
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> 
> ---
> 
> Note that the patch has only been build tested.
> 
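For reference, here is a rough sketch of what the two call sites
described above end up looking like. It is reconstructed from the
description rather than copied from the patch, so treat the surrounding
lines as approximate; MEMF_node(), phys_to_nid() and page_to_maddr()
are existing Xen helpers:

    /* p2m_pod_set_cache_target(): the last argument of
     * alloc_domheap_page() is memflags, not an order, so pass 0
     * (no special flags) for the 4K allocation.  PAGE_ORDER_4K
     * happens to be 0 as well, which is why the old code still
     * behaved correctly. */
    page = alloc_domheap_page(d, 0);

    /* p2m_pod_offline_or_broken_replace(): note which NUMA node the
     * old page lives on before freeing it, then request the
     * replacement on the same node via the MEMF_node() memflag. */
    nodeid_t node = phys_to_nid(page_to_maddr(p));

    free_domheap_page(p);
    p = alloc_domheap_page(d, MEMF_node(node));
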
I've done some basic testing. That means I:
 - created an HVM guest with memory < maxmem
 - played with `xl mem-set' and `xl mem-max' on it
 - locally migrated it
 - played with `xl mem-set' and `xl mem-max' on it again
 - shut it down

All done on a NUMA host, with the guest's memory moving (during the
'play' phases) above and below the amount of RAM present in each NUMA
node.
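
Concretely, the sequence was something along these lines (the domain
name and sizes are made up for illustration; the commands are the ones
mentioned above):

    xl create hvm-guest.cfg           # memory=2048, maxmem=4096
    xl mem-max hvm-guest 4096m        # (re)set the ballooning ceiling
    xl mem-set hvm-guest 1024m        # balloon down...
    xl mem-set hvm-guest 3584m        # ...and back up, crossing what
                                      # a single node can hold
    xl migrate hvm-guest localhost    # local migration
    xl mem-set hvm-guest 2048m        # play again after migrating
    xl shutdown hvm-guest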

I'm not sure how I should trigger and test memory hotunplug, nor
whether my testbox supports it at all.

Since it seems that memory hotunplug is what really needed testing, I'm
not sure it's appropriate to add the following tag:

Tested-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

but I'll let you guys (Jan, mainly, I guess) decide. If the above is
deemed enough, feel free to stick it there, if not, fine anyway. :-)

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



 

