
Re: [Xen-devel] how to move the memory of a domain from one NUMA node to another?



On Mon, 2013-08-12 at 14:22 +0800, butine@xxxxxxxxxx wrote:
> hello, Dario
> 
Hi,

> Your approach about memory migration has some problems:
> 
It sure does. :-)

> (1) the domain must be suspended, and how long is the downtime (100 ms
> or 1 s)?
> 
> 
The downtime is supposed to be very short, and I think it really can be
made so. However, I don't have any measurements about it right now.

That being said, I'm not aware of any approach to memory migration that
does not involve suspending the domain (at least for PV). If you have
any idea, please share it with us. :-)
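Just to give an idea of how I would go about getting actual numbers for
that downtime, below is a minimal dom0-side sketch that times the
pause --> (memory migration) --> unpause window with libxc. The
migration step itself is a stub, since nothing like it exists in libxc
today; only xc_domain_pause()/xc_domain_unpause() are real calls, the
rest is purely hypothetical.

#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <xenctrl.h>

/* Placeholder for the actual memory-migration step: no such libxc call
 * exists today, it just stands in for "move the pages to the new node". */
static int migrate_domain_memory(xc_interface *xch, uint32_t domid,
                                 int target_node)
{
    (void)xch; (void)domid; (void)target_node;
    return 0;
}

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int timed_migration(uint32_t domid, int target_node)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    struct timespec t0, t1;
    int rc;

    if (!xch)
        return -1;

    clock_gettime(CLOCK_MONOTONIC, &t0);

    xc_domain_pause(xch, domid);        /* guest stops running here...   */
    rc = migrate_domain_memory(xch, domid, target_node);
    xc_domain_unpause(xch, domid);      /* ...and starts running again   */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("downtime: %.2f ms (rc=%d)\n", elapsed_ms(t0, t1), rc);

    xc_interface_close(xch);
    return rc;
}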

> (2) will the memory migration affect the ballooning and Tmem mechanisms?
> 
In general, it will "affect" them for sure... Actually, it affects
almost everything! :-O

However, I think you'd have to clarify what you mean by "affect
ballooning" and "affect Tmem". The guest will have a new node-affinity,
so it should get its memory from the new set of nodes when ballooning up
after memory migration, if that is what you were asking. Also, we have
someone working on NUMA-aware ballooning already. As soon as both
projects are mature enough, we'll double-check the interactions.
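To make the "new node-affinity" part concrete, something like the
following is what retargeting the domain's memory allocations looks like
from the toolstack side. It is only a rough sketch, assuming the
libxl_domain_set_nodeaffinity() call from the per-domain node-affinity
work is available in your tree:

#include <libxl.h>
#include <libxl_utils.h>

/* Point domid's node-affinity at new_node only, so that any memory it
 * gets from now on (e.g. when ballooning up) is allocated there. */
int retarget_node_affinity(uint32_t domid, int new_node)
{
    libxl_ctx *ctx = NULL;
    libxl_bitmap nodemap;
    int rc;

    if (libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0, NULL))
        return -1;

    libxl_bitmap_init(&nodemap);
    rc = libxl_node_bitmap_alloc(ctx, &nodemap, 0);
    if (rc)
        goto out;

    libxl_bitmap_set_none(&nodemap);
    libxl_bitmap_set(&nodemap, new_node);

    rc = libxl_domain_set_nodeaffinity(ctx, domid, &nodemap);

 out:
    libxl_bitmap_dispose(&nodemap);
    libxl_ctx_free(ctx);
    return rc;
}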

About Tmem, Dan kindly replied to my original RFC series, saying that
The Good Thing (TM) to do about Tmem in this case is _do_not_touch_ it,
and I agree and plan to try doing so.

Again, if this doesn't answer your question, I'm afraid you will have
to formulate it a bit better, because I am almost sure I am not
getting it.

> And I want to design a way to migrate the memory pages; the process is
> as follows:
> 
> (1) drop the reference to the memory pages of the domain
> 
Ok, and does "drop the reference" also mean deallocate the page? If
yes, where do you want to store its content temporarily? If no, how do
you plan to have the domain allocate double its allowed static_max
amount of memory without the entire Xen ecosystem complaining about it?
Is this something you want to do without suspending the domain? If yes,
how is that possible for a PV guest?

> (2) lock and copy the memory pages to the new node
> 
See above regarding static_max...

> (3) update the page tables and the P2M
> 
See above regarding the need to suspend/quiesce a PV guest...

> (4) bind the vCPUs to the new node.
> This process will not affect the interrupts of the domain.
>
Are you telling or asking? And in any case, what is it that you are
telling or asking?
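Anyway, just to make sure we are talking about the same thing, here is,
very roughly, how I read the flow you are describing. Every helper in it
is a made-up placeholder (stubbed out so that it at least compiles), not
a real Xen interface; the only point is the ordering of the steps and
where the guest needs to be paused:

#include <stdint.h>

typedef uint64_t gfn_t;
struct page   { int node; };               /* stand-in for Xen's page_info */
struct domain { int id; unsigned long nr_pages; };

/* All of these are hypothetical placeholders for the corresponding
 * hypervisor-side operations, NOT real Xen functions. */
static void pause_domain(struct domain *d)               { (void)d; }
static void unpause_domain(struct domain *d)             { (void)d; }
static struct page *page_at(struct domain *d, gfn_t gfn)
{ static struct page p; (void)d; (void)gfn; return &p; }
static struct page *alloc_page_on_node(int node)
{ static struct page p; p.node = node; return &p; }
static void copy_page_contents(struct page *to, struct page *from)
{ (void)to; (void)from; }
static void update_p2m_entry(struct domain *d, gfn_t gfn, struct page *pg)
{ (void)d; (void)gfn; (void)pg; }
static void drop_page_reference(struct domain *d, struct page *pg)
{ (void)d; (void)pg; }
static void set_node_affinity(struct domain *d, int node)
{ (void)d; (void)node; }

int migrate_domain_pages(struct domain *d, int new_node)
{
    gfn_t gfn;

    pause_domain(d);                       /* a PV guest must be quiesced */

    for (gfn = 0; gfn < d->nr_pages; gfn++) {
        struct page *old_pg = page_at(d, gfn);
        struct page *new_pg = alloc_page_on_node(new_node);   /* step (2) */

        if (!new_pg) {
            unpause_domain(d);
            return -1;                     /* target node out of memory   */
        }

        copy_page_contents(new_pg, old_pg);                    /* step (2) */
        update_p2m_entry(d, gfn, new_pg);                      /* step (3) */
        drop_page_reference(d, old_pg);    /* step (1): only after the copy,
                                            * so we never hold more than one
                                            * extra page above static_max  */
    }

    set_node_affinity(d, new_node);        /* step (4): vCPUs and future
                                            * allocations follow the node  */
    unpause_domain(d);
    return 0;
}

As you can see, whether "drop the reference" happens before or after the
copy is exactly where the static_max question above comes from.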

> please provide your suggestions, thanks!!
>
Mmmm... Sorry, but I fail to understand again... Suggestions about
what? All in all, I don't think what you say you want to implement is
much different from what I have in mind (and, still incomplete, in
code).

I'm sure there is something in my approach, both in the design and in
the code, that could be improved, and if you're keen on doing that,
you're very welcome on board. :-D

What I have been able to submit up to now is in the RFC patches that
you've seen. What I have been baking these last weeks I will release as
soon as I am back from vacation (early in September), but it still
follows more or less the same approach (taking into account, of course,
the comments that the RFC got).

If there is something that you think should be done differently (and
better), then it is me who is waiting for your suggestions, and believe
me, I'm very open to them, so do not hesitate to speak up and/or send
your code! :-P

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

