Re: [Xen-devel] [PATCH 2 of 2 RFC] xl: allow for moving the domain's memory when changing vcpu affinity
On Fri, 2012-07-06 at 13:53 +0100, George Dunlap wrote:
> On 06/07/12 10:54, Dario Faggioli wrote:
> > By introducing a new '-M' option to the `xl vcpu-pin' command. The actual
> > memory "movement" is achieved by suspending the domain to a temporary file
> > and resuming it with the new vcpu-affinity.
>
> Hmm... this will work and be reliable, but it seems a bit clunky.
>
If I can ask, the idea or the implementation? :-)

> Long term we want to be able to do node migration in the background
> without shutting down a VM, right?
>
Definitely, and we also want to do that automatically, according to some
load/performance/whatever run-time measurement, without any user
intervention. However ...

> If that can be done in the 4.3 timeframe, then it seems unnecessary to
> implement something like this.
>
... I think something like this could still be useful. IOW, I think it
would be worthwhile to have both the automatic memory migration happening
in the background and something like this explicit, do-it-all-now
mechanism, as they serve different purposes.

It's a bit like cpu scheduling, where you have free tasks/vcpus that run
wherever the scheduler wants, but you also have pinning, if at some point
you want to confine them to some subset of cpus for whatever reason. Also,
as with pinning, you can do it at domain creation time, but then you might
change your mind and you'd need something to make the system reflect your
actual needs. So, if you allow me: initial vcpu pinning is like automatic
NUMA placement, automatic memory migration happening in the background is
like "free scheduling", and this feature is like what you get when running
`xl vcpu-pin'. (I hope the comparison helped in clarifying my view rather
than making it even more obscure :-)).

Also, as an example, what happens if you want to create a new VM and you
have enough memory, but not all in a single node because of memory
fragmentation? Ideally, that won't happen thanks to the automatic memory
migration, but that is not guaranteed to be possible (there might be
pinned VMs, cpupools, and all sorts of other constraints), or to happen
effectively and soon enough (it will be a heuristic after all). In such a
situation, I figure this feature could be a useful tool for a system
administrator.

Then again, I'm talking about the feature. The implementation might well
change, and I really expect that being able to migrate memory in the
background will help beautify it (the implementation) quite a bit.

Sorry for writing so much. :-P

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
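
[For illustration, the following is roughly what the proposed `xl vcpu-pin -M'
option is meant to automate, expressed as manual xl commands. The domain name,
file paths, and CPU range below are made up, and the sketch assumes that
`xl restore' given an updated config (with a new cpus= setting) will allocate
the guest's memory afresh so NUMA placement can follow the new affinity:]

    # Suspend the guest and dump its state to a (temporary) file.
    xl save mydomain /tmp/mydomain.chk

    # Edit the guest's config so that cpus="4-7" (the new affinity).
    # Memory is allocated anew at restore time, which is where the
    # "movement" actually happens.

    # Resume the guest from the saved state, using the updated config.
    xl restore /etc/xen/mydomain.cfg /tmp/mydomain.chk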