Re: [Xen-API] [XCP] Frequently-Asked Questions (FAQ) for Dynamic Memory Control (DMC)
On 15/09/10 16:59, George Shuklin wrote:
> Thank you very much for the document.

You're welcome.

> But I would like to point out one specific problem with the current state
> of memory management in XCP. By default XCP (in the Linux catalogue on
> xs-tools.iso) uses a pv_ops kernel, which does not support a pre-inflated
> balloon. I believe this is a serious problem for VMs running under XCP:
> the system simply will not be able to grant a VM more memory than it had
> at startup.

Forgive me for not addressing this point right away. I'm going to research what's going on here and get back to you later.

> The second problem I know of is the way DMC retrieves memory from
> guests... It simply ignores memory usage inside the guest and retrieves
> an equal amount of memory from every running VM.

This description is mostly correct, and by design.

The existing memory ballooning daemon assigns targets to VMs in a proportional way. It attempts to balance targets across all VMs on a host, so that every VM on that host has a target that lies the same proportional distance between dynamic-min and dynamic-max. Furthermore, when asked to start VMs on a host whose physical memory is already full, it will attempt to reclaim the same proportion of each VM's dynamic range. So XCP will attempt to reclaim:

* larger amounts from VMs with wider dynamic ranges;
* smaller amounts from VMs with narrower dynamic ranges; and
* nothing at all from VMs with dynamic-min = dynamic-max.

However, one thing not obvious from the FAQ is that the memory ballooning daemon is a replaceable component, built separately from Xapi and running as a separate service on each XCP host. In principle, it would be possible to write a replacement that assigned targets based on memory pressure information gathered from the guest. I believe that building such a daemon wouldn't be trivial, but it is at least possible. We'd be very happy to hear from anyone who wants to give it a go.

We decided to go with the simpler design for our first release of DMC, since early feedback from customers indicated that, if given the choice between:

* no dynamic memory management at all;
* an earlier release with a proportional solution; or
* a later release with a more complex solution,

most people requested an earlier release with a simpler design.

> Here is a sample: assume we have 5 GiB of memory for 4 VMs (and our
> administrative limits are dynamic-min 256 MiB, dynamic-max 2 GiB). Every
> VM will get about 1.2 GiB of memory. After some time, VM 1 is sitting
> idle and using about 100 MiB of memory, while VM 2 is heavily loaded and
> using about 1.1 GiB. Now we start a 5th VM and DMC rebalances memory: it
> grabs about 200 MiB from every VM to give the new VM its own ~1 GiB of
> memory. When it grabs 200 MiB from the idle machine, nothing bad
> happens... but when it grabs 200 MiB from the 1.2 GiB machine that is
> using 1.1 GiB, either heavy swapping occurs or, really ugly, the OOM
> killer "frees memory" on a heavily loaded server.

Okay. The trick here is to set up the dynamic ranges to match what you wish to achieve in your environment.

First of all, if you're happy to control the memory manually, and if you don't want automatic ballooning, then I'd suggest using target mode for each of your VMs. This still allows you to change memory at run-time, but the system won't change the allocations automatically in response to changing numbers of VMs. (Choosing a specific target is equivalent to choosing a dynamic range where min = max.)
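For reference, here is a rough sketch of how you might pin a VM into target mode (or give it a wider range) through the XenAPI Python bindings. This is only a sketch: the host address, credentials and VM name are placeholders, I'm assuming the VM.set_memory_dynamic_range call here, and the new dynamic range has to lie within the VM's static memory range.

```python
# Illustrative sketch only -- host address, credentials and VM name are
# placeholders; assumes the XenAPI Python bindings and the
# VM.set_memory_dynamic_range call.

import XenAPI

MiB = 1024 * 1024
GiB = 1024 * MiB

session = XenAPI.Session("https://xcp-host.example.com")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("my-vm")[0]

    # "Target mode": pin min == max, so the ballooning daemon never moves
    # this VM's allocation on its own (you can still change it manually).
    session.xenapi.VM.set_memory_dynamic_range(vm, str(1 * GiB), str(1 * GiB))

    # "Dynamic range mode": let the daemon balloon the VM between 1.5 and 2 GiB.
    # session.xenapi.VM.set_memory_dynamic_range(vm, str(int(1.5 * GiB)), str(2 * GiB))
finally:
    session.xenapi.session.logout()
```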
In my opinion, target mode is useful for environments where you know exactly how much memory you want to assign to each VM, but where you want the flexibility to manually alter the memory allocation later on, without rebooting the VM. For example, this might be very useful in cloud computing environments, where the customer has paid for a certain amount of memory, but where you really want the flexibility to upgrade their memory allocation without requiring them to reboot their VM.

Secondly, in your example, your dynamic range is really very wide: 256 MiB -- 2 GiB. As an administrator, when you choose such a dynamic range, you're effectively telling the system: "I'm happy for you to assign memory anywhere in the range 256 MiB -- 2 GiB, and I'm happy for you to vary the amounts purely depending on the number of VMs in the system."

In my opinion, dynamic range mode becomes more useful when you have larger numbers of VMs, all with broadly similar workloads, and when you want to be able to handle temporary spikes in the number of such VMs. A good example is an environment providing virtual desktops to many users. You could set these VMs to all have dynamic ranges of [1.5 GiB ... 2 GiB]. In quiet times, your users would each receive 2 GiB. During peak periods, however, this range allows the system to remove just a little memory from each of your existing users in order to support a few more users.

> This is the reason I always set dynamic-min == dynamic-max: to DISABLE
> this cruel DMC.

I agree that DMC could be cruel. After all, if we're going to be philosophical about this, then it's just a piece of software without a mind :). Having said that, I think its effectiveness really depends on how you set it up and in what scenario you're using it.
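Incidentally, to put some rough numbers on the 5 GiB example: here is a small, purely illustrative sketch of the proportional arithmetic. The function and its simplifications (ignoring dom0's allocation and per-VM memory overheads) are mine, not the actual ballooning daemon's code.

```python
# Purely illustrative model of the proportional balancing described above,
# using the numbers from the example. Ignores dom0 and per-VM overheads;
# this is not the real daemon's code.

MiB = 1024 * 1024
GiB = 1024 * MiB

def balanced_targets(guest_memory, vms):
    """Place every VM's target the same fractional distance between its
    dynamic-min and dynamic-max, so that the targets sum to guest_memory."""
    total_min = sum(dmin for (dmin, dmax) in vms)
    total_range = sum(dmax - dmin for (dmin, dmax) in vms)
    if total_range == 0:
        fraction = 0.0
    else:
        fraction = (guest_memory - total_min) / float(total_range)
    fraction = max(0.0, min(1.0, fraction))        # clamp to the dynamic range
    return [dmin + fraction * (dmax - dmin) for (dmin, dmax) in vms]

guest_memory = 5 * GiB                 # memory available to guests in the example
dynamic_range = (256 * MiB, 2 * GiB)   # the same dynamic-min/max for every VM

for n in (4, 5):
    targets = balanced_targets(guest_memory, [dynamic_range] * n)
    print("%d VMs -> %d MiB each" % (n, targets[0] // MiB))

# 4 VMs -> 1280 MiB each (the ~1.2 GiB in your example).
# 5 VMs -> 1024 MiB each: ~256 MiB is reclaimed from every VM, regardless of
# how much memory each guest is actually using -- which is the behaviour
# you describe.
```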
Thanks for your questions!

Cheers,
Jonathan

-----------------------
Jonathan Knowles
Xen Cloud Platform Team

_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api