Re: [Xen-users] over-allocation of memory
This cannot happen. There is no overselling in Xen, ever. Every experiment with the technology leads to the same conclusion: no oversell. Xen is built around page allocation: if a page is allocated to one VM, it cannot be allocated to any other machine. All of Xen's memory controls work through the balloon driver, a special service (part of the guest kernel) that takes memory from the guest and returns it to the hypervisor, and vice versa.

Xen also has the notion of 'max memory' (maxmem), the theoretical upper limit on how much memory a given domU may take from the hypervisor. If maxmem is larger than the memory actually free in the hypervisor, the domU simply gets no more: it hits the end of the line, with a MemoryError, the OOM killer, etc., exactly as if it were an ordinary computer with non-rubber silicon DRAM modules of limited capacity. The only overcommit Xen permits is that the sum of maxmem over all domains may exceed the host's real memory; it can never hand out more real pages (physical memory) to guests than it actually has. So the condition you are asking about cannot occur.

On Thu, 21/10/2010 at 15:00 -0600, Greg Woods wrote:
> This is probably a really stupid question, but what happens if you have
> more memory allocated to domU's than your physical RAM? Will that cause
> dom0 to crash? DomU's to crash? Or just slow performance due to swap use
> on dom0?
>
> I ask this because we've got high availability clusters running domU's,
> and I want to know how much memory is safe to allocate to domU's given
> that a failure of one server could suddenly cause twice as many domU's
> to be started on the other server. I'd like to know if I really have to
> keep half the RAM reserved for a fairly rare occurrence.
>
> --Greg
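P.S. For anyone who wants to poke at this themselves, here is a rough sketch of the memory/maxmem split described above (the domain name "vm1" and the numbers are made up, and exact xm behavior can vary between Xen versions):

    # In the domU config file: boot with 512 MiB, allow ballooning up to 1024 MiB
    memory = 512
    maxmem = 1024

    # At runtime, ask the balloon driver to grow the guest toward its maxmem:
    xm mem-set vm1 1024

    # Show how much memory each domain actually holds right now:
    xm list

The mem-set only succeeds while the hypervisor still has free pages; once free memory is exhausted, the guest cannot grow any further, no matter how large maxmem is or how large the sum of all the maxmem values gets.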