Re: [Xen-devel] domain creation vs querying free memory (xend and xl)
> From: Olaf Hering [mailto:olaf@xxxxxxxxx]
> Subject: Re: [Xen-devel] domain creation vs querying free memory (xend and xl)
>
> On Mon, Oct 01, Dan Magenheimer wrote:

Hi Olaf -- Thanks for the reply.

> > domain. All of this needs math, not locking.
> :
> As IanJ said, the memory handling code in libxl needs such a feature to
> do the math right. The proposed handling of
> sharing/paging/ballooning/PoD/tmem/... in libxl is just a small part of
> it.

Unfortunately, as you observe in some of the cases earlier in your reply, it is more than a math problem for libxl... it is a crystal ball problem. If xl launches a domain D at time T, and it takes N seconds before it has completed asking the hypervisor for all of the memory M that D requires to launch successfully, then xl must determine at time T the maximum memory allocated across all running domains for the future time period between T and T+N. In other words, xl must predict the future.

Clearly this is impossible, especially when page-sharing is not communicating its dynamic allocations (e.g. due to page-splitting) to libxl, tmem is not communicating to libxl the allocations that result from multiple domains simultaneously making tmem hypercalls, PoD is not communicating its allocations to libxl, and in-guest-kernel selfballooning is not communicating allocations to libxl. Only the hypervisor is aware of every dynamic allocation request.

So all libxl can do is guess about the future, because races are going to occur: multiple threads are simultaneously trying to consume a limited resource (pages of memory), and only the hypervisor knows whether there is enough to satisfy all requests. To me, the solution to racing for a shared resource is locking. Naturally, you want the critical path to be as short as possible, and you don't want to lock all instances of the resource (i.e. every page in memory) if you can avoid it.
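The race above can be sketched in a few lines of C. This is a hedged toy model, not Xen code: `free_pages`, `alloc_pages()`, and `run_build()` are hypothetical names, the 800-page domain and 50-page "dynamic" grabs are invented numbers, and the point is only that a free-memory check at time T guarantees nothing about the interval [T, T+N].

```c
#include <stdbool.h>

/* Toy model of host memory; one counter stands in for the heap. */
static unsigned long free_pages = 1000;

/* All allocation paths funnel through here. */
static bool alloc_pages(unsigned long n)
{
    if (n > free_pages)
        return false;
    free_pages -= n;
    return true;
}

/* xl builds an 800-page domain in 100-page chunks. Between chunks,
 * some other allocator (page-splitting, tmem, PoD backfill) grabs
 * 50 pages -- the interleaving xl cannot predict at time T.
 * Returns how many pages the build actually obtained. */
static unsigned long run_build(void)
{
    unsigned long built = 0;
    for (int step = 0; step < 8; step++) {
        alloc_pages(50);        /* concurrent dynamic allocation */
        if (!alloc_pages(100))
            break;              /* the domain build fails midway */
        built += 100;
    }
    return built;
}
```

Even though the initial check at T sees 1000 free pages, comfortably more than the 800 needed, the build falls short once the concurrent allocations are interleaved.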
And you need to ensure that the lock is honored for all requests to allocate the shared resource, which in this case means it has to be done in the hypervisor.

I think that's what the proposed design does: it provides a mechanism to ask the hypervisor to reserve a fixed amount of memory M, some or all of which will eventually turn into an allocation request; and a mechanism to ask the hypervisor to no longer honor that reservation ("unreserve"), whether or not all of M has been allocated. It essentially locks that amount M of memory between reserve and unreserve, so that other dynamic allocations (page-sharing, tmem, PoD, or another libxl thread trying to create another domain) cannot sneak in and claim memory capacity that has been reserved.

Does that make sense?

Thanks,
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
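The reserve/unreserve accounting can be sketched as follows. Again a hedged illustration, not the actual Xen hypercall interface: `reserve()`, `unreserve()`, `alloc_from_claim()`, and `alloc_dynamic()` are invented names, and a real implementation would need per-domain claim tracking and locking around the counters. The essential invariant is that every allocation path, not just the claimant's, must count outstanding reservations.

```c
#include <stdbool.h>

/* Toy hypervisor-side accounting: capacity, pages actually in use,
 * and pages promised to in-flight domain builds but not yet used. */
static unsigned long total_pages = 1000;
static unsigned long used_pages;
static unsigned long claimed_pages;

/* Reserve m pages for a future build; fail unless the host can
 * guarantee them after counting current use and existing claims. */
static bool reserve(unsigned long m)
{
    if (used_pages + claimed_pages + m > total_pages)
        return false;
    claimed_pages += m;
    return true;
}

/* Drop whatever remains of a claim, used in full or not. */
static void unreserve(unsigned long m)
{
    claimed_pages -= (m < claimed_pages) ? m : claimed_pages;
}

/* The claimant's build path: allocation draws down the claim. */
static bool alloc_from_claim(unsigned long n)
{
    if (n > claimed_pages || used_pages + n > total_pages)
        return false;
    claimed_pages -= n;
    used_pages += n;
    return true;
}

/* Any other dynamic allocator (page-sharing, tmem, PoD) must
 * respect outstanding claims -- this is the "lock" in effect. */
static bool alloc_dynamic(unsigned long n)
{
    if (used_pages + claimed_pages + n > total_pages)
        return false;
    used_pages += n;
    return true;
}
```

Because `alloc_dynamic()` counts `claimed_pages` in its capacity check, a tmem or page-splitting allocation that arrives between reserve and unreserve cannot consume memory already promised to a domain being built.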