Re: [Xen-devel] [PATCH] RFC: initial libxl support for xenpaging
> On Thu, 2012-02-23 at 12:18 +0000, George Dunlap wrote:
>> On Thu, 2012-02-23 at 10:42 +0000, Ian Campbell wrote:
>> > What if you switch back without making sure you are in such a state? I
>> > think switching between the two is where the potential for unexpected
>> > behaviour is most likely.
>>
>> Yeah, correctly predicting what would happen requires understanding what
>> mem-set does under the hood.
>>
>> > I like that you have to explicitly ask for the safety wheels to come off
>> > and explicitly put them back on again. It avoids the corner cases I
>> > alluded to above (at least I hope so).
>>
>> Yes, I think your suggestion sounds more like driving a car with a
>> proper hood, and less like driving a go-kart with the engine
>> exposed. :-)
>
> yeah, I'm way past "live fast die young" ;-)
>
>> > Without wishing to put words in Andres' mouth I expect that he intended
>> > "footprint" to cover other technical means than paging too --
>> > specifically I expect he was thinking of page sharing. (I suppose it
>> > also covers PoD to some extent too, although that is something of a
>> > special case)
>> >
>> > While I don't expect there will be a knob to control the number of
>> > shared pages (either you can share some pages or not, the settings
>> > would be more about how aggressively you search for sharable pages) it
>> > might be useful to consider the interaction between paging and sharing.
>> > I expect that most sharing configurations would want to have paging on
>> > at the same time (for safety). It seems valid to me to want to say
>> > "make the guest use this amount of actual RAM" and to achieve that by
>> > sharing what you can and then paging the rest.
>>
>> Yes, it's worth thinking about; as long as it doesn't stall the paging
>> UI too long. :-)
>
> Right. I think the only issue here is whether we make the control called
> "paging-foo" or "footprint-foo".
>
> I think your point that this control doesn't actually control sharing is
> a good one. In reality it controls paging, and if sharing is enabled
> then it is a best-effort thing which simply alleviates the need to do
> some amount of paging to reach the paging target.
>
>> The thing is, you can't actually control how much sharing happens. That
>> depends largely on whether the guests create and maintain pages which
>> are share-able, and whether the sharing detection algorithm can find
>> such pages. Even if two guests are sharing 95% of their pages, at any
>> point one of the guests may simply go wild and change them all. So it
>> seems to me that shared pages need to be treated like sunny days in the
>> UK: enjoy them while they're there, but don't count on them. :-)
>>
>> Given that, I think that each VM should have a "guaranteed minimum
>> memory footprint", which would be the amount of actual host RAM it will
>> have if suddenly no shared pages are available. After that, there
>> should be a policy for how to use the "windfall" or "bonus" pages
>> generated by sharing.
>>
>> One sensible default policy would be "givers gain": every guest which
>> creates a page which happens to be shared by another VM gets a share of
>> the pages freed up by the sharing. Another policy might be "communism",
>> where the freed-up pages are shared among all VMs, regardless of whose
>> pages made the benefit possible. (In fact, if shared pages come from
>> zero pages, they should probably be given to VMs with no zero pages,
>> regardless of the policy.)
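As a concrete illustration of the two policies just described, a minimal
sketch of dividing up the windfall pages freed by sharing might look like
the following. All names here are invented for illustration; nothing like
this exists in libxl or libxc.

    /* Hypothetical sketch: divide the pages freed by sharing ("windfall")
     * among domains according to one of the two example policies above.
     * None of these names exist in libxl or libxc. */
    #include <stddef.h>
    #include <stdint.h>

    enum windfall_policy { GIVERS_GAIN, COMMUNISM };

    struct dom_share_info {
        uint64_t pages_donated; /* pages this domain contributed to sharing */
        uint64_t bonus;         /* windfall pages granted back (output) */
    };

    static void distribute_windfall(struct dom_share_info *doms, size_t n,
                                    uint64_t freed, enum windfall_policy policy)
    {
        uint64_t total_donated = 0;
        size_t i;

        if (n == 0)
            return;

        for (i = 0; i < n; i++)
            total_donated += doms[i].pages_donated;

        for (i = 0; i < n; i++) {
            if (policy == COMMUNISM || total_donated == 0)
                doms[i].bonus = freed / n;        /* equal split */
            else                                  /* "givers gain": prorate */
                doms[i].bonus = freed * doms[i].pages_donated / total_donated;
        }
    }

The zero-page special case mentioned above (giving pages freed from zero
pages to VMs with no zero pages) would be a separate rule layered on top of
whichever policy is chosen.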
>
> An easy policy to implement initially would be "do nothing and use
> tmem".
>
>> However, I'd say the main public "knobs" should just consist of two
>> things:
>> * xl mem-set memory-target. This is the minimum amount of physical RAM
>>   a guest can get; we make sure that the sum of these for all VMs does
>>   not exceed the host capacity.
>
> Isn't this what we've previously called mem-paging-set? We defined
> mem-set earlier as controlling the amount of RAM the guest _thinks_ it
> has, which is different.
>
>> * xl sharing-policy [policy]. This tells the sharing system how to use
>>   the "windfall" pages gathered from page sharing.
>>
>> Then internally, the sharing system should combine the "minimum
>> footprint" with the number of extra pages and the policy to set the
>> amount of memory actually used (via balloon driver or paging).
>
> This is an argument in favour of mem-footprint-set rather than
> mem-paging-set?
>
> Here is an updated version of my proposed interface which includes
> sharing, I think as you described (modulo the use of mem-paging-set
> where you said mem-set above).
>
> I also included "mem-paging-set manual" as an explicit thing with an
> error on "mem-paging-set N" if you don't switch to manual mode. This
> might be too draconian -- I'm not wedded to it.
>
> maxmem=X                       # maximum RAM the domain can ever see
> memory=M                       # current amount of RAM seen by the
>                                # domain
> paging=[off|on]                # allow the amount of memory a guest
>                                # thinks it has to differ from the
>                                # amount actually available to it (its
>                                # "footprint")
> pagingauto=[off|on] (dflt=on)  # enable automatic enforcement of
>                                # "footprint" for guests which do not
>                                # voluntarily obey changes to memory=M
> pagingdelay=60                 # amount of time to give a guest to
>                                # voluntarily comply before enforcing a
>                                # footprint
> pagesharing=[off|on]           # cause this guest to share pages with
>                                # other similarly enabled guests where
>                                # possible. Requires paging=on.
> pageextrapolicy=...            # controls what happens to extra pages
>                                # gained via sharing (could be combined
>                                # with the pagesharing option:
>                                # [off|policy|...])
>
> Open question -- does pagesharing=on require paging=on? I've
> tried to specify things below such that it does not, but it
> might simplify things to require this.

No, it doesn't, from the point of view of the hypervisor & libxc. It
would be strictly an xl constraint. Note that the pager won't be able to
evict shared pages, so it will choose a non-shared victim.

Word of caution: it is not easy to account for shared pages per domain.
Right now the dominfo struct does not report back the count of "shared"
pages for the domain -- although that's trivially fixable. Even then, the
problem is that a page counting as shared does not strictly mean "gone
from the domain footprint". A shared page with two references (from dom A
and dom B) that cow-breaks due to a write from dom A will still be a
shared page with a single reference from dom B, and count as a shared
page for dom B. Also, all shared pages, whether they are referenced by
one, two, or more domains, belong to dom_cow.

What you could do is call xc_sharing_freed_pages() and split the result
across all domains, for a rough estimate of how much each domain is
saving in terms of sharing (divide it in equal parts, or prorate it by
the per-domain shared count). Another tool is xc_memshr_debug_gfn, which
will tell you how many refs there are to the shared page backing a gfn
(if the gfn is indeed shared).

Or, we could also have a "sharing_saved" count per domain to mean exactly
what you're looking for here.
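A rough sketch of the prorated estimate described above follows. It
assumes a per-domain shared-page count can be obtained somehow; as noted,
dominfo does not expose one yet, so get_domain_shared_pages() below is a
hypothetical placeholder. Only xc_sharing_freed_pages() is a real libxc
call.

    /* Sketch only: prorate the host-wide sharing savings across domains.
     * get_domain_shared_pages() is hypothetical -- dominfo does not yet
     * report a per-domain shared count. */
    #include <stddef.h>
    #include <stdint.h>
    #include <xenctrl.h>

    extern uint64_t get_domain_shared_pages(xc_interface *xch, uint32_t domid);

    static uint64_t estimate_sharing_savings(xc_interface *xch, uint32_t domid,
                                             const uint32_t *doms, size_t ndoms)
    {
        long freed = xc_sharing_freed_pages(xch);  /* host-wide pages saved */
        uint64_t total_shared = 0, mine = 0;
        size_t i;

        if (freed <= 0 || ndoms == 0)
            return 0;

        for (i = 0; i < ndoms; i++) {
            uint64_t s = get_domain_shared_pages(xch, doms[i]);
            total_shared += s;
            if (doms[i] == domid)
                mine = s;
        }

        if (total_shared == 0)
            return (uint64_t)freed / ndoms;            /* equal split */
        return (uint64_t)freed * mine / total_shared;  /* prorated share */
    }

As the caveats above make clear, this is only an approximation: a page
counted as shared for a domain is not necessarily absent from that
domain's footprint.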
>
> xl mem-set domain M
>       Sets the amount of RAM which the guest believes it has available
>       to M. The guest should arrange to use only that much RAM and
>       return the rest to the hypervisor (e.g. by using a balloon
>       driver). If the guest does not do so then the host may use
>       technical means to enforce the guest's footprint of M. The guest
>       may suffer a performance penalty for this enforcement.
>
>       paging off: set balloon target to M.
>       paging on:  set balloon target to M.
>                   if pagingauto:
>                       wait delay IFF new target < old
>                       set paging target to M
>                   support -t <delay> to override default?
>
> Open question -- if a domain balloons to M as requested, should
> it still be subject to sharing? There is a performance hit
> associated with sharing (far less than paging though?) but
> presumably the admin would not have enabled sharing if they
> didn't want this, therefore I think it is right for sharing on
> to allow the guest to actually have <M assigned to it. Might be
> a function of the individual sharing policy?

Sharing + balloon is mostly a bad idea. It's not forbidden or broken from
the hypervisor angle, but it's a lose-lose game. Ballooning a shared page
gains you nothing. The ballooner can't know what is shared and what isn't
in order to make an educated decision, and the sharer can't foresee what
the balloon will victimize. I would make those two mutually exclusive in
xl.

>
> xl mem-paging-set domain manual
>       Enables manual control of the paging target.
>
>       paging off:  error
>       paging on:   set pagingauto=off
>       sharing on:  same as paging on.
>
> xl mem-paging-set domain N
>       Overrides the amount of RAM which the guest actually has
>       available (its "footprint") to N. The host will use technical
>       means to continue to provide the illusion to the guest that it
>       has memory=M (as adjusted by mem-set). There may be a
>       performance penalty for this.
>
>       paging off: error
>       paging on:  if pagingauto=on:
>                       error
>                   set paging target
>                   set pagingauto=off
>
> xl mem-paging-set domain auto
>       Automatically manage paging. Request that the guest uses
>       memory=M (the current value of memory, as adjusted by mem-set),
>       enforced when the guest is uncooperative (as described in
>       "mem-set").
>
>       paging off: error
>       paging on:  set paging target to M
>                   set pagingauto=on
>
> xl mem-sharing-policy-set domain [policy]
>       Configures the policy for use of extra pages.
>
>       if !paging || pagingauto:
>           If the guest's actual usage drops below M due to sharing
>           then extra pages are distributed per the sharing policy.
>       else:
>           If the guest's actual usage drops below N due to sharing
>           then extra pages are distributed per the sharing policy.

See the above note on determining shared pages per domain.

Andres

>
> TBD potential policies.
>
> NB: shared pages reduce a domain's actual usage. Therefore it is
> possible that sharing reduces the usage to less than the paging
> target. In this case no pages will be paged out.
>
> We should ensure that the sum over all domains of:
>       pagingauto(D) ? M : N
> does not exceed the amount of host memory.
>
> Ian.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
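A minimal sketch of the final admission check proposed above (the sum over
all domains of "pagingauto ? M : N" must not exceed host memory). The
struct and field names are invented for illustration only.

    /* Sketch of the proposed invariant: sum(pagingauto ? M : N) over all
     * domains must fit in host memory. Names are illustrative only. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct dom_mem_cfg {
        uint64_t memory_m;        /* M: RAM the guest thinks it has (pages) */
        uint64_t paging_target_n; /* N: enforced footprint in manual mode */
        bool     pagingauto;      /* auto mode tracks M; manual mode uses N */
    };

    static bool host_memory_fits(const struct dom_mem_cfg *doms, size_t ndoms,
                                 uint64_t host_pages)
    {
        uint64_t sum = 0;
        size_t i;

        for (i = 0; i < ndoms; i++)
            sum += doms[i].pagingauto ? doms[i].memory_m
                                      : doms[i].paging_target_n;

        return sum <= host_pages;
    }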