
Re: [Xen-devel] Pointed questions re Xen memory overcommit

> > As a result, VMware customers outside of some very specific domains
> > (domains possibly overlapping with Snowflock?) will tell you
> > that "memory overcommit sucks and so we turn it off".
> >
> > Which is why I raised the question "why are we doing this?"...
> > If the answer is "Snowflock customers will benefit from it",
> > that's fine.  If the answer is "to handle disastrous failover
> > situations", that's fine too.  And if the answer is
> > "so that VMware can't claim to have a feature that Xen doesn't
> > have (even if almost nobody uses it)", I suppose that's fine too.
> >
> > I'm mostly curious because I've spent the last four years
> > trying to solve this problem in a more intelligent way
> > and am wondering if the "old way" has improved, or is
> > still just the old way but mostly warmed over for Xen.
> > And, admittedly, a bit jealous because there's apparently
> > so much effort going into the "old way" and not toward
> > "a better way".
> Is it possible to use the "better way" in Windows?

Ian Pratt at one point suggested it might be possible to use some
binary modification techniques to put the tmem hooks into Windows,
though I am dubious.  Lacking that, it would require source
changes done at MS.

> If not, then those
> who want to support Windows guests are stuck with the old way, until
> MS discovers / "re-invents" tmem. :-)  (Maybe you should give your
> tmem talk at MS research?  Or try to give it again if you already
> have?)

I have an invite (from KY) to visit MS but I think tmem
will require broader support (community and corporate) before
MS would seriously consider it.  Oracle is not exactly MS's
best customer. :-}

> Even then there'd still be a market for supporting all those
> pre-tmem OS's for quite a while.

Yes, very true.

> Apart from that, here's my perspective:
> * Having page sharing is good, because of potential memory savings in
> VDI deployments, but also because of potential for fun things like VM
> fork, &c
> * Having page sharing requires you to have an emergency plan if
> suddenly all the VMs write to their pages and un-share them.
> Hypervisor paging is the only reasonable option here.

Yes, agreed.
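The dynamic in the quoted points can be sketched as a toy model: sharing saves frames only until guests write to their pages, and each write breaks the share (copy-on-write) and demands a fresh frame, so the host needs hypervisor paging as the fallback when free frames run out. The class, names, and numbers below are purely illustrative, not Xen's actual sharing or paging code:

```python
# Toy model: shared pages save memory until writes un-share them; when
# free frames are exhausted, the "emergency plan" is to page memory out.

class Host:
    def __init__(self, total_frames):
        self.free = total_frames   # free machine frames
        self.paged_out = 0         # frames reclaimed via hypervisor paging

    def alloc_frame(self):
        if self.free == 0:
            # Emergency plan: page some guest memory out to disk to
            # satisfy the allocation, rather than failing the unshare.
            self.paged_out += 1
        else:
            self.free -= 1

host = Host(total_frames=2)

# Three VMs share one identical page: costs 1 frame instead of 3.
host.alloc_frame()              # the single shared frame
assert host.free == 1

# Worst case: every VM writes to its copy, un-sharing all three pages.
for vm in range(3):
    host.alloc_frame()          # each CoW break needs a private frame

# Only 1 free frame remained, so 2 unshares had to be met by paging.
assert host.free == 0 and host.paged_out == 2
```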

> If it makes you feel any better, one of the suggested default policies
> for "what to do with all the memory that sharing generates" was "put it
> into a tmem pool". :-)  That, and as we try to hammer out what the
> implications of default policies are, the semantic gap between balloon
> drivers, paging, and sharing, is rearing a very ugly head -- I would
> much rather just hand the paging/ballooning stuff over to tmem.

/me smiles.  Yes, thanks, it does make me feel better :-)

I'm not sure putting raw memory into a tmem pool would do
more good than just freeing it.  Putting *data* into tmem
is what makes it valuable, and I think it takes a guest OS
to know how and when to put and get the data and (perhaps
most importantly) when to flush it to ensure coherency.
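For what it's worth, the put/get/flush protocol described above can be sketched as a toy model. The operation names mirror the tmem interface, but the in-memory store and eviction behavior here are illustrative assumptions, not the Xen implementation:

```python
# Toy model of tmem's ephemeral-pool semantics: the guest "puts" clean
# page data, may "get" it back (a get can legitimately miss, since the
# hypervisor may evict ephemeral pages at any time), and must "flush"
# an index before the underlying page changes to preserve coherency.

class EphemeralTmemPool:
    def __init__(self):
        self._store = {}  # (object id, page index) -> bytes

    def put(self, obj_id, index, data):
        # Ephemeral put: the hypervisor may accept or drop the page;
        # this toy store always accepts.
        self._store[(obj_id, index)] = bytes(data)

    def get(self, obj_id, index):
        # Returns None on a miss (e.g. the page was evicted).
        return self._store.get((obj_id, index))

    def flush(self, obj_id, index):
        # The guest flushes before modifying the page in its own RAM;
        # otherwise a later get could return stale data.
        self._store.pop((obj_id, index), None)

pool = EphemeralTmemPool()
pool.put(obj_id=1, index=0, data=b"clean page contents")
assert pool.get(1, 0) == b"clean page contents"

# Coherency rule: flush before the guest rewrites the page, so a
# subsequent get misses instead of returning stale contents.
pool.flush(1, 0)
assert pool.get(1, 0) is None
```

The point the model makes is the one above: only the guest OS knows which data is a clean, reconstructable copy (safe to put ephemerally) and when its own copy changes (forcing a flush), which is why raw hypervisor-side memory can't simply be "put into tmem" with the same benefit.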

BUT, maybe this IS a possible new use of tmem that I just
haven't really considered and can't see my way through because
of focusing on the other uses for too long.  So if I can
help with talking this through, let me know.
