
Re: [Xen-devel] Pointed questions re Xen memory overcommit



> From: Tim Deegan [mailto:tim@xxxxxxx]
> Subject: Re: Pointed questions re Xen memory overcommit

Thanks much for the reply Tim!

> At 12:53 -0800 on 24 Feb (1330088020), Dan Magenheimer wrote:
> > 1) How is the capability and implementation similar or different
> > from VMware's?  And specifically I'm asking for hard information
> > relating to:
> >
> > http://lwn.net/Articles/309155/
> > http://lwn.net/Articles/330589/
> >
> > I am not a lawyer and my employer forbids me from reading the
> > related patent claims or speculating on any related issues, but
> > I will be strongly recommending a thorough legal review before
> > Oracle ships this code in any form that customers can enable.
> > (I'm hoping for an answer that would render a review moot.)
> 
> I am not a lawyer and my employer forbids me from reading the
> related patent claims or speculating on any related issues. :P

Heh.  If there is a smiley-face that means "we both roll
our eyes at the insanity of lawyers", put it here.
 
> > 2) Assuming no legal issues, how is Xen memory overcommit different
> > or better than VMware's, which is known to have lots of issues
> > in the real world, such that few customers (outside of a handful
> > of domains such as VDI) enable it?  Or is this effort largely to
> > remove an item from the VMware sales team's differentiation list?
> > And a comparison vs Hyper-V and KVM would be interesting also.
> 
> The blktap-based page-sharing tool doesn't use content hashes to find
> pages to share; it relies on storage-layer knowledge to detect disk
> reads that will have identical results.  Grzegorz's PhD dissertation and
> the paper on Satori discuss why that's a better idea than trying to find
> shareable pages by scanning.

Excellent.
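[For readers unfamiliar with the scanning approach being contrasted here, a minimal sketch of content-hash page scanning, the technique Satori argues against. This is a hypothetical illustration, not Xen, Satori, or VMware code: pages with equal hashes are candidates for a copy-on-write merge, but finding them requires periodically hashing all of guest memory.]

```python
import hashlib
from collections import defaultdict

def find_shareable_pages(pages):
    """Group page indices by content hash; any group with more than
    one member is a candidate set for sharing (after a full byte
    compare to rule out hash collisions, omitted here)."""
    buckets = defaultdict(list)
    for idx, page in enumerate(pages):
        digest = hashlib.sha256(page).hexdigest()
        buckets[digest].append(idx)
    return [idxs for idxs in buckets.values() if len(idxs) > 1]

# Three toy 4 KiB "pages"; pages 0 and 2 have identical contents.
pages = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
print(find_shareable_pages(pages))  # [[0, 2]]
```

[The storage-layer approach avoids this scan entirely: if two VMs read the same disk block, the resulting pages are known-identical at read time, with no hashing pass needed.]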

> I agree that using page sharing to try to recover memory for higher VM
> density is, let's say, challenging.  But in certain specific workloads
> (e.g. snowflock &c), or if you're doing something else with the
> recovered memory (e.g. tmem?) then it makes more sense.
> 
> I have no direct experience of real-world deployments.

Me neither, just some comments from a few VMware users.  I agree
Satori and/or "the other page sharing" may make good sense in certain
heavily redundant workloads.

> > 3) Is there new evidence that a host-based-policy-driven memory
> > balancer works sufficiently well on one system, or for
> > multiple hosts, or for a data center?
> 
> That, I think, is an open research question.

OK, that's what I thought.

> > It would be nice for
> > all Xen developers/vendors to understand the intended customer
> > (e.g. is it the desktop user running a handful of VMs running
> > known workloads?)
> 
> With my hypervisor hat on, we've tried to make a sensible interface
> where all the policy-related decisions that this question would apply to
> can be made in the tools.  (I realise that I'm totally punting on the
> question).

That's OK... though "sensible" is difficult to measure without
a broader context.  IMHO (and I know I'm in a minority), punting
to the tools only increases the main problem (semantic gap).

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
