
Re: [Xen-devel] Proposed new "memory capacity claim" hypercall/feature



> From: Ian Jackson [mailto:Ian.Jackson@xxxxxxxxxxxxx]
> Subject: Re: Proposed new "memory capacity claim" hypercall/feature
> 
> Keir Fraser writes ("Re: Proposed new "memory capacity claim" 
> hypercall/feature"):
> > On 07/11/2012 22:17, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:
> > > I think this brings us back to the proposed "claim" hypercall/subop.
> > > Unless there are further objections or suggestions for different
> > > approaches, I'll commence prototyping it, OK?
> >
> > Yes, in fact I thought you'd started already!
> 
> Sorry to play bad cop here but I am still far from convinced that a
> new hypercall is necessary or desirable.
> 
> A lot of words have been written but the concrete, detailed, technical
> argument remains to be made IMO.

Hi Ian --

I agree, a _lot_ of words have been written, and this discussion
has spawned so many side conversations that it has wandered off
into the weeds more than once.

I agree it would be worthwhile to restate the problem clearly,
along with the proposed solutions and their pros and cons.  When I
have a chance I will do that, but prototyping may either clarify
some things or surface unforeseen issues, so I think
I will do some more coding first (and this may take a week or two
due to some other constraints).

But to ensure that any summary/restatement touches on your
concerns, could you be more specific about what you remain
unconvinced of?

E.g., is it: "I still think the toolstack can manage all memory
allocation"; or "holding the heap_lock for a longer period
should solve the problem"; or "I don't understand what the
original problem is that you are trying to solve"; etc.?
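
For concreteness, below is a rough standalone sketch of the sort of
accounting I have in mind for the claim subop: a short, lock-protected
capacity check that records an outstanding claim rather than allocating
anything, in contrast to holding the heap_lock across an entire multi-GB
allocation.  It is plain C with a pthread mutex standing in for the
heap_lock; the names (claim_pages, outstanding_claims, free_pages) and
the numbers are placeholders for illustration, not the proposed interface:

    /* Standalone illustration only: a pthread mutex stands in for the
     * hypervisor's heap_lock, and all names/numbers are made up. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long free_pages = 1UL << 20;  /* pretend 4GB of 4k pages free */
    static unsigned long outstanding_claims;      /* pages promised, not yet allocated */

    /* Record a claim if capacity exists; the lock is held only for this
     * brief check, not for the whole (possibly very slow) allocation. */
    static int claim_pages(unsigned long nr_pages)
    {
        int rc = -1;

        pthread_mutex_lock(&heap_lock);
        if (free_pages >= outstanding_claims + nr_pages) {
            outstanding_claims += nr_pages;
            rc = 0;
        }
        pthread_mutex_unlock(&heap_lock);
        return rc;
    }

    int main(void)
    {
        /* Two 3GB guests cannot both be promised out of 4GB:
         * the first claim succeeds (0), the second fails (-1). */
        printf("first claim:  %d\n", claim_pages(3UL << 18));
        printf("second claim: %d\n", claim_pages(3UL << 18));
        return 0;
    }

The point is simply that the check is O(1) and the lock hold time is
tiny no matter how large the guest is, which is where I think this
differs from the "hold the heap_lock longer" alternative.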

Thanks,
Dan


 

