
Re: [Xen-devel] initial ballooning amount on HVM+PoD



On Mon, 2014-01-20 at 07:19 -0800, Boris Ostrovsky wrote:
> ----- Ian.Campbell@xxxxxxxxxx wrote:
> 
> > > > 
> > > > I previously had a patch to use memblock_phys_mem_size(), but
> > > > when I saw Boris switch to get_num_physpages() I thought that
> > > > would be OK, but I didn't look into it very hard. Without
> > > > checking, I suspect they return pretty much the same thing, and
> > > > so memblock_phys_mem_size() will have the same issue you observed
> > > > (which I confess I haven't yet gone back and understood).
> > > 
> > > If there's no reserved memory in that range, I guess ARM might
> > > be fine as is.
> > 
> > Hrm, I'm not sure what a DTB /memreserve/ statement turns into wrt
> > the memblock stuff and whether it is included in phys_mem_size -- I
> > think that stuff is still WIP upstream (i.e. the kernel currently
> > ignores such things; IIRC there was a fix, but it was broken and
> > reverted to be tried again, and then I've lost track).
> 
> 
> FWIW, when I was looking at this code I also first thought of using
> memblock, but IIRC I wasn't convinced that we always have
> CONFIG_HAVE_MEMBLOCK set, so I ended up with get_num_physpages().

FWIW it's select'd by CONFIG_X86 and CONFIG_ARM and overall by about
half of all architectures. So I suppose it depends where your threshold
for caring about future architecture ports lies...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

