Re: [Xen-devel] [PATCH 05 of 23] libxl: drop 8M slack for PV guests
On 21/02/12 10:10, Ian Campbell wrote:
> On Tue, 2012-02-21 at 10:10 +0000, Stefano Stabellini wrote:
>> On Mon, 20 Feb 2012, Ian Jackson wrote:
>>> Ian Campbell writes ("[Xen-devel] [PATCH 05 of 23] libxl: drop 8M slack for PV guests"):
>>>> libxl: drop 8M slack for PV guests.
>>>>
>>>> As far as I can tell this serves no purpose. I think it relates to the old
>>>> 8M to "account for backend allocations" which we used to add. This leaves
>>>> a bit of unpopulated space in the Pseudo-physical address space which can
>>>> be used by backends when mapping foreign memory. However 8M is not
>>>> representative of that any more and modern kernels do not operate in this
>>>> way anyway.
>>>>
>>>> I suspect an argument could be made for removing this from the libxl API
>>>> altogether but instead lets just set the overhead to 0.
>>>
>>> I think this is plausible but I'd like to hear from Stefano, who iirc
>>> may have some knowledge about the reason for this 8Mb.
>>
>> It comes from XenD (and XAPI too, if I recall correctly), see this
>> comment in tools/python/xen/xend/image.py:
>
> That doesn't answer the "why" though.
>
> I think I should just be more assertive since I believe I do actually
> know why the 8 is there (or certainly nobody appears to know any
> better). I'll update the changelog to read:
>
>     This serves no purpose. It relates to the old 8M to "account for
>     backend allocations" which we used to add. This leaves a bit of
>     unpopulated space in the Pseudo-physical address space which can
>     be used by backends when mapping foreign memory. However 8M is
>     not representative of that any more and modern kernels do not
>     operate in this way anyway.

Should a similar fix be applied to Xen as well for dom0?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
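The effect of the patch under discussion can be sketched as follows: the toolstack's maximum reservation for a PV guest is its memory target plus a fixed slack, and the change reduces that slack from 8 MiB to 0. This is an illustrative sketch only; the constant and function names below are hypothetical and are not the actual libxl identifiers.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical constants for illustration -- not the real libxl
 * macro names.  The patch effectively changes the PV slack from
 * 8 MiB (left unpopulated in the pseudo-physical address space for
 * backend foreign mappings) to 0. */
#define PV_SLACK_KB_OLD (8 * 1024)  /* old behaviour: 8 MiB of slack */
#define PV_SLACK_KB_NEW 0           /* new behaviour: no extra slack */

/* Maximum reservation for a PV guest, in KiB: the guest's memory
 * target plus whatever slack the toolstack adds on top. */
static uint64_t pv_maxmem_kb(uint64_t target_kb, uint64_t slack_kb)
{
    return target_kb + slack_kb;
}
```

For a 512 MiB guest, the old calculation yields 532480 KiB of maximum reservation while the new one yields exactly the 524288 KiB target, so no pseudo-physical space beyond the guest's own memory is reserved.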