Re: [Xen-devel] [PATCH V3 1/1] libxl: set stub domain size based on VRAM size
On 07/14/2015 11:02 AM, Eric Shelton wrote:
> On Jul 14, 2015 4:51 AM, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
> wrote:
>>
>> On Sat, Jul 11, 2015 at 10:30 PM, Eric Shelton <eshelton@xxxxxxxxx> wrote:
>>> Allocate additional memory to the stub domain for qemu-traditional if
>>> more than 4 MB is assigned to the video adapter to avoid out of memory
>>> condition for QEMU.
>>>
>>> Signed-off-by: Eric Shelton <eshelton@xxxxxxxxx>
>>
>> This seems like a good fix for now, thanks.
>>
>> But (just speaking to people in general) it does make it even harder
>> to predict exactly how much host memory a VM is going to end up
>> needing to start, which is something a lot of people complain about.
>>
>> I'm beginning to think that one feature we should try to look at is a
>> way for the admin to specify, "Max host memory used", from which
>> shadow/p2m/device memory/stubdomain can all be "allocated".
>
> Just curious: what happens when all of that memory has been "allocated,"
> yet there is still plenty of memory overall? Would you not be able to spin
> up a domain despite there actually being plenty of memory to do so?

The idea would be that you would set this value to, say, "2048MiB", and
then all the little bits and bobs would come out of that 2048MiB. So if
you have a 48MiB stubdom, 4MiB of device ROMs, 4MiB of p2ms (including
alternate p2ms), and 1MiB of other miscellaneous stuff, then the guest
would only see 1991MiB of virtual RAM. (If I've done the math
correctly.)

Which does mean that if you set this value to 50MiB, but run it with a
stubdom and 16MiB of video RAM and altp2ms and everything, you're going
to have problems. :-)  But in the normal use case, it should never cause
a guest not to be created.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
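[Editorial note: the budgeting George sketches above could look something like the following. This is a hypothetical illustration, not libxl code; the function name, parameters, and the particular overhead categories are assumptions drawn from the numbers in the email.]

```c
#include <stdint.h>

/* Hypothetical sketch: given an admin-specified "max host memory"
 * budget, subtract the per-domain overheads (stubdom, device ROMs,
 * p2m/altp2m tables, miscellaneous) and return the virtual RAM the
 * guest would actually see, or -1 if the budget is too small for the
 * guest to be created at all. */
static int64_t guest_visible_ram_mib(int64_t max_host_mib,
                                     int64_t stubdom_mib,
                                     int64_t device_rom_mib,
                                     int64_t p2m_mib,
                                     int64_t misc_mib)
{
    int64_t overhead = stubdom_mib + device_rom_mib + p2m_mib + misc_mib;

    /* If the overheads alone consume the whole budget, there is
     * nothing left for guest RAM: domain creation would fail. */
    if (overhead >= max_host_mib)
        return -1;

    return max_host_mib - overhead;
}
```

With the figures from the email, a 2048MiB budget less a 48MiB stubdom, 4MiB of device ROMs, 4MiB of p2ms, and 1MiB of miscellaneous overhead leaves 1991MiB of guest-visible RAM, matching George's arithmetic; a 50MiB budget with the same stubdom would fail outright.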