
Re: [Xen-devel] xl: insufficient ballooning when starting certain guests



>>> On 29.10.15 at 16:36, <wei.liu2@xxxxxxxxxx> wrote:
> On Tue, Sep 22, 2015 at 03:41:47AM -0600, Jan Beulich wrote:
>> it looks as if changes in the hypervisor's memory requirements have
>> pushed things over the edge: guest creation no longer works when
>> passing through a device without sharing page tables (between VT-d
>> and EPT; sharing is never possible on AMD). Since this has always been a
> 
> I think this is referring to a system-wide option, which presumably can
> be tuned via the "sharept" command line option?

Yes. Hence the consideration to pass that information back via sysctl.
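
Illustrative sketch only - neither the physinfo field nor its name exist
today; this is just to show how little the tool stack side would need once
the hypervisor exposes the bit (the "sharept" setting itself being a
sub-option of the iommu= boot parameter):

#include <stdbool.h>
#include <libxl.h>

/* Is IOMMU/HAP page table sharing in effect on this host? */
static bool pt_share_in_use(libxl_ctx *ctx)
{
    libxl_physinfo info;
    bool shared = false;

    libxl_physinfo_init(&info);
    if (!libxl_get_physinfo(ctx, &info))
        shared = info.cap_iommu_hap_pt_share; /* hypothetical capability bit */
    libxl_physinfo_dispose(&info);

    return shared;
}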

>> latent issue (as it's quite obvious that two sets of page tables require
>> more memory than a single set) it should imo be fixed. The main issue
>> (afaics) here is that the information about whether sharing is in use
>> is not currently available to the tools (ignoring the awkward option of
>> parsing the hypervisor log). Therefore the question is - should we
>> extend XEN_SYSCTL_physinfo accordingly, or are there other
>> suitable means to communicate such information which I'm not
>> (immediately) aware of?
> 
> I'm not aware of another channel at the moment, but I don't think
> physinfo is the right place.
> 
> Furthermore, given that you expect the toolstack to set the limit
> properly (via SHADOW_OP_SET_ALLOCATION), passing back one bit of
> information doesn't help the toolstack determine how much extra memory
> it needs. We might as well blindly increase the slack value in libxl
> (which is a bad idea IMHO).
> 
> Is there a way to pre-determine how much extra memory is needed for the
> non-shared case? If so, can it be done in the toolstack alone?

Doesn't the tool stack already estimate the amount of memory
needed for shadow/hap page tables? In the non-shared case, and
with a device assigned, it would just need to double the value.
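
Something along these lines (the helper and the constants are made up
purely for illustration - libxl's actual heuristic differs; the point is
only that the non-shared, device-assigned case needs roughly twice the
shared-case estimate, which would then be handed to
SHADOW_OP_SET_ALLOCATION just as today):

#include <stdbool.h>

static unsigned long estimate_pt_pool_kb(unsigned long maxmem_kb,
                                          unsigned int vcpus,
                                          bool device_assigned,
                                          bool pt_share)
{
    /* Base shadow/HAP estimate: ~1MB per vCPU plus 32kB per MB of RAM. */
    unsigned long kb = 1024UL * vcpus + (maxmem_kb / 1024) * 32;

    /* A separate set of IOMMU page tables roughly doubles the need. */
    if (device_assigned && !pt_share)
        kb *= 2;

    return kb;
}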

> What does the hypervisor do when PTs are shared but it still hits the
> boundary? I presume the current default value never triggers such a
> situation?

Not sure what you mean here - we're talking about creating the guest,
and that failing because not enough memory gets made available. If the
tool stack's estimate is appropriate, we shouldn't run into an OOM
situation in the shared case (which is identical to the no-device-assigned
one, so if the estimate were wrong there, that would be noticed reasonably
quickly).
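
(For completeness, the configuration in question is simply a guest with a
device assigned on a host where page tables aren't shared - e.g. an xl
config fragment like the below; the BDF and sizes are just examples.)

# illustrative xl guest config fragment
builder = "hvm"
memory  = 4096
vcpus   = 4
pci     = [ '0000:01:00.0' ]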

Jan




 

