
Re: [Xen-devel] [PATCH] x86: set dom0's default maximum reservation to the initial number of pages

>>> On 21.03.12 at 10:58, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
> On 21/03/12 08:36, Jan Beulich wrote:
>>>>> On 20.03.12 at 19:21, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>>> From: David Vrabel <david.vrabel@xxxxxxxxxx>
>>> If a maximum reservation for dom0 is not explicitly given (i.e., no
>>> dom0_mem=max:MMM command line option), then set the maximum
>>> reservation to the initial number of pages.  This is what most people
>>> seem to expect when they specify dom0_mem=512M (i.e., exactly 512 MB
>>> and no more).
>> I think I had seen this before (not sure whether it came from you),
>> and had already NAK-ed the idea back then. Please let's not play
>> with the meaning of existing hypervisor options, unless the "most"
>> in your description can validly be replaced by "all". In this case, _I_
>> am using the option in its current sense almost everywhere
>> (expecting to be able to balloon up beyond the initial allocation),
>> and it can also be useful in hotplug capable machines. If someone
>> wants a particular maximum, (s)he should add the respective
>> command line option.
> XenServer uses dom0_mem=752M and expects it to set the maximum
> reservation to 752M (if it doesn't it grinds to a halt rather
> spectacularly as it runs out of memory due to the extra memory used for
> page tables etc.).  So whilst I appreciate your desire to not have to
> change any command lines, I don't want to either.
> The dom0_mem option has been historically confusing.  The min: and max:
> options don't affect the initial number of pages in a useful way
> (min:MMM and max:MMM are no different to simply specifying MMM).
> So I think we need to decide what behaviour is the best in the long term.
> The two options are:
> The current behaviour:
>                       | Initial | Max       |
> dom0_mem=III          | III     | Unlimited |
> dom0_mem=III,max:MMM  | III     | MMM       |
> dom0_mem=max:MMM      | MMM     | MMM       |
> or this proposal:
>                       | Initial | Max       |
> dom0_mem=III          | III     | III       |
> dom0_mem=III,max:MMM  | III     | MMM       |
> dom0_mem=max:MMM      | MMM     | MMM       |
> Hmm. Having enumerated the two options, the current behaviour looks more
> sensible to me now...


> The docs still need updating and I think the min:MMM option should be
> deprecated.

Why should we deprecate min:? It can be useful when combined with
other sub-options, can't it?

>>> This change means that with Linux 3.0.5 and later kernels,
>>> dom0_mem=512M has the same result as older, 'classic Xen' kernels. The
>>> older kernels used the initial number of pages to set the maximum
>>> number of pages and did not query the hypervisor for the maximum
>>> reservation.
>> If the pv-ops kernels behave differently in this aspect than the forward
>> ported ones, then it should really be the newer one to get adjusted to
>> the original behavior (if so intended), unless the old behavior was
>> completely insane. Adjusting the hypervisor for a shortcoming(?) in a
>> particular implementation of a Dom0 kernel doesn't seem to be the right
>> route - please also consider non-Linux Dom0 kernels that might expect
>> the current behavior.
> The old behaviour did not allow dom0 to balloon up beyond the initial
> allocation so we don't want to go back to that.

In pv-ops Linux Dom0 you mean? While I can't speak for non-Linux,
ballooning up beyond the initial allocation works fine in the legacy
and forward ported kernels.

>>> It is still possible to have a larger reservation by explicitly
>>> specifying dom0_mem=max:MMM.
>> I am not intending to update dozens of command lines.
> Looks like I will be though...

Sorry, but yes, you're going to have to use the originally intended
option if XenServer's expectation indeed is an enforced maximum.


Xen-devel mailing list