
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



>>> On 04.02.16 at 14:33, <Ian.Jackson@xxxxxxxxxxxxx> wrote:
> Jan Beulich writes ("Re: [Xen-devel] [PATCH v3 3/3] tools: introduce
> parameter max_wp_ram_ranges."):
>> On 04.02.16 at 10:38, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>> > So another question is, if the value of this limit really matters, will
>> > a lower one be more acceptable (the current 256 being not enough)?
>> 
>> If you've carefully read George's replies, [...]
> 
> Thanks to George for the very clear explanation, and also to him for
> an illuminating in-person discussion.
> 
> It is disturbing that as a result of me as a tools maintainer asking
> questions about what seems to me to be a troublesome user-visible
> control setting in libxl, we are now apparently revisiting lower
> layers of the hypervisor design, which have already been committed.
> 
> While I find George's line of argument convincing, neither I nor
> George are maintainers of the relevant hypervisor code.  I am not
> going to insist that anything in the hypervisor is done differently and
> am not trying to use my tools maintainer position to that end.
> 
> Clearly there has been a failure of our workflow to consider and
> review everything properly together.  But given where we are now, I
> think that this discussion about hypervisor internals is probably a
> distraction.

While I recall George having made that alternative suggestion, the
reservations both Yu and Paul had against it made me not insist on
it. Instead I've been trying to limit some of the bad effects that
the originally proposed variant brought with it. Clearly, with the
more detailed reply George has now given (covering areas for which
he is the maintainer), I should have been more demanding about
exploring that alternative. That's unfortunate, and I apologize for
it, but such things happen.

As to one of the patches already having been committed - I'm not
worried about that at all. We can always revert; that's why the
thing is called "unstable".

Everything below here I think was meant to be mainly directed
at Yu instead of me.

> Let me pose again some questions that I still don't have clear answers
> to:
> 
>  * Is it possible for libxl to somehow tell from the rest of the
>    configuration that this larger limit should be applied ?
> 
>    AFAICT there is nothing in libxl directly involving vgpu.  How can
>    libxl be used to create a guest with vgpu enabled ?  I had thought
>    that this was done merely with the existing PCI passthrough
>    configuration, but it now seems that somehow a second device model
>    would have to be started.  libxl doesn't have code to do that.
> 
>  * In the configurations where a larger number is needed, what larger
>    limit is appropriate ?  How should it be calculated ?
> 
>    AFAICT from the discussion, 8192 is a reasonable bet.  Is everyone
>    happy with it?
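
Just to illustrate the kind of policy I'd find preferable (without
wanting to pre-empt Yu's answer to both questions above): if libxl
could tell from the rest of the configuration that a vGPU-style
setup is being created, it could pick the limit itself rather than
expose a raw knob to the user. The sketch below is purely
illustrative C, not the proposed libxl code; every field and
constant name in it is made up for the example, and 8192 is simply
the figure discussed above.

/* Hypothetical helper: choose the write-protected-RAM range limit
 * from the guest configuration instead of requiring the user to
 * guess a number.  None of these identifiers exist in libxl. */
#include <stdbool.h>
#include <stdint.h>

#define DEFAULT_WP_RAM_RANGES  256U    /* today's hypervisor default */
#define VGPU_WP_RAM_RANGES     8192U   /* the figure discussed above */

struct guest_cfg {
    bool vgpu_enabled;            /* assumed flag: a vGPU device model
                                     (second emulator) is configured */
    uint32_t max_wp_ram_ranges;   /* 0 means "not set by the user" */
};

static uint32_t pick_wp_ram_limit(const struct guest_cfg *cfg)
{
    if (cfg->max_wp_ram_ranges)   /* an explicit override still wins */
        return cfg->max_wp_ram_ranges;

    return cfg->vgpu_enabled ? VGPU_WP_RAM_RANGES
                             : DEFAULT_WP_RAM_RANGES;
}

Whether libxl can actually detect such a setup is of course exactly
the first question above, so take the vgpu_enabled flag here as an
assumption, not a claim about the current code.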
> 
> Ian.
> 
> PS: Earlier I asked:
> 
>  * How do we know that this does not itself give an opportunity for
>    hypervisor resource exhaustion attacks by guests ?  (Note: if it
>    _does_ give such an opportunity, this should be mentioned more
>    clearly in the documentation.)
> 
>  * If we are talking about mmio ranges for ioreq servers, why do
>    guests which do not use this feature have the ability to create
>    them at all ?
> 
> I now understand that these mmio ranges are created by the device
> model.  Of course the device model needs to be able to create mmio
> ranges for the guest.  And since they consume hypervisor resources,
> the number of these must be limited (device models not necessarily
> being trusted).

Which is why I have been so hesitant to accept these patches.
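
To spell the resource concern out with a deliberately simplified
sketch (this is not the hypervisor's actual rangeset code, and every
identifier in it is invented for the example): each range a device
model registers costs an allocation in the hypervisor, so the
insertion path has to refuse once a per-ioreq-server limit is
reached - otherwise a misbehaving or malicious device model could
grow that memory without bound.

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

struct wp_range {
    uint64_t start, end;
    struct wp_range *next;
};

struct range_tracker {
    struct wp_range *head;
    unsigned int nr_ranges;
    unsigned int max_ranges;   /* e.g. 256 today, 8192 in the vGPU case */
};

static int track_range(struct range_tracker *t, uint64_t start, uint64_t end)
{
    struct wp_range *r;

    if (t->nr_ranges >= t->max_ranges)
        return -ENOSPC;        /* refuse rather than let the device model
                                  consume hypervisor memory without bound */

    r = malloc(sizeof(*r));    /* stand-in for the hypervisor's allocator */
    if (!r)
        return -ENOMEM;

    r->start = start;
    r->end   = end;
    r->next  = t->head;
    t->head  = r;
    t->nr_ranges++;

    return 0;
}

The whole discussion is then about what max_ranges may sensibly be,
and who gets to choose it.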

Jan

