
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxx]
> Sent: Friday, February 05, 2016 1:12 AM
> 
> On 04/02/16 14:08, Jan Beulich wrote:
> >>>> On 04.02.16 at 14:33, <Ian.Jackson@xxxxxxxxxxxxx> wrote:
> >> Jan Beulich writes ("Re: [Xen-devel] [PATCH v3 3/3] tools: introduce 
> >> parameter
> >> max_wp_ram_ranges."):
> >>> On 04.02.16 at 10:38, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> >>>> So another question is, if value of this limit really matters, will a
> >>>> lower one be more acceptable(the current 256 being not enough)?
> >>>
> >>> If you've carefully read George's replies, [...]
> >>
> >> Thanks to George for the very clear explanation, and also to him for
> >> an illuminating in-person discussion.
> >>
> >> It is disturbing that as a result of me as a tools maintainer asking
> >> questions about what seems to me to be a troublesome user-visible
> >> control setting in libxl, we are now apparently revisiting lower
> >> layers of the hypervisor design, which have already been committed.
> >>
> >> While I find George's line of argument convincing, neither I nor
> >> George are maintainers of the relevant hypervisor code.  I am not
> >> going to insist that anything in the hypervisor is done differently and
> >> am not trying to use my tools maintainer position to that end.
> >>
> >> Clearly there has been a failure of our workflow to consider and
> >> review everything properly together.  But given where we are now, I
> >> think that this discussion about hypervisor internals is probably a
> >> distraction.
> >
> > While I recall George having made that alternative suggestion,
> > both Yu and Paul having reservations against it made me not
> > insist on that alternative. Instead I've been trying to limit some
> > of the bad effects that the variant originally proposed brought
> > with it. Clearly, with the more detailed reply George has now
> > given (involving areas he is the maintainer for), I should
> > have been more demanding towards the exploration of that
> > alternative. That's clearly unfortunate, and I apologize for that,
> > but such things happen.
> >
> > As to one of the patches already having been committed - I'm not
> > worried about that at all. We can always revert, that's why the
> > thing is called "unstable".
> 
> It looks like I should have been more careful to catch up on the current
> state of things before I started arguing again -- please accept my
> apologies.

Thanks George for your careful thinking.

> 
> I see that patch 2/3 addresses the gpfn/io question in the commit
> message by saying, "Previously, a new hypercall or subop was suggested
> to map write-protected pages into ioreq server. However, it turned out
> handler of this new hypercall would be almost the same with the existing
> pair - HVMOP_[un]map_io_range_to_ioreq_server, and there's already a
> type parameter in this hypercall. So no new hypercall defined, only a
> new type is introduced."
> 
> And I see that 2/3 internally separates the WP_RAM type into a separate
> rangeset, whose size can be adjusted separately.
> 
> This addresses my complaint about the interface using gpfns rather than
> MMIO ranges as an interface (somewhat anyway).  Sorry for not
> acknowledging this at first.
> 
> The question of the internal implementation -- whether to use RB tree
> rangesets, or radix trees (as apparently ARM memaccess does) or p2m
> types -- is an internal implementation question.  I think p2m types is
> long-term the best way to go, but it won't hurt to have the current
> implementation checked in, as long as it doesn't have any impacts on the
> stable interface.

I'm still trying to understand your suggestion vs. this one. Today we
already have a p2m_mmio_write_dm type, and any write fault hitting that
type is delivered to an ioreq server. The next open question is how an
ioreq server knows whether it should handle a given request, which is
why the tracking structures (RB-tree rangesets or a radix tree) are
created to maintain that information. This is under the assumption that
multiple ioreq servers co-exist, so a loop over all ioreq servers is
required to identify the right target. Multiple ioreq servers are a real
case in XenGT, because our vGPU device model lives in the kernel, as
part of the Intel i915 graphics driver. So at least two ioreq servers
already exist: one routing to XenGT in Dom0 kernel space and the other
to the default Qemu in Dom0 user space.
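
Roughly along these lines, just to illustrate what I mean (similar in
spirit to the existing hvm_select_ioreq_server() loop; the
'wp_ram_ranges' field and the helper name are made up for the example):

    /*
     * Illustrative sketch only, not the actual Xen code: route a
     * write_dm fault when each ioreq server keeps its own rangeset of
     * write-protected gfns.
     */
    static struct hvm_ioreq_server *find_wp_ram_server(struct domain *d,
                                                       unsigned long gfn)
    {
        struct hvm_ioreq_server *s;

        /* Loop over every registered ioreq server of the domain ... */
        list_for_each_entry ( s, &d->arch.hvm_domain.ioreq_server.list,
                              list_entry )
        {
            /* ... and pick the one whose tracked gfns cover this fault. */
            if ( rangeset_contains_singleton(s->wp_ram_ranges, gfn) )
                return s;
        }

        return NULL; /* no ioreq server claimed this gfn */
    }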

In your long-term approach with p2m types, it looks like you are
proposing to encode the ioreq server ID directly in the p2m type (e.g.
4 bits), which eliminates the need for tracking on the ioreq server
side, so the whole security concern is gone, and there is no limitation
at all. Because the available p2m bits are limited, as Andrew pointed
out, it probably only becomes reasonable to implement this approach once
a new p2t structure is added, which is why we consider it a long-term
approach.

Please correct me if the above understanding is wrong.
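
Something like the following is what I have in mind (purely a sketch of
my reading of your suggestion; the field width and the p2t entry layout
are made-up placeholders):

    /*
     * Illustrative sketch only: if a few bits of a (future) p2t entry
     * named the owning ioreq server, the fault handler could go straight
     * to the right server without any rangeset lookup.
     */
    #define WP_RAM_SERVER_BITS  4
    #define WP_RAM_SERVER_MASK  ((1u << WP_RAM_SERVER_BITS) - 1)

    static inline unsigned int wp_ram_server_id(unsigned int p2t_entry)
    {
        /* Low bits of the hypothetical p2t entry identify the server. */
        return p2t_entry & WP_RAM_SERVER_MASK;
    }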

> 
> At the moment, as far as I can tell, there's no way for libxl to even
> run a version of qemu with XenGT enabled, so there's no real need for
> libxl to be involved.

There is no way yet because we haven't upstreamed all the toolstack
changes, but we should still discuss the requirement as we've been doing
in this thread, right? Or do you mean something different?

> 
> The purpose of having the limit would putatively be to prevent a guest
> being able to trigger an exhaustion of hypervisor memory by inducing the
> device model to mark an arbitrary number of ranges as mmio_dm.
> 
> Two angles on this.
> 
> First, assuming that limiting the number of ranges is what we want:  I'm
> not really a fan of using HVM_PARAMs for this, but as long as it's not
> considered a public interface (i.e., it could go away or disappear and
> everything would Just Work), then I wouldn't object.
> 
> Although I would ask: would it instead be suitable for now to just set
> the default limit for WP_RAM to 8196 in the hypervisor, since we do
> expect it to be tracking gpfn ranges rather than IO regions?  And if we
> determine in the future that more ranges are necessary, to then do the
> work of moving it to using p2m types (or exposing a knob to adjust it)?
> 
> But (and this the other angle): is simply marking a numerical limit
> sufficient to avoid memory exhaustion? Is there a danger that after
> creating several guests, such that Xen was now running very low on
> memory, that a guest would (purposely or not) cause memory to be
> exhausted sometime further after boot, causing a system-wide DoS (or
> just general lack of stability)?
> 
> In the shadow / hap memory case, the memory is pre-allocated up front,
> which makes sure that nothing a guest does can cause Xen to run out of
> memory once it's booted.  Without pre-allocating it, it's still possible
> that the admin might start up enough VMs that exhaustion is *possible*,
> but won't be triggered until later (when the guest starts using more GTTs).

That's a valid concern, though I believe it is not cleanly addressed in
many places. :-)

> 
> Although in fact this really points to the need for a general overhaul
> in how memory allocation on behalf of a domain is handled in general;
> that's a bigger chunk of work.
> 
> But in any case, it seems to me that we can entirely avoid the question
> of how many ranges might ever be necessary by starting with a fixed
> limit in the hypervisor, and then moving to a p2m-type based
> implementation if and when that becomes unsatisfactory.
> 

I agree with this approach. Let's see whether we can get consensus from
others so we can reach a conclusion.
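
Just to make sure I read the "fixed limit in the hypervisor" idea
correctly, something along these lines (purely illustrative; the macro,
the counter field and the exact figure are placeholders):

    /*
     * Illustrative sketch only: refuse to track more WP_RAM ranges than
     * a fixed, hypervisor-chosen limit, instead of exposing a tunable.
     */
    #define MAX_NR_WP_RAM_RANGES 8192 /* or whatever figure we settle on */

    static int add_wp_ram_range(struct hvm_ioreq_server *s,
                                unsigned long start, unsigned long end)
    {
        int rc;

        /* Bound the per-server tracking structure up front. */
        if ( s->nr_wp_ram_ranges >= MAX_NR_WP_RAM_RANGES )
            return -ENOSPC; /* refuse rather than let Xen memory grow */

        rc = rangeset_add_range(s->wp_ram_ranges, start, end);
        if ( rc == 0 )
            s->nr_wp_ram_ranges++;

        return rc;
    }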

Actually, another quick thought. Please help check whether it makes
sense or is just more tricky.

Could we have an intermediate option toward the p2m-type based one, by
assuming that only one ioreq server can handle write_dm related faults?
If that's the case:

- we don't need to add more bits to the p2m type;
- XenGT can register as the default ioreq server for write_dm;
- when a write_dm fault is triggered, we loop over all ioreq servers to
find the default one and then deliver through that path w/o further
checks;
- we may not need to change the ioreq server interface at all: assume
the ioreq server which receives the 1st write-protection request from
the device model is the default one.

It is not very clean, but the code change is minimal (a rough sketch
follows below). It is also easily extensible to support multiple ioreq
servers with write_dm requirements in the future.
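
Roughly (again purely illustrative; the 'default_wp_ram_server' field is
an assumption, not existing code):

    /*
     * Illustrative sketch only: remember the first ioreq server that
     * registers a write-protected range, and send every later write_dm
     * fault straight to it, with no per-gfn lookup.
     */
    static void note_wp_ram_registration(struct domain *d,
                                         struct hvm_ioreq_server *s)
    {
        if ( d->arch.hvm_domain.default_wp_ram_server == NULL )
            d->arch.hvm_domain.default_wp_ram_server = s;
    }

    static struct hvm_ioreq_server *wp_ram_target(struct domain *d)
    {
        /* All write_dm faults are delivered to the default server. */
        return d->arch.hvm_domain.default_wp_ram_server;
    }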

But I haven't thought through all the corner cases carefully. Comments welcome. :-)

Thanks
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

