
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



>>> On 04.02.16 at 18:12, <george.dunlap@xxxxxxxxxx> wrote:
> Two angles on this.
> 
> First, assuming that limiting the number of ranges is what we want:  I'm
> not really a fan of using HVM_PARAMs for this, but as long as it's not
> considered a public interface (i.e., it could go away or disappear and
> everything would Just Work), then I wouldn't object.

The parameter setting interface is generally guest exposed; it's just
that many parameters can't be set by guests for themselves. Of
course the new #define could be put inside a Xen/tools conditional.
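
For illustration only, such a conditional might look like this in the
public header (the parameter name and number below are placeholders,
not taken from the actual patch):

    #if defined(__XEN__) || defined(__XEN_TOOLS__)
    /* Max number of write-protected RAM ranges an ioreq server may track. */
    #define HVM_PARAM_MAX_WP_RAM_RANGES 31   /* placeholder number */
    #endif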

> Although I would ask: would it instead be suitable for now to just set
> the default limit for WP_RAM to 8196 in the hypervisor, since we do
> expect it to be tracking gpfn ranges rather than IO regions?  And if we
> determine in the future that more ranges are necessary, to then do the
> work of moving it to using p2m types (or exposing a knob to adjust it)?

That's what collides with disaggregation (as pointed out before):
any tool stack component could then create wp-ram pages in guests
it controls, whether or not those guests actually need them. I
continue to think that if we do go this route, the host admin
should be required to explicitly state that the risks are
acceptable (by allowing only certain guests to actually create
[many] such pages).
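
As a rough sketch of what such an explicit admin statement could
translate into (the type, field names, and limit below are made up
for illustration, not taken from the patch or the Xen tree):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define ORDINARY_RANGE_LIMIT 256         /* placeholder default limit */

    struct illustrative_domain {
        bool wp_ram_opt_in;                  /* set only by the host admin */
        uint64_t max_wp_ram_ranges;
    };

    /* Refuse a large per-domain limit unless the admin accepted the risk. */
    static int set_max_wp_ram_ranges(struct illustrative_domain *d,
                                     uint64_t val)
    {
        if ( val > ORDINARY_RANGE_LIMIT && !d->wp_ram_opt_in )
            return -EPERM;

        d->max_wp_ram_ranges = val;
        return 0;
    }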

> But (and this the other angle): is simply marking a numerical limit
> sufficient to avoid memory exhaustion? Is there a danger that after
> creating several guests, such that Xen was now running very low on
> memory, that a guest would (purposely or not) cause memory to be
> exhausted sometime further after boot, causing a system-wide DoS (or
> just general lack of stability)?

The guest itself can't, but tool stack components other than fully
privileged ones could, and that's still something we need to avoid.

> In the shadow / hap memory case, the memory is pre-allocated up front,
> which makes sure that nothing a guest does can cause Xen to run out of
> memory once it's booted.  Without pre-allocating it, it's still possible
> that the admin might start up enough VMs that exhaustion is *possible*,
> but won't be triggered until later (when the guest starts using more GTTs).
> 
> Although in fact this really points to the need for a general overhaul
> in how memory allocation on behalf of a domain is handled in general;
> that's a bigger chunk of work.

Well, right now there's very little allocation happening at run time
afaict, i.e. having a couple of pages available will generally keep
things going smoothly. (Once upon a time there were - iirc - no
runtime allocations at all, i.e. allocations occurred only in the
context of things like domctls, where failure could be dealt with
by ballooning a few pages out of some domain and retrying. I'm
afraid that's no longer the case, and even if it were, it would
collide with disaggregation, as a tool stack component legitimately
issuing a domctl may not have the privilege to balloon any domain
other than the one it controls, in particular not Dom0.)
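
Purely as an illustration of that old pattern (the helpers below are
hypothetical wrappers, not actual Xen or libxl functions): a
sufficiently privileged tool stack could react to a transient
-ENOMEM from a domctl roughly like this:

    #include <errno.h>
    #include <stdint.h>

    int issue_domctl(uint32_t domid);                       /* hypothetical */
    int balloon_out_pages(uint32_t domid, unsigned int nr); /* hypothetical */

    static int domctl_with_retry(uint32_t target, uint32_t balloon_victim)
    {
        for ( unsigned int attempt = 0; attempt < 3; attempt++ )
        {
            int rc = issue_domctl(target);

            if ( rc != -ENOMEM )
                return rc;           /* success, or a non-memory failure */

            /* Free up a handful of pages and try again. */
            rc = balloon_out_pages(balloon_victim, 4);
            if ( rc )
                return rc;
        }

        return -ENOMEM;
    }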

> But in any case, it seems to me that we can entirely avoid the question
> of how many ranges might ever be necessary by starting with a fixed
> limit in the hypervisor, and then moving to a p2m-type based
> implementation if and when that becomes unsatisfactory.

With all of the above I think we should rather explore the p2m-type
based approach, in particular Kevin's suggestion to direct all write
accesses to p2m_mmio_write_dm pages (a name which appears to have
been badly chosen, considering that we're talking about RAM pages
here) to the default ioreq server, taking advantage of the fact that
upstream qemu doesn't itself register as such.
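
A minimal sketch of what that routing amounts to, using stand-in
types (the real decision would of course live in the hypervisor's
ioreq server selection logic, not in a standalone function like
this):

    #include <stddef.h>

    /* Stand-in declarations for this sketch only, not Xen's real ones. */
    enum p2m_type { p2m_ram_rw, p2m_mmio_dm, p2m_mmio_write_dm };

    struct ioreq_server;                     /* opaque here */

    struct illustrative_domain {
        struct ioreq_server *default_server; /* e.g. the GVT-g device model */
        /* ... secondary servers, each with MMIO/PIO rangesets ... */
    };

    static struct ioreq_server *
    select_server_for_write(const struct illustrative_domain *d,
                            enum p2m_type t)
    {
        /*
         * The point of the proposal: a write-protected RAM page is
         * identified by its p2m type alone, so no per-page rangeset
         * tracking (and hence no range limit) is needed -- the write
         * simply goes to the default ioreq server, which upstream qemu
         * doesn't register as.
         */
        if ( t == p2m_mmio_write_dm )
            return d->default_server;

        return NULL; /* other accesses still get matched against rangesets */
    }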

Jan

