
Re: [Xen-devel] [PATCH 1/6] mmu: Introduce XENMEM_claim_pages (subop of memory ops).



On Wed, Mar 06, 2013 at 09:07:46AM +0000, Tim Deegan wrote:
> Hi, 
> 
> At 16:43 -0500 on 05 Mar (1362501800), Konrad Rzeszutek Wilk wrote:
> > > The 'normal' restriction isn't actually enforced except at claim time.
> > > Since the gate in alloc_heap_pages is effectively:
> > 
> > The big thing is *might*. I put this in the code path to explain better:
> > 
> >   /* note: The usage of tmem claim means that allocation from a guest *might*
> > +     * have to come from freeable memory. Using free memory is always better, if
> > +     * it is available, than using freeable memory. This flag is for the use
> > +     * case where the toolstack cannot be constantly aware of the exact current
> > +     * value of free and/or freeable on each machine and is multi-machine
> > +     * capable. It can try/fail a "normal" claim on all machines first and,
> > +     * if the normal claim fails on all machines, then "fall back" to a
> > +     * tmem-flag type claim.
> 
> Oh I see.  That's pretty strange semantics for a 'claim', though.
> Wouldn't it make sense for the toolstack just to query free and claimed
> memory on the first pass and fail if there's not enough space?

So do something like this:

        if ( dom->claim_enabled ) {
                unsigned long outstanding =
                        xc_domain_get_outstanding_pages(dom->xch);
                xc_physinfo_t xcphysinfo = { 0 };
                int flag = XENMEMF_claim_normal;
        
                rc = xc_physinfo(dom->xch, &xcphysinfo);

                /* Not enough truly-free memory once existing claims are
                 * accounted for, so fall back to a tmem-type claim. */
                if (xcphysinfo.free_pages < dom->total_pages + outstanding)
                        flag = XENMEMF_claim_tmem;

                rc = xc_domain_claim_pages(dom->xch, dom->guest_domid,
                                           dom->total_pages, flag);
        }

(Ignoring the checks for 'rc' and bailing out as necessary.)

> The race between that query and the claim call (i.e. enough
> actually-free space at query time but only freeable memory at claim
> time) isn't materially different from the claim succeeding and then tmem
> consuming the memory.  And a race between multiple claims can be
> trivially avoided with a mutex in toolstack.

The claim call is located in the libxc toolstack (that seemed the most
appropriate place, as it deals with the populate calls). There are no locking
semantics at all in that library - that is something the higher-level
libraries, such as libxl, have. libxl has the lock, but it would be a coarse
lock, as it would have to be taken around:

        xc_hvm_build and xc_dom_mem_init

which are the libxc calls that also call populate_physmap. So in essence
we would be back to part of the original problem - trying to launch
multiple guests in parallel and stalling for minutes while one guest is
being populated (as both of these would do the claim and also call
populate_physmap in a loop).
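
To illustrate, the coarse-lock approach would amount to something like this
(illustrative only - the lock and the wrapper below are made-up names, not
existing libxl code):

        #include <pthread.h>
        #include <xenctrl.h>

        /* Hypothetical toolstack-wide lock around the whole build. */
        static pthread_mutex_t build_lock = PTHREAD_MUTEX_INITIALIZER;

        /* The claim *and* the populate_physmap loop both run under the lock
         * (the 'build' callback stands in for xc_hvm_build / xc_dom_mem_init),
         * so a second guest cannot even stake its claim until the first one
         * is fully populated - potentially minutes. */
        static int build_serialised(xc_interface *xch, uint32_t domid,
                                    int (*build)(xc_interface *, uint32_t))
        {
                int rc;

                pthread_mutex_lock(&build_lock);
                rc = build(xch, domid);
                pthread_mutex_unlock(&build_lock);
                return rc;
        }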

The other idea I had was to stick the claim call right outside those two
calls:

        take lock
        xc_domain_claim_pages(...)
        release lock
        xc_dom_mem_init() (or xc_hvm_build)
        xc_domain_claim_pages(..., XENMEM_claim_cancel)

which works great for PV, but not so well for HVM, as there is a bunch
of PoD code in there that subtracts from how many pages we need
(referring to setup_guest() in xc_hvm_build_x86.c).
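
For the PV case, the ordering above could look roughly like this (again
illustrative only - the mutex and the 'build' callback are made up, and the
four-argument xc_domain_claim_pages() plus the claim flag names follow this
patch series):

        #include <pthread.h>
        #include <xenctrl.h>

        /* Hypothetical toolstack-wide lock - only the claim itself is
         * serialised, not the (long) population of the guest's memory. */
        static pthread_mutex_t claim_lock = PTHREAD_MUTEX_INITIALIZER;

        static int claim_then_build(xc_interface *xch, uint32_t domid,
                                    unsigned long nr_pages,
                                    int (*build)(xc_interface *, uint32_t))
        {
                int rc;

                /* Stake the claim up front, under the lock. */
                pthread_mutex_lock(&claim_lock);
                rc = xc_domain_claim_pages(xch, domid, nr_pages,
                                           XENMEMF_claim_normal);
                pthread_mutex_unlock(&claim_lock);
                if (rc)
                        return rc;      /* not enough free memory here */

                /* The populate_physmap loop (xc_dom_mem_init for PV);
                 * may take minutes, but runs outside the lock. */
                rc = build(xch, domid);

                /* Drop whatever is left of the claim, success or not. */
                xc_domain_claim_pages(xch, domid, 0, XENMEM_claim_cancel);
                return rc;
        }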

Unfortunately it looks a bit odd - the PV code (libxl__build_pv) is all
about:
        xc_dom_something
        xc_dom_something_other
        --> xc_domain_claim_pages <--
        xc_dom_something_other
        --> xc_domain_claim_pages <--
        xc_dom_something_other

and then this oddity of xc_domain_claim_pages stuck in between.
At which point it looks more natural to have it inside the
xc_dom_something calls - especially as the libxc API looks to be
structured so that the xc_dom_* functions are the top level, which
then call a variety of xc_domain_* functions as needed.


> 
> > +     * The memory claimed by a normal claim isn't enforced against "freeable
> > +     * pages" once the claim is successful. That's by design. Once the claim
> > +     * has been made, it still can take minutes before the claim is fully
> > +     * satisfied. Tmem can make use of the unclaimed pages during this time
> > +     * (to store ephemeral/freeable pages only, not persistent pages).
> > +     */
> > 
> > Here is the updated patch (with the long git commit description removed).
> 
> Thanks.  Could you also replace the other uses of "unclaimed" (in
> interfaces & comments)?

Naturally. Let me get that out to you tomorrow.
> 
> Cheers,
> 
> Tim.
