
Re: [PATCH] xen/page_alloc: Simplify domain_adjust_tot_pages



On Wed Feb 26, 2025 at 2:28 PM GMT, Roger Pau Monné wrote:
> On Wed, Feb 26, 2025 at 03:08:33PM +0100, Jan Beulich wrote:
> > On 26.02.2025 14:56, Roger Pau Monné wrote:
> > > On Mon, Feb 24, 2025 at 01:27:24PM +0000, Alejandro Vallejo wrote:
> > >> --- a/xen/common/page_alloc.c
> > >> +++ b/xen/common/page_alloc.c
> > >> @@ -490,13 +490,11 @@ static long outstanding_claims; /* total outstanding claims by all domains */
> > >>  
> > >>  unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
> > >>  {
> > >> -    long dom_before, dom_after, dom_claimed, sys_before, sys_after;
> > >> -
> > >>      ASSERT(rspin_is_locked(&d->page_alloc_lock));
> > >>      d->tot_pages += pages;
> > >>  
> > >>      /*
> > >> -     * can test d->claimed_pages race-free because it can only change
> > >> +     * can test d->outstanding_pages race-free because it can only change
> > >>       * if d->page_alloc_lock and heap_lock are both held, see also
> > >>       * domain_set_outstanding_pages below
> > >>       */
> > >> @@ -504,17 +502,16 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
> > >>          goto out;
> > > 
> > > I think you can probably short-circuit the logic below if pages == 0?
> > > (and avoid taking the heap_lock)
> > 
> > Are there callers passing in 0?
>
> Not sure, but if there are no callers expected we might add an ASSERT
> to that effect then.
>
> > >>      spin_lock(&heap_lock);
> > >> -    /* adjust domain outstanding pages; may not go negative */
> > >> -    dom_before = d->outstanding_pages;
> > >> -    dom_after = dom_before - pages;
> > >> -    BUG_ON(dom_before < 0);
> > >> -    dom_claimed = dom_after < 0 ? 0 : dom_after;
> > >> -    d->outstanding_pages = dom_claimed;
> > >> -    /* flag accounting bug if system outstanding_claims would go negative */
> > >> -    sys_before = outstanding_claims;
> > >> -    sys_after = sys_before - (dom_before - dom_claimed);
> > >> -    BUG_ON(sys_after < 0);
> > >> -    outstanding_claims = sys_after;
> > >> +    BUG_ON(outstanding_claims < d->outstanding_pages);
> > >> +    if ( pages > 0 && d->outstanding_pages < pages )
> > >> +    {
> > >> +        /* `pages` exceeds the domain's outstanding count. Zero it out. */
> > >> +        outstanding_claims -= d->outstanding_pages;
> > >> +        d->outstanding_pages = 0;
> > >> +    } else {
> > >> +        outstanding_claims -= pages;
> > >> +        d->outstanding_pages -= pages;
> > > 
> > > I wonder if it's intentional for a pages < 0 value to modify
> > > outstanding_claims and d->outstanding_pages, I think those values
> > > should only be set from domain_set_outstanding_pages().
> > > domain_adjust_tot_pages() should only decrease the value, but never
> > > increase either outstanding_claims or d->outstanding_pages.
> > > 
> > > At best the behavior is inconsistent, because once
> > > d->outstanding_pages reaches 0 there will be no further modification
> > > from domain_adjust_tot_pages().
> > 
> > Right, at that point the claim has run out. While freeing pages with an
> > active claim means that the claim gets bigger (which naturally needs
> > reflecting in the global).
>
> domain_adjust_tot_pages() is not exclusively called when freeing
> pages, see steal_page() for example.
>
> When called from steal_page() it's wrong to increase the claim, as
> it assumes that the page removed from d->tot_pages is freed, but
> that's not the case.  The domain might end up in a situation where
> the claim is bigger than the available amount of memory.
>
> Thanks, Roger.

This is what I meant in my initial reply when I questioned the logic itself.

The interaction with memory_exchange is already quite dubious, and it makes
very little sense in the tentative code I have for per-node claims.
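
To make the steal_page() concern concrete (made-up numbers): say a domain still
has 100 pages of its claim outstanding. A steal_page() call drops tot_pages by
1 without returning anything to the heap, yet with the logic as it stands (and
as kept by the patch) d->outstanding_pages grows to 101 and outstanding_claims
with it, so the claim can end up bigger than the memory actually available.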

I'd be quite happy to put an early exit before the spin_lock for pages <= 0,
along the lines of the sketch below. That also covers your initial comment and
prevents a claim from growing back after a domain has started running, if it
didn't happen to consume all of it.
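
Untested sketch of what I mean, keeping the naming and structure of the patch
above (the tail of the function, returning d->tot_pages, assumed unchanged):

unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
{
    ASSERT(rspin_is_locked(&d->page_alloc_lock));
    d->tot_pages += pages;

    /*
     * Claims only ever shrink here: freeing (pages < 0) must not grow
     * them back and pages == 0 is a no-op, so skip the heap_lock for
     * both.  Testing d->outstanding_pages is race-free because it can
     * only change with d->page_alloc_lock and heap_lock both held.
     */
    if ( pages <= 0 || !d->outstanding_pages )
        goto out;

    spin_lock(&heap_lock);
    BUG_ON(outstanding_claims < d->outstanding_pages);
    if ( d->outstanding_pages < pages )
    {
        /* `pages` exceeds the domain's outstanding count.  Zero it out. */
        outstanding_claims -= d->outstanding_pages;
        d->outstanding_pages = 0;
    }
    else
    {
        outstanding_claims -= pages;
        d->outstanding_pages -= pages;
    }
    spin_unlock(&heap_lock);

 out:
    return d->tot_pages;
}

With that, callers passing pages <= 0 (plain freeing, steal_page(), etc.) never
touch the claim and never take the heap_lock.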

Is anyone opposed?

Cheers,
Alejandro
