Re: [Xen-devel] [PATCH v5 3/6] x86 / pv: do not treat PGC_extra pages as RAM

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 10 March 2020 14:59
> To: paul@xxxxxxx
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; 'Paul Durrant' <pdurrant@xxxxxxxxxx>; 
> 'Andrew Cooper'
> <andrew.cooper3@xxxxxxxxxx>; 'Wei Liu' <wl@xxxxxxx>; 'Roger Pau Monné' 
> <roger.pau@xxxxxxxxxx>
> Subject: Re: [PATCH v5 3/6] x86 / pv: do not treat PGC_extra pages as RAM
> On 10.03.2020 14:32, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@xxxxxxxx>
> >> Sent: 09 March 2020 13:04
> >> To: paul@xxxxxxx
> >> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Paul Durrant <pdurrant@xxxxxxxxxx>; 
> >> Andrew Cooper
> >> <andrew.cooper3@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>; Roger Pau Monné 
> >> <roger.pau@xxxxxxxxxx>
> >> Subject: Re: [PATCH v5 3/6] x86 / pv: do not treat PGC_extra pages as RAM
> >>
> >> On 09.03.2020 11:23, paul@xxxxxxx wrote:
> >>> From: Paul Durrant <pdurrant@xxxxxxxxxx>
> >>>
> >>> This patch modifies several places walking the domain's page_list to make
> >>> them ignore PGC_extra pages:
> >>>
> >>> - dump_pageframe_info() should ignore PGC_extra pages in its dump as it
> >>>   determines whether to dump using domain_tot_pages() which also ignores
> >>>   PGC_extra pages.
> >>
> >> This argument looks wrong to me: Let's take an example - a domain
> >> almost fully cleaned up, with 8 "normal" and 3 "extra" pages left.
> >> domain_tot_pages() returns 8 in this case, i.e. "normal" page
> >> dumping doesn't get skipped. However, there now won't be any trace
> >> of the "extra" pages, because they're also not on xenpage_list,
> >> which gets all its pages dumped in all cases. Correct restoration
> >> of original behavior would be to dump "normal" pages when there
> >> are fewer than 10, and to dump all "extra" pages. (Same of course
> >> goes for live domains, where "normal" page dumping would be
> >> skipped in the common case, but xenheap pages would be dumped, and
> >> hence so should be "extra" ones.) As indicated before, the removal
> >> of the APIC assist page from xenpage_list was already slightly
> >> regressing in this regard (as well as in at least one other way,
> >> I'm afraid), and you're now deliberately making the regression
> >> even bigger.
> >
> > I thought the idea here was that the domheap dump loop should be
> > dumping 'normal' pages, so it seems reasonable to me for the number
> > of pages dumped to match the value returned by domain_tot_pages().
> I never thought of such a connection. To me the invocation of
> domain_tot_pages() there is there only to avoid overly much output.
> > Would you perhaps be happier if we put 'extra' pages on a separate
> > page list, which could be unconditionally dumped, so that, as I
> > transition xenheap pages to 'extra' pages, they don't get missed?
> > It would also get rid of some of the other checks for PGC_extra
> > that I have to introduce because those pages currently end up on
> > the domain's page list.
> Hmm, wasn't it an intended side effect to have all pages on one
> list now?

That would be nice, but I cannot reconcile it with unconditionally dumping 
all 'extra' pages: dump_pageframe_info() would then always have to walk the 
entire page_list to find them, no matter how long it was.
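To illustrate the concern, here is a rough standalone sketch (simplified stand-in types and flag value, not the actual Xen structures) of the dump-loop shape under discussion, where 'extra' pages get skipped in the main page_list walk to mirror domain_tot_pages():

```c
/* Rough sketch only: PGC_extra and struct page_info here are
 * simplified stand-ins for the real Xen definitions. */
#include <assert.h>
#include <stddef.h>

#define PGC_extra (1u << 0)       /* stand-in for the real page flag */

struct page_info {
    unsigned int count_info;
    struct page_info *next;       /* simplified page_list linkage */
};

/* Count the "normal" (non-extra) pages a dump loop would visit. */
static unsigned int dump_normal_pages(const struct page_info *page_list)
{
    unsigned int dumped = 0;
    const struct page_info *pg;

    for ( pg = page_list; pg; pg = pg->next )
    {
        if ( pg->count_info & PGC_extra )
            continue;             /* extra pages are skipped here ... */
        dumped++;                 /* ... so must be dumped elsewhere */
    }
    return dumped;
}
```

The point being that any 'extra' page skipped by this loop is invisible unless some other loop picks it up.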

> Introducing a 3rd list (even if just temporarily, until
> xenpage_list can be dropped) will be somewhat ugly because of how
> arch_free_heap_page() works.

Yes, but it would at least be temporary.

> In reply to patch 6 I did suggest to
> have a separate list, but without taking these pages off
> d->page_list,

How would that work without adding an extra page_list_entry into struct 
page_info?

> such that here you would skip them in the main
> domain page dumping loop, but you would then traverse that second
> list and dump all of its elements, just like xenpage_list gets
> handled there.

Well, that's what I'm trying to achieve, yes.
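For the record, the two-loop shape I have in mind would be something like this (again a hedged sketch with hypothetical names and simplified types; the second link field is exactly the cost being questioned above):

```c
/* Sketch only: stand-in types, not the real Xen structures. The
 * second link field (extra_next) is hypothetical, standing in for
 * the extra page_list_entry a second list would require. */
#include <assert.h>
#include <stddef.h>

#define PGC_extra (1u << 0)            /* stand-in flag value */

struct page_info {
    unsigned int count_info;
    struct page_info *next;            /* d->page_list linkage */
    struct page_info *extra_next;      /* hypothetical 2nd linkage */
};

struct domain_lists {
    struct page_info *page_list;       /* all pages, extra included */
    struct page_info *extra_page_list; /* hypothetical second list */
};

/* Main loop skips PGC_extra pages (mirroring domain_tot_pages());
 * the second loop dumps every extra page unconditionally, the way
 * xenpage_list is handled today. Returns total pages "dumped". */
static unsigned int dump_pages(const struct domain_lists *d)
{
    unsigned int dumped = 0;
    const struct page_info *pg;

    for ( pg = d->page_list; pg; pg = pg->next )
        if ( !(pg->count_info & PGC_extra) )
            dumped++;                  /* normal page dumped here */

    for ( pg = d->extra_page_list; pg; pg = pg->extra_next )
        dumped++;                      /* extra page always dumped */

    return dumped;
}
```

That way no page is ever missed, at the cost of the second linkage field while xenpage_list still exists.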


Xen-devel mailing list