
Re: [Xen-devel] [PATCH v3 3/6] x86 / pv: do not treat PGC_extra pages as RAM when constructing dom0



> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Jan Beulich
> Sent: 06 March 2020 11:56
> To: pdurrant@xxxxxxxx
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Durrant, Paul <pdurrant@xxxxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH v3 3/6] x86 / pv: do not treat PGC_extra pages as RAM when constructing dom0
> 
> On 05.03.2020 13:45, pdurrant@xxxxxxxx wrote:
> > --- a/xen/arch/x86/pv/dom0_build.c
> > +++ b/xen/arch/x86/pv/dom0_build.c
> > @@ -792,6 +792,10 @@ int __init dom0_construct_pv(struct domain *d,
> >      {
> >          mfn = mfn_x(page_to_mfn(page));
> >          BUG_ON(SHARED_M2P(get_gpfn_from_mfn(mfn)));
> > +
> > +        if ( page->count_info & PGC_extra )
> > +            continue;
> 
> This surely is a pattern, i.e. there are more similar changes to
> make: tboot_gen_domain_integrity() e.g. ignores d->xenpage_list,
> and hence with the goal of converting the shared info page would
> also want adjustment. For dump_numa() it may be less important,
> but it would still look more correct if it too got changed.
> audit_p2m() might apparently complain about such pages (and
> hence there might be a problem with the one PGC_extra page VMX
> domains now have). And this is only from me looking at
> page_list_for_each(..., &d->page_list) constructs; who knows
> what else there is.
> 
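
For illustration, the pattern in question is an early skip in a page-list
walk. A minimal sketch of what e.g. the dump_numa() adjustment might look
like, assuming its loop has the usual page_list_for_each() shape (this is
a sketch, not the actual patch):

    page_list_for_each ( page, &d->page_list )
    {
        /* PGC_extra pages are not RAM; skip them, as in the dom0
         * builder hunk quoted above. */
        if ( page->count_info & PGC_extra )
            continue;

        /* ... existing per-page handling ... */
    }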

Those are dealt with by the is_special_page() patch later in the series, I 
think. It didn't seem appropriate to use that helper here, though, since we 
know pages on the page list cannot be xenheap pages.
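
For reference, a plausible sketch of that helper (an assumption about the
later patch, not its exact definition):

    /* Sketch only; the actual series may define this differently. */
    static inline bool is_special_page(const struct page_info *page)
    {
        return is_xen_heap_page(page) || (page->count_info & PGC_extra);
    }

On d->page_list the is_xen_heap_page() half can never be true, so the bare
PGC_extra test in the hunk above is sufficient there.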

  Paul