
Re: [Xen-devel] [PATCH 2/2] domain: use PGC_extra domheap page for shared_info

On Fri, 2020-03-06 at 12:37 +0100, Jan Beulich wrote:
> > For live update we need to give a region of memory to the new Xen which
> > it can use for its boot allocator, before it's handled any of the live
> > update records and before it knows which *other* memory is still
> > available for use.
> > 
> > In order to do that, the original Xen has to ensure that it *doesn't*
> > use any of that memory region for domain-owned pages which would need
> > to be preserved.
> > 
> > So far in the patches I've posted upstream I have cheated, and simply
> > *not* added them to the main heap. Anything allocated before
> > end_boot_allocator() is fine because it is "ephemeral" to the first Xen
> > and doesn't need to be preserved (it's mostly frame tables and a few
> > PTE pages).
> > 
> > Paul's work is making it possible to use those pages as xenheap pages,
> > safe in the knowledge that they *won't* end up being mapped to domains,
> > and won't need to be preserved across live update.
> I've started looking at the latest version of Paul's series, but I'm
> still struggling to see the picture: There's no true distinction
> between Xen heap and domain heap on x86-64 (except on very large
> systems). Therefore it is unclear to me what "those pages" is actually
> referring to above. Surely new Xen can't be given any pages in use
> _in any way_ by old Xen, no matter whether it's ones assigned to
> domains, or ones used internally to (old) Xen.

Hm, I'm not sure my previous response actually answered your question;
sorry, I've been away all week so it's still Monday morning in my head
right now. Let me try again...

What I said just now is true. The new Xen can use anything that isn't
actually owned by domains. Old Xen is dead, so its own internal
allocations, Xen page tables and data structures (i.e. most of what it
allocated on its xenheap) have died with it, and those pages are
considered 'free' by the new Xen.

Theoretically, it would be possible for the new Xen to go directly to
that state. The live update data could be processed *early* in the new
Xen before the boot allocator is even running, and new Xen could prime
its boot allocator with the memory ranges that happen to be available
according to the criteria outlined above.

Our initial implementation did that, in fact. It was complex in early
boot, and it didn't scale to more than 512 separate free ranges because
the boot allocator panics if it has more free regions than that.

That's why we settled on the model of reserving a specific region for
the new Xen to use for its boot allocator. Old Xen promises that it
won't put anything into that region that needs to be preserved over
kexec, and then the startup process for the new Xen is much simpler; it
can use that contiguous region for its boot allocations and then
process the live update data in a better environment once things like
vmap() are already available. Then *finally* it can add the rest of the
system memory that *isn't* used by running domains into the buddy
allocator.

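To make the ordering concrete, here's a toy simulation of that startup
sequence. All of the names, addresses and phases are invented for
illustration; this is not Xen code, just a sketch of the idea that early
allocations must fit inside the handover region, and the buddy allocator
is only populated after the live update records have been processed:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical reserved bootmem region handed over by old Xen. */
#define LU_RESERVED_START 0x100000UL
#define LU_RESERVED_END   0x200000UL

static uintptr_t boot_cursor = LU_RESERVED_START;
static int lu_records_processed;
static int buddy_populated;

/* Phase 1: the boot allocator serves ONLY the reserved region. */
static uintptr_t boot_alloc(size_t bytes)
{
    uintptr_t p = boot_cursor;
    assert(p + bytes <= LU_RESERVED_END); /* must fit in the handover region */
    boot_cursor += bytes;
    return p;
}

/* Phase 2: with vmap() etc. available, parse the live update records
 * to learn which memory belongs to running domains. */
static void process_live_update_records(void)
{
    lu_records_processed = 1;
}

/* Phase 3: only memory NOT owned by running domains goes into the buddy. */
static void populate_buddy_allocator(void)
{
    assert(lu_records_processed); /* ordering: records must come first */
    buddy_populated = 1;
}
```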
So this requires old Xen to promise that it *won't* put anything into
that region of reserved bootmem (aka "those pages"), that needs to be
preserved across kexec. That promise is *mostly* equivalent to "will
only allocate xenheap pages from those pages"... except for the fact
that sometimes we allocate a page from the xenheap and share it with a
guest.

Thus, "don't do that then", and THEN we can say that it's OK for
xenheap allocations to come from the reserved bootmem region, but not
domheap allocations.
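That policy can be sketched as a pair of toy allocators. Again, every
name and address here is made up for illustration (real Xen obviously
doesn't work like this); the point is only the invariant: Xen-internal
pages may come from the reserved region because they die with old Xen,
while domain-owned pages never may, because they must survive kexec:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical reserved live-update region (addresses invented). */
#define LU_RESERVED_START 0x100000UL
#define LU_RESERVED_END   0x200000UL

static bool in_reserved(uintptr_t addr)
{
    return addr >= LU_RESERVED_START && addr < LU_RESERVED_END;
}

/* Toy bump allocators: one inside the reserved region, one outside. */
static uintptr_t reserved_next = LU_RESERVED_START;
static uintptr_t general_next  = 0x400000UL;

/* Xen-internal allocations die with old Xen, so they MAY come
 * from the reserved region (preferring it, in this sketch). */
static uintptr_t alloc_xenheap_page(void)
{
    if (reserved_next < LU_RESERVED_END) {
        uintptr_t p = reserved_next;
        reserved_next += 4096;
        return p;
    }
    uintptr_t p = general_next;
    general_next += 4096;
    return p;
}

/* Domain-owned pages must be preserved across kexec, so they must
 * NEVER come from the reserved region. */
static uintptr_t alloc_domheap_page(void)
{
    uintptr_t p = general_next;
    general_next += 4096;
    return p;
}
```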


Xen-devel mailing list


