Re: [Xen-devel] [RFC PATCH 0/3] Live update boot memory management
On Wed, 2020-01-08 at 17:24 +0000, David Woodhouse wrote:
> When doing a live update, Xen needs to be very careful not to
> scribble on pages which contain guest memory or state information
> for the domains which are being preserved.
>
> The information about which pages are in use is contained in the
> live update state passed from the previous Xen — which is mostly
> just a guest-transparent live migration data stream, except that it
> points to the page tables in place in memory, while traditional live
> migration obviously copies the pages separately.
>
> Our initial implementation actually prepended a list of 'in-use'
> ranges to the live update state, and made the boot allocator treat
> them the same as 'bad pages'. That worked well enough for initial
> development but wouldn't scale to a live production system, mainly
> because the boot allocator has a limit of 512 memory ranges that it
> can keep track of, and a real system would end up more fragmented
> than that.
>
> My other concern with that approach is that it required two passes
> over the domain-owned pages. We have to do a later pass *anyway*, as
> we set up ownership in the frametable for each page — and that has
> to happen after we've managed to allocate a 'struct domain' for each
> page_info to point to. If we want to keep the pause time due to a
> live update down to a bare minimum, doing two passes over the full
> set of domain pages isn't my favourite strategy.
>
> So we've settled on a simpler approach — reserve a contiguous
> region of physical memory which *won't* be used for domain pages.
> Let the boot allocator see *only* that region of memory, and plug
> the rest of the memory in later, only after doing a full pass of the
> live update state.
>
> This means that we have to ensure the reserved region is large
> enough, but ultimately we had that problem either way — even if we
> were processing the actual free ranges, if the page_info grew and we
> didn't have enough contiguous space for the new frametable we were
> hosed anyway.
>
> So the straw man patch ends up being really simple, as a seed for
> bikeshedding. Just take a 'liveupdate=' region on the command line,
> which kexec(8) can find from the running Xen. The initial Xen needs
> to ensure that it *won't* allocate any pages from that range which
> will subsequently need to be preserved across live update, which
> isn't done yet. We just need to make sure that any page which might
> be given to share_xen_page_with_guest() is allocated appropriately.
>
> The part which actually hands over the live update state isn't
> included yet, so this really does just *defer* the addition of the
> memory until a little bit later in __start_xen(). Actually taking
> ranges out of it will come later.

What isn't addressed in this series is actually *honouring* the
promise not to put pages into the reserved LU bootmem region that need
to be preserved over live update. As things stand, we just add them to
the heap anyway in end_boot_allocator(). It isn't even sufficient to
use these pages for xenheap allocations and not domheap, since there
are cases where we allocate from the xenheap and then share pages to a
domain. Hongyan's patches to kill the directmap have already started
addressing a bunch of the places that do that, so what I'm inclined to
do in the short term is just *not* use the remaining space in the
reserved LU bootmem region: use it for boot-time allocations
(including the frametable) only, and do *not* insert the rest of those
pages into the heap allocator in end_boot_allocator() for now. If
sized appropriately, there shouldn't be much wastage anyway.
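For concreteness, here is a minimal userspace sketch of that flow.
This is purely illustrative, not the actual patch: the region bounds,
the RAM ranges and all function names are made up, and the real code
would feed init_boot_pages() and the heap rather than printf().

/* Sketch: seed the boot allocator with only the reserved LU region,
 * then plug the rest of RAM into the heap later, never handing the
 * heap anything that falls inside that region. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct range { uint64_t start, end; };           /* [start, end) */

/* Hypothetical liveupdate= region: 256MiB at the 2GiB boundary. */
static const struct range lu = { 0x80000000ull, 0x90000000ull };

static void heap_add(uint64_t s, uint64_t e)
{
    if (s < e)
        printf("heap: %#" PRIx64 "-%#" PRIx64 "\n", s, e);
}

int main(void)
{
    /* Example E820-style RAM ranges discovered at boot. */
    const struct range ram[] = {
        { 0x00100000ull, 0x080000000ull },
        { 0x80000000ull, 0x100000000ull },
    };
    unsigned int i;

    /* 1. The boot allocator sees *only* the reserved region. */
    printf("boot allocator: %#" PRIx64 "-%#" PRIx64 "\n",
           lu.start, lu.end);

    /* 2. Walk the live update state here, carving out the pages of
     *    the preserved domains (omitted). */

    /* 3. Plug in the remaining memory, clipped against the LU
     *    region; leftover LU space is deliberately *not* added. */
    for (i = 0; i < sizeof(ram) / sizeof(ram[0]); i++) {
        heap_add(ram[i].start,
                 ram[i].end < lu.start ? ram[i].end : lu.start);
        heap_add(ram[i].start > lu.end ? ram[i].start : lu.end,
                 ram[i].end);
    }
    return 0;
}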
We can refine it, and ensure that we can use those pages but *not* for
domain allocations, once the dust has settled on the directmap
removal.
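The straw-man 'liveupdate=' option would presumably carry a size and,
optionally, a start address. Below is a self-contained sketch of how
such a value might be parsed; the '<size>[@<start>]' syntax, the
example value and all names are assumptions for illustration, not
what the patch defines (Xen's real command-line handling goes through
its own parameter machinery).

/* Hypothetical parser for "liveupdate=<size>[@<start>]", e.g.
 * "liveupdate=256M@0x80000000"; sizes take K/M/G suffixes. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint64_t parse_size(const char *s, const char **end)
{
    char *e;
    uint64_t val = strtoull(s, &e, 0);

    switch (*e) {
    case 'G': case 'g': val <<= 30; e++; break;
    case 'M': case 'm': val <<= 20; e++; break;
    case 'K': case 'k': val <<= 10; e++; break;
    }
    *end = e;
    return val;
}

int main(void)
{
    const char *arg = "256M@0x80000000";    /* value of liveupdate= */
    const char *p;
    uint64_t lu_size, lu_start = 0;

    lu_size = parse_size(arg, &p);
    if (*p == '@') {
        lu_start = strtoull(p + 1, NULL, 0);
    } else if (*p != '\0') {
        fprintf(stderr, "bad liveupdate= value\n");
        return 1;
    }

    printf("LU bootmem: %" PRIu64 " bytes at %#" PRIx64 "\n",
           lu_size, lu_start);
    return 0;
}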