
[Xen-devel] [RFC PATCH v3 0/22] Live update: boot memory management, data stream handling, record format



Now with added documentation:
http://david.woodhou.se/live-update-handover.pdf


v1:

Reserve a contiguous region of memory which can be safely used by the
boot allocator in the new Xen, before the live update data stream has
been processed and thus before the locations of all the other pages
which contain live domain data are known.
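
By way of illustration only (the symbol names below are placeholders for
this sketch, not the series' actual interface), the idea is that the new
Xen seeds its boot allocator from nothing but that reserved region until
the stream has been parsed:

/*
 * Sketch: hand the boot allocator only the region reserved by the
 * previous Xen, so no early allocation can land on a page that still
 * holds live domain data.  lu_bootmem_start/lu_bootmem_size are
 * placeholder names for this example.
 */
static paddr_t __initdata lu_bootmem_start, lu_bootmem_size;

static void __init lu_init_boot_allocator(void)
{
    if ( lu_bootmem_size )
        init_boot_pages(lu_bootmem_start,
                        lu_bootmem_start + lu_bootmem_size);
}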

v2:

As the last gasp of kexec_reloc(), leave a 'breadcrumb' in the first
words of the reserved bootmem region for Xen to find the live update
data. That data is mostly a guest-transparent live migration data
stream, except that the guest memory is left in place.

The breadcrumb has a magic value, the physical address of an MFN array
referencing the pages with actual data, and the number of such pages.
All of these are allocated in arbitrary heap pages (and not in the
reserved bootmem region) by the original Xen.
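
Purely to illustrate the shape of that breadcrumb (field names here are
made up for the sketch; the real layout is whatever kexec_reloc() writes
with IND_WRITE64):

/* Sketch of the breadcrumb at the start of the reserved bootmem region. */
struct lu_breadcrumb {
    uint64_t magic;        /* magic value identifying a live update */
    uint64_t mfn_list_pa;  /* physical address of the MFN array */
    uint64_t nr_mfns;      /* number of data pages that array references */
};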

Provide functions on the "save" side for appending to the LU data
stream (designed to cope with the way that hvm_save_size() and
hvm_save() work), and on the "receive" side for detecting and mapping
it.
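
Roughly, the save-side pattern being catered for looks like the sketch
below; lu_stream_reserve()/lu_stream_commit() are stand-in names, not
necessarily what the series calls its helpers:

/*
 * Sketch: reserve hvm_save_size() bytes in the stream, let hvm_save()
 * write straight into that space, then commit what was actually used.
 */
static int lu_save_hvm(struct lu_stream *stream, struct domain *d)
{
    hvm_domain_context_t ctxt = { .size = hvm_save_size(d) };
    int rc;

    ctxt.data = lu_stream_reserve(stream, ctxt.size);
    if ( !ctxt.data )
        return -ENOMEM;

    rc = hvm_save(d, &ctxt);
    if ( !rc )
        lu_stream_commit(stream, ctxt.cur);

    return rc;
}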

On the way to excluding "already in use" pages from being added to the
heap at start up, also fix the long-standing bug that pages marked bad
with 'badpage=' on the command line weren't being excluded if they were
above HYPERVISOR_VIRT_END, because such pages are added directly to the
heap and only init_boot_pages() was doing that filtering.

This is now handled by setting either PGC_broken (for bad pages) or
PGC_allocated (for those containing live update data) in the
corresponding page_info, at a time when the frametable is expected to
be initialised to zero. When init_heap_pages() sees such a page it
knows not to use it. Bad pages thus get completely ignored as they
should be (and put on the pointless page_broken_list that nobody ever
uses AFAICT).
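
In other words (a simplified sketch rather than the literal diff), heap
initialisation ends up doing something like:

/*
 * Pages whose page_info was pre-marked at boot -- PGC_broken for bad
 * pages, PGC_allocated for live update data -- are never handed to
 * free_heap_pages(), so they stay out of the heap.
 */
static void init_heap_pages(struct page_info *pg, unsigned long nr_pages)
{
    unsigned long i;

    for ( i = 0; i < nr_pages; i++ )
    {
        if ( pg[i].count_info & (PGC_broken | PGC_allocated) )
            continue;

        free_heap_pages(pg + i, 0, false);
    }
}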

The "in use" pages will need some rehabilitation (of refcount,
ownership etc.) before the system is in a correct state. That will come
shortly, as we start passing real domain data across this mechanism and
processing it.

v3:

Define the migration stream record format based on the libxc format,
provide helper functions for creating and parsing those records in a
live update data stream. Send a basic LU_VERSION record.
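
For reference, a record in that format has the libxc-style shape below
(sketch only; the series' own definitions in migration_stream.h are
authoritative):

/*
 * 32-bit type, 32-bit body length, body padded to a multiple of
 * 8 bytes.  An LU_VERSION record carries little more than a version
 * number; lu_stream_{open,append,close}_record() presumably fill in
 * and pad such headers when building the stream.
 */
struct lu_rec_hdr {
    uint32_t type;      /* e.g. LU_VERSION, LU_END */
    uint32_t length;    /* body length in bytes, before padding */
    /* body follows, padded to 8-byte alignment */
};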

Refactor __start_xen() a little to allow for the different code paths
for creating a new dom0 vs. resuming the existing one after live
update. This currently has stub functions for lu_reserve_pages() and
lu_restore_domains().
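
The resulting control flow, sketched very loosely (the wrapper function
here is invented for illustration; only create_dom0(), lu_reserve_pages()
and lu_restore_domains() come from the series):

/*
 * Sketch of the split code path at the end of boot: resume preserved
 * domains from the live update stream, or construct dom0 from scratch
 * as before.  Argument lists are elided/simplified for the sketch.
 */
static struct domain *__init construct_initial_domain(bool live_update)
{
    if ( live_update )
        return lu_restore_domains();    /* still a stub in v3 */

    return create_dom0();               /* dom0 built as before */
}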

The next step is actually passing migration records over this interface
which allow us to preserve a Dom0 from one Xen to the next. We do have
that working in our internal proof-of-concept tree and are working on
cleaning it up for public consumption.

David Woodhouse (21):
      x86/setup: Don't skip 2MiB underneath relocated Xen image
      x86/boot: Reserve live update boot memory
      Reserve live update memory regions
      Add KEXEC_RANGE_MA_LIVEUPDATE
      Add KEXEC_TYPE_LIVE_UPDATE
      Add IND_WRITE64 primitive to kexec kimage
      Add basic live update stream creation
      Add kimage_add_live_update_data()
      Add basic lu_save_all() shell
      Don't add bad pages above HYPERVISOR_VIRT_END to the domheap
      xen/vmap: allow vmap() to be called during early boot
      x86/setup: move vm_init() before end_boot_allocator()
      Detect live update breadcrumb at boot and map data stream
      Start documenting the live update handover
      Migrate migration stream definitions into Xen public headers
      Add lu_stream_{open,close,append}_record()
      Add LU_VERSION and LU_END records to live update stream
      Add shell of lu_reserve_pages()
      x86/setup: lift dom0 creation out into create_dom0 function
      x86/setup: finish plumbing in live update path through __start_xen()
      x86/setup: simplify handling of initrdidx when no initrd present

Wei Liu (1):
      xen/vmap: allow vm_init_type to be called during early_boot

 docs/misc/xen-command-line.pandoc        |   9 +
 docs/specs/libxc-migration-stream.pandoc |  19 +-
 docs/specs/live-update-handover.pandoc   | 371 +++++++++++++++++++++++++++++
 tools/libxc/xc_sr_common.c               |  20 +-
 tools/libxc/xc_sr_common_x86.c           |   4 +-
 tools/libxc/xc_sr_restore.c              |   2 +-
 tools/libxc/xc_sr_restore_x86_hvm.c      |   4 +-
 tools/libxc/xc_sr_restore_x86_pv.c       |   8 +-
 tools/libxc/xc_sr_save.c                 |   2 +-
 tools/libxc/xc_sr_save_x86_hvm.c         |   4 +-
 tools/libxc/xc_sr_save_x86_pv.c          |  12 +-
 tools/libxc/xc_sr_stream_format.h        |  97 +-------
 xen/arch/x86/machine_kexec.c             |  13 +-
 xen/arch/x86/setup.c                     | 395 ++++++++++++++++++++++---------
 xen/arch/x86/x86_64/kexec_reloc.S        |   9 +-
 xen/common/Makefile                      |   1 +
 xen/common/kexec.c                       |  24 ++
 xen/common/kimage.c                      |  34 +++
 xen/common/lu/Makefile                   |   1 +
 xen/common/lu/restore.c                  |  39 +++
 xen/common/lu/save.c                     |  67 ++++++
 xen/common/lu/stream.c                   | 219 +++++++++++++++++
 xen/common/page_alloc.c                  | 128 +++++++++-
 xen/common/vmap.c                        |  45 +++-
 xen/include/Makefile                     |   2 +-
 xen/include/asm-x86/config.h             |   1 +
 xen/include/public/kexec.h               |  13 +-
 xen/include/public/migration_stream.h    | 135 +++++++++++
 xen/include/xen/kimage.h                 |   4 +
 xen/include/xen/lu.h                     |  62 +++++
 xen/include/xen/mm.h                     |   2 +
 31 files changed, 1492 insertions(+), 254 deletions(-)

