Re: [Xen-devel] [PATCH 06/12] docs/migration: Specify migration v3 and STATIC_DATA_END
On 24.12.2019 16:19, Andrew Cooper wrote:
> Migration data can be split into two parts - that which is invariant of
> guest execution, and that which is not.  Separate these two with the
> STATIC_DATA_END record.
>
> In the short term, we want to move the x86 CPU Policy data into the
> stream.  In the longer term, we want to provisionally send the static
> data only to the destination as a more robust compatibility check.  In
> both cases, we will want a callback into the higher level toolstack.
>
> Mandate the presence of the STATIC_DATA_END record, and declare this v3,
> along with instructions for how to compatibly interpret a v2 stream.

What doesn't become clear (to me) from all of the above is why this
record is needed (wanted), and hence why it is to be mandatory. After
all ...

> @@ -675,9 +694,23 @@ A typical save record for an x86 HVM guest image would look like:
>  HVM_PARAMS must precede HVM_CONTEXT, as certain parameters can affect
>  the validity of architectural state in the context.
>
> +Compatibility with older versions
> +=================================
> +
> +v3 compat with v2
> +-----------------
> +
> +A v3 stream is compatible with a v2 stream, but mandates the presence of
> +a STATIC_DATA_END record ahead of any memory/register content.  This is
> +to ease the introduction of new static configuration records over time.
> +
> +A v3-compatible receiver interpreting a v2 stream should infer the
> +position of STATIC_DATA_END based on finding the first X86_PV_P2M_FRAMES
> +record (for PV guests), or PAGE_DATA record (for HVM guests) and behave
> +as if STATIC_DATA_END had been sent.

... you already imply a v3 receiver can deal with its absence.

Jan
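
To make the v2-compat rule in the quoted hunk concrete, here is a minimal
sketch of how a v3-capable receiver might infer STATIC_DATA_END when handed
a v2 stream. This is illustrative only, not the libxc implementation:
struct stream_ctx, handle_static_data_end() and
check_inferred_static_data_end() are hypothetical names, and the record
type values are taken from the libxc migration stream specification
(docs/specs/libxc-migration-stream.pandoc).

    /*
     * Illustrative sketch only -- not the actual libxc code.  Record
     * type values mirror the libxc migration stream specification; the
     * context structure and function names are hypothetical.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define REC_TYPE_PAGE_DATA          0x00000001U
    #define REC_TYPE_X86_PV_P2M_FRAMES  0x00000003U

    struct stream_ctx {
        unsigned int version;          /* 2 or 3, from the image header */
        bool saw_static_data_end;      /* STATIC_DATA_END seen/inferred? */
    };

    /* Hypothetical hook, fired once all static configuration is in. */
    static void handle_static_data_end(struct stream_ctx *ctx)
    {
        ctx->saw_static_data_end = true;
        /* ... callback into the higher level toolstack here ... */
    }

    /*
     * Call for each incoming record, before processing it.  In a v2
     * stream, the first X86_PV_P2M_FRAMES (PV) or PAGE_DATA (HVM)
     * record marks where STATIC_DATA_END would have been, so behave
     * as if it had been sent.
     */
    static void check_inferred_static_data_end(struct stream_ctx *ctx,
                                               uint32_t rec_type)
    {
        if ( ctx->version != 2 || ctx->saw_static_data_end )
            return;

        if ( rec_type == REC_TYPE_X86_PV_P2M_FRAMES ||
             rec_type == REC_TYPE_PAGE_DATA )
            handle_static_data_end(ctx);
    }

Keying the inference off the first memory-content record matches the rule
in the quoted hunk that STATIC_DATA_END must appear ahead of any
memory/register content, so a v3 receiver can run the same post-static-data
logic for both stream versions.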