Re: [Xen-devel] [Patch v6 01/13] docs: libxc migration stream specification
On 08/07/14 04:53, Hongyang Yang wrote:
> On 07/08/2014 01:37 AM, Andrew Cooper wrote:
>> Add the specification for a new migration stream format. The document
>> includes all the details but to summarize:
>>
>> The existing (legacy) format is dependent on the word size of the
>> toolstack. This prevents domains from migrating from hosts running
>> 32-bit toolstacks to hosts running 64-bit toolstacks (and vice versa).
>>
>> The legacy format lacks any version information, making it difficult to
>> extend in a compatible way.
>>
>> The new format has a header (the image header) with version information,
>> a domain header with basic information about the domain, and a stream of
>> records for the image data.
>>
>> The format will be used for future domain types (such as on ARM).
>>
>> The specification is in pandoc format (an extended markdown format), and
>> the documentation build system is extended to support pandoc format
>> documents.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
>> CC: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
>>
>> ---
>> v6:
>>  * Add pandoc -> txt/html conversion
>>  * Spelling fixes
>>  * Introduce SAVING_CPU record
>> ---
>>  docs/Makefile                            |   11 +-
>>  docs/specs/libxc-migration-stream.pandoc |  677 ++++++++++++++++++++++++++++++
>>  2 files changed, 687 insertions(+), 1 deletion(-)
>
> ...snip...
>
>> +
>> +VERIFY
>> +------
>> +
>> +A verify record indicates that, while all memory has now been sent,
>> +the sender shall send further memory records for debugging purposes.
>> +
>> +     0     1     2     3     4     5     6     7    octet
>> +    +-------------------------------------------------+
>> +
>> +The verify record contains no fields; its body_length is 0.
>> +
>> +\clearpage
>> +
>> +SAVING\_CPU
>> +-----------
>> +
>> +A saving cpu record provides a human-readable representation of the
>> +CPU on which the guest was saved.
>> +
>> +     0     1     2     3     4     5     6     7    octet
>> +    +------------------------+------------------------+
>> +    | 7-bit ASCII string                              |
>> +    ...
>> +    +-------------------------------------------------+
>> +
>> +The information is purely informative: it doesn't directly affect how
>> +to save or restore the guest, but in the case of an error on
>> +restoration it may help to narrow down the issue.
>> +
>> +x86 architectures use the _CPUID_ 48-character processor brand string.
>> +
>> +\clearpage
>> +
>> +Layout
>> +======
>> +
>> +The set of valid records depends on the guest architecture and type.
>> +
>> +x86 PV Guest
>> +------------
>> +
>> +An x86 PV guest image will have this order of records:
>> +
>> +1. Image header
>> +2. Domain header
>> +3. X86\_PV\_INFO record
>> +4. X86\_PV\_P2M\_FRAMES record
>> +5. Many PAGE\_DATA records
>> +6. TSC\_INFO record
>> +7. SHARED\_INFO record
>> +8. VCPU context records for each online VCPU
>> +    a. X86\_PV\_VCPU\_BASIC record
>> +    b. X86\_PV\_VCPU\_EXTENDED record
>> +    c. X86\_PV\_VCPU\_XSAVE record
>> +    d. X86\_PV\_VCPU\_MSRS record
>> +9. END record
>> +
>> +x86 HVM Guest
>> +-------------
>> +
>> +1. Image header
>> +2. Domain header
>> +3. Many PAGE\_DATA records
>> +4. TSC\_INFO record
>> +5. HVM\_CONTEXT record
>> +6. HVM\_PARAMS record
>> +
>
> Should the VERIFY/SAVING_CPU records be documented here?
> Although they are optional, we may need to know in which order
> they appear when they are included...

I will have to rewrite this section for clarity; the wording is not
fantastic.

There is no prescribed order of records, which is why the receiving side
is deliberately coded as a "do { get_one_record(); process_one_record(); }
while (!END_RECORD);" loop.

The orderings above are what is expected in the standard cases, but even
performing a legacy conversion results in a different ordering of records.

As for the specific two you identified, the VERIFY record is for
debugging only and I don't expect to see it in production use.
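To illustrate, that receive loop can be sketched roughly as below. This is
only a sketch under assumptions: the record type values, the struct layout,
and the helper names here are illustrative, not the actual definitions from
the specification or libxc.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical record type numbers, for illustration only; the real
 * values are assigned by the specification. */
#define REC_TYPE_END        0x00000000U
#define REC_TYPE_VERIFY     0x0000000dU
#define REC_TYPE_SAVING_CPU 0x0000000eU

/* Each record starts with a type and a body length, followed by the body. */
struct rec_hdr {
    uint32_t type;
    uint32_t body_length;
};

/* Process records from a memory buffer until an END record is seen.
 * Returns 0 on success, -1 on a truncated/malformed stream. */
static int process_stream(const uint8_t *buf, size_t len)
{
    size_t off = 0;
    struct rec_hdr hdr;

    do {
        if (len - off < sizeof(hdr))
            return -1;                      /* truncated header */
        memcpy(&hdr, buf + off, sizeof(hdr));
        off += sizeof(hdr);

        if (len - off < hdr.body_length)
            return -1;                      /* truncated body */

        switch (hdr.type) {
        case REC_TYPE_END:
            /* End of stream: nothing more to process. */
            break;
        case REC_TYPE_SAVING_CPU:
            /* Informative only: report the saving CPU's brand string. */
            printf("saved on: %.*s\n",
                   (int)hdr.body_length, (const char *)(buf + off));
            break;
        default:
            /* Dispatch on the other record types here... */
            break;
        }

        off += hdr.body_length;
    } while (hdr.type != REC_TYPE_END);

    return 0;
}
```

Because the loop dispatches purely on each record's type field, optional
records such as VERIFY or SAVING_CPU can appear at any point in the stream
without the receiver needing a prescribed ordering.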
The SAVING_CPU record is currently written immediately after the Domain
Header, but could reside anywhere in the stream.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel