
Re: [Xen-devel] [PATCH v4 0/9] Migration Stream v2

On Wed, Apr 30, 2014 at 07:36:43PM +0100, Andrew Cooper wrote:
> Hello,
> Presented here for review is v4 of the Migration Stream v2 work.  David,
> Frediano and myself have worked together on the implementation, and we have
> now managed to get PV and HVM migration working using the new format (even
> when transparently inserted underneath Xapi).
> This series is based on staging, as 9f2f1298d0211962 is a prerequisite
> hypercall for a bugfix.
> Draft F of the design document is available from
>   xenbits.xen.org/people/andrewcoop/domain-save-format-F.pdf

pg 8 says:

body Record body of length body_length octects.

It is a bit hard to parse (also, 'octects' should be 'octets'). Perhaps:
Data stream (body) whose size, in octets, is given by 'body_length'.

pg. 11
Mentions the 'XEN_DOMCTL_PFINFO_*' type. Should you at least point
the reader to where those are defined in the code? Or in the spec?
[edit: the next page has it. You could mention that the following page
defines them.]

'page_data: "page_size octets of uncompressed page contents for each
page set as present in the pfn array." I think you might want to
replace "...for each page set as present in the pfn array" with
"...corresponding to each element in the pfn array."

Also oddly enough the picture shows 'page_data[N-1]' but for the
pfn array it has 'pfn[C-1]'. I think you want s/N/C.
[edit: And the next page explains why N != C].

Perhaps for the 'page_data' definition you should mention that
N != C, since some of the pfn array entries have no corresponding
entry in the page_data array?

xfeature_mask: The xfeature_mask rom the hypercall body.

Typo: s/rom/from. Also, I would replace 'body' with 'structure'.

TSC_MODE_* constants: you should probably list them, or at least
say what they are.

Incarnation? Wiki says it means "embodied in flesh". Umm.

pg. 20

blob? How about payload? Data stream?

pg. 21

Would it make sense to point to the list of them at least?
Say: "Can be found in XYZ file."

pg. 22

It says temporary - so... should it be ripped out of the toolstack?

pg. 23

An x86 PV guest image will have in this order: ->
An x86 PV guest image will have this order of records:

pg. 23
x86 HVM guest:
 You should probably say "An x86 HVM guest will have.."

but more interestingly - there is a "DEVICE_MODEL_STATE" record which
is not defined in the spec?

> The series is available from the branch 'saverestore2-v4' of
>   git://xenbits.xen.org/people/andrewcoop/xen.git
>   http://xenbits.xen.org/git-http/people/andrewcoop/xen.git
> The code has been a clean rewrite, using the v1 code as a reference but
> avoiding obsolete areas (e.g. how to modify the pagetables of a 32 non-pae
> guest on 32bit pae Xen).
> There are certainly some areas still in need improvement.  There are plans to
> combine the save logic for PV and HVM guests by adding a few more hooks to
> save_restore_ops.  While reviewing the series for posting, I have noticed some
> accidental duplication from attempting to merge PV and HVM together
> (ctx->save.p2m_size and ctx->x86_pv.max_pfn) which can be consolidated.
> Moving forward will be work to do with converting a legacy image to a v2
> image.
> Questions/issues needing discussing are:
> 1) What level should the conversion happen.  The v2 format is capable of
>    detecting a legacy stream, but libxc is arguably too low level for the
>    conversion decision to happen.
> 2) What to do about the layer violations which is the toolstack record and
>    device model record.  Libxl for HVM guests is the only known user of the
>    'toolstack' chunk, which contains some xenstore key/value pairs.  This data
>    should not be a blob in the middle of the libxc layer, and should be
>    promoted to a first class member of a libxl migration stream.
> Anyway - the code is presented here for initial comment/query/critism.
> ~Andrew
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
