Re: [Xen-devel] [RFC PATCH 3/3] remus: adjust x86 pv restore to support remus
On 07/10/2014 05:42 PM, Andrew Cooper wrote:
> On 10/07/14 10:32, Hongyang Yang wrote:
>> On 07/10/2014 05:25 PM, Andrew Cooper wrote:
>>> On 10/07/14 04:30, Hongyang Yang wrote:
>>>> On 07/09/2014 07:26 PM, Andrew Cooper wrote:
>>>>> On 09/07/14 12:16, Andrew Cooper wrote:
>>>>>> On 09/07/14 08:47, Yang Hongyang wrote:
>>>>>>> cache vcpu context when restore, and set context when stream complete.
>>>>>>
>>>>>> Can you explain why this is needed? I can't see why it should be required.
>>>>>
>>>>> Actually, as part of reviewing this I have worked out why this is needed. It is a latent bug in the migration v2 series affecting all the x86 PV vcpu state (not just the basic state), which is not triggered by a well-behaved sender. I shall fix it up in the base series.
>>>>
>>>> That's great. Remember the bug I talked to you about on IRC last time? This patch was targeted at avoiding that bug, but I don't know why it happens. All I can tell is that if we don't cache the state, we get a mapping error when restoring the CPU state the next time. Can you explain it in detail? Thanks in advance.
>>>
>>> Once you have loaded cr3 (and cr1 for 64bit guests) once, the pages containing pagetable data turn into real pagetables, after which the restorer can no longer map them RW and update their contents.
>>
>> That's the point, thank you for the explanation! I was wondering how you will fix it up? Defer the load of cr3 by caching the cpu state, or something else? Maybe pinning/unpinning the pagetables would also help?
>
> The pagetable pinning is already deferred until after the end record. The vcpu basic records need deferring until after the end record, but the rest of the vcpu state should also be deferred (even if only to avoid performing the hypercalls repeatedly).
>
> ~Andrew

IC, thank you! Will wait for your next version.

--
Thanks,
Yang.
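[To illustrate the deferral Andrew describes above, here is a minimal sketch of the idea: cache each vcpu's basic context as its record arrives, and only load it into Xen once the END record has been seen, so cr3 is never pointed at the pagetables until the final copy of guest memory is in place. The restore_ctx structure and the handle_vcpu_basic()/complete_stream() helpers are hypothetical names invented for this sketch; xc_vcpu_setcontext() is the libxc wrapper normally used to load basic vcpu state, but this is not the actual migration v2 restore code.]

    /* Sketch only: deferring vcpu basic context until after the END record. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <errno.h>

    #include <xenctrl.h>

    struct restore_ctx {
        xc_interface *xch;
        uint32_t domid;
        unsigned int max_vcpus;
        vcpu_guest_context_any_t *vcpu_ctx;   /* cached basic context per vcpu */
        bool *vcpu_ctx_valid;                 /* which entries have been seen  */
    };

    /* Called for each vcpu basic record in the stream: cache it, do not load it. */
    static int handle_vcpu_basic(struct restore_ctx *ctx, unsigned int vcpuid,
                                 const vcpu_guest_context_any_t *src)
    {
        if ( vcpuid >= ctx->max_vcpus )
            return -EINVAL;

        memcpy(&ctx->vcpu_ctx[vcpuid], src, sizeof(*src));
        ctx->vcpu_ctx_valid[vcpuid] = true;
        return 0;
    }

    /*
     * Called once the END record arrives.  Loading the context only here means
     * the frames holding pagetable data have not yet been turned into real
     * pagetables while earlier checkpoints were still being written, so the
     * restorer keeps RW mappings of them for the whole stream.
     */
    static int complete_stream(struct restore_ctx *ctx)
    {
        unsigned int i;

        for ( i = 0; i < ctx->max_vcpus; i++ )
        {
            if ( !ctx->vcpu_ctx_valid[i] )
                continue;

            if ( xc_vcpu_setcontext(ctx->xch, ctx->domid, i,
                                    &ctx->vcpu_ctx[i]) )
                return -1;
        }

        return 0;
    }

[Under Remus, a checkpointed stream delivers vcpu records repeatedly; with this shape of deferral only the last cached copy is ever handed to the hypervisor, which also avoids issuing the set-context hypercalls once per checkpoint.]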