Re: [Xen-devel] [RFC Patch v3 03/18] don't zero out ioreq page
> -----Original Message-----
> From: Wen Congyang [mailto:wency@xxxxxxxxxxxxxx]
> Sent: 05 September 2014 10:11
> To: xen devel
> Cc: Ian Jackson; Ian Campbell; Eddie Dong; Jiang Yunhong; Lai Jiangshan; Yang Hongyang; Wen Congyang; Paul Durrant
> Subject: [RFC Patch v3 03/18] don't zero out ioreq page
>
> ioreq page may contain some pending I/O requests, and we need to
> handle the pending I/O req after migration.
>
> TODO:
> 1. update qemu to handle the pending I/O req
>
> Signed-off-by: Wen Congyang <wency@xxxxxxxxxxxxxx>
> Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
> ---
>  tools/libxc/xc_domain_restore.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index fb4ddfc..21a6177 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -2310,8 +2310,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>      }
>
>      /* These comms pages need to be zeroed at the start of day */
> -    if ( xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[0]) ||
> -         xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[1]) ||
> +    if ( xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[1]) ||
>           xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[2]) )

If we're not nuking pfn[0] then we probably shouldn't nuke pfn[1] either
(buffered ioreq).

Does qemu need any modification? I don't think it makes any assumptions
about the initial state of ioreqs, so all you may have to do is make sure
each vcpu event channel is kicked on resume (which is harmless even if
there's no pending ioreq... qemu will just ignore it and wait again).

  Paul

>      {
>          PERROR("error zeroing magic pages");
> --
> 1.9.3

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel