
Re: [Xen-devel] [RFC Patch v3 03/18] don't zero out ioreq page



> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
> bounces@xxxxxxxxxxxxx] On Behalf Of Wen Congyang
> Sent: 05 September 2014 10:33
> To: Paul Durrant; xen devel
> Cc: Ian Campbell; Jiang Yunhong; Eddie Dong; Ian Jackson; Yang Hongyang; Lai
> Jiangshan
> Subject: Re: [Xen-devel] [RFC Patch v3 03/18] don't zero out ioreq page
> 
> At 09/05/2014 05:25 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Wen Congyang [mailto:wency@xxxxxxxxxxxxxx]
> >> Sent: 05 September 2014 10:11
> >> To: xen devel
> >> Cc: Ian Jackson; Ian Campbell; Eddie Dong; Jiang Yunhong; Lai Jiangshan;
> >> Yang Hongyang; Wen Congyang; Paul Durrant
> >> Subject: [RFC Patch v3 03/18] don't zero out ioreq page
> >>
> >> The ioreq page may contain pending I/O requests, and we need to
> >> handle any pending I/O requests after migration.
> >>
> >> TODO:
> >> 1. update qemu to handle pending I/O requests
> >>
> >> Signed-off-by: Wen Congyang <wency@xxxxxxxxxxxxxx>
> >> Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
> >> ---
> >>  tools/libxc/xc_domain_restore.c | 3 +--
> >>  1 file changed, 1 insertion(+), 2 deletions(-)
> >>
> >> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> >> index fb4ddfc..21a6177 100644
> >> --- a/tools/libxc/xc_domain_restore.c
> >> +++ b/tools/libxc/xc_domain_restore.c
> >> @@ -2310,8 +2310,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
> >>      }
> >>
> >>      /* These comms pages need to be zeroed at the start of day */
> >> -    if ( xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[0]) ||
> >> -         xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[1]) ||
> >> +    if ( xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[1]) ||
> >>           xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[2]) )
> >
> > If we're not nuking pfn[0] then we probably shouldn't nuke pfn[1] either
> > (buffered ioreq).
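For context, the three "magic" pfns correspond, in order, to the HVM params
that the restore code sets shortly after this check, which is why pfn[0] is
the ioreq page and pfn[1] the buffered-ioreq page. A paraphrased sketch of
those call sites (approximate, based on the xc_domain_restore.c of this era):

    /* Paraphrased from tools/libxc/xc_domain_restore.c (approximate). */
    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,    tailbuf.u.hvm.magicpfns[0]);
    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN, tailbuf.u.hvm.magicpfns[1]);
    xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,    tailbuf.u.hvm.magicpfns[2]);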
> >
> 
> IIRC, in my early test, if we cleared pfn[1], the secondary VM didn't
> respond (I used VNC to connect to the secondary VM). But I tested it
> again, and the secondary VM still doesn't respond even if we don't clear
> pfn[1]. I will clear pfn[1] in the next version.
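If pfn[1] were preserved as well, as suggested above, the check would shrink
to clearing only the remaining comms page. A minimal sketch of what that
could look like (illustrative only; the error path is assumed to match the
surrounding code, not taken from the actual next version of the patch):

    /* Sketch: keep both the ioreq page (magicpfns[0]) and the buffered-
     * ioreq page (magicpfns[1]) intact across migration, since either
     * may hold in-flight requests; only the xenstore comms page still
     * needs to start out zeroed. */
    if ( xc_clear_domain_page(xch, dom, tailbuf.u.hvm.magicpfns[2]) )
    {
        PERROR("error zeroing magic pages");
        goto out;  /* assumed error path, as elsewhere in this function */
    }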
> 
> 
> > Does qemu need any modification? I don't think it makes any assumptions
> > about the initial state of ioreqs, so all you may have to do is make sure
> > each vcpu event channel is kicked on resume (which is harmless even if
> > there's no pending ioreq... qemu will just ignore it and wait again).
> 
> Do you mean the hypervisor kicks each vcpu event channel on resume? I will
> try it.

Yes. AFAICT QEMU won't check an ioreq structure's state unless the event for 
that vcpu is pending. As I said, it's harmless to send the event if the ioreq 
is not pending, but *not* sending the event if it is pending will lead to 
things wedging up.

  Paul
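A rough sketch of the hypervisor-side kick being discussed, using
approximate Xen 4.5-era ioreq-server names (the struct fields, list names,
and placement are illustrative assumptions, not code from this series):

    /* Sketch: on domain resume, re-notify each vcpu's ioreq event
     * channel.  If that vcpu's slot in the shared ioreq page holds a
     * pending request, QEMU sees the event and services it; if not,
     * QEMU finds nothing to do and waits again, so a spurious
     * notification is harmless. */
    static void resume_kick_ioreq_evtchns(struct domain *d)
    {
        struct hvm_ioreq_server *s;
        struct hvm_ioreq_vcpu *sv;

        list_for_each_entry ( s, &d->arch.hvm_domain.ioreq_server.list,
                              list_entry )
            list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry )
                notify_via_xen_event_channel(d, sv->ioreq_evtchn);
    }

Because the notification is idempotent from QEMU's point of view, resume can
send it unconditionally rather than trying to detect which vcpus actually
have requests in flight.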

> 
> Thanks
> Wen Congyang
> 
> >
> >   Paul
> >
> >>      {
> >>          PERROR("error zeroing magic pages");
> >> --
> >> 1.9.3
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

