Re: [Xen-devel] [PATCH v6 05/18] tools/libxc: support to resume uncooperative HVM guests
On Wed, Dec 30, 2015 at 10:28:55AM +0800, Wen Congyang wrote:
> Befor this patch:

s/Befor/Before/

> 1. suspend
>    a. PVHVM and PV: we use the same way to suspend the guest (send the suspend
>       request to the guest). If the guest doesn't support evtchn, the xenstore
>       variant will be used, suspending the guest via XenBus control node.
>    b. pure HVM: we call xc_domain_shutdown(..., SHUTDOWN_suspend) to suspend
>       the guest
>
> 2. Resume:
>    a. fast path

s/fast path/fast path (fast=1)/

>       In this case, we don't change the guest's state. And we will call
>       libxl__domain_resume(..., 1) to resume the guest.

Do not change the guest state. We call libxl__domain_resume(..., 1), which
calls xc_domain_resume(..., 1 /* fast=1 */) to resume the guest.

>       PV: modify the return code to 1, and than call the domctl

s/domctl/domctl:/

>           XEN_DOMCTL_resumedomain
>       PVHVM: same with PV
>       pure HVM: do nothing in modify_returncode, and than call the domctl:
>           XEN_DOMCTL_resumedomain
>    b. slow
>       Used when the guest's state have been changed. And we will call

s/And we will/Will/

>       libxl__domain_resume(..., 0) to resume the guest.
>       PV: update start info, and reset all secondary CPU states. Than call
>           the domctl: XEN_DOMCTL_resumedomain
>       PVHVM: can not be resumed. You will get the following error message:
>           "Cannot resume uncooperative HVM guests"
>       purt HVM: same with PVHVM
>
> After this patch:
> 1. suspend
>    unchanged
>
> 2. Resume
>    a. fast path:
>       unchanged
>    b. slow
>       PV: unchanged
>       PVHVM: call XEN_DOMCTL_resumedomain to resume the guest. Because we
>           don't modify the return code, the PV driver will disconnect
>           and reconnect. I am not sure if we should update start info
>           and reset all secondary CPU states.

The guest ends up doing the XENMAPSPACE_shared_info XENMEM_add_to_physmap
hypercall and resetting all of its CPU states to point at the shared_info
(well, except the ones past 32). That is, the Linux kernel does that,
regardless of whether SCHEDOP_shutdown:SHUTDOWN_suspend returns 1 or not.
>       Pure HVM: call XEN_DOMCTL_resumedomain to resume the guest.
>
> Under COLO, we will update the guest's state (modify memory, cpu's registers,
> device status...). In this case, we cannot use the fast path to resume it.
> Keep the return code 0, and use a slow path to resume the guest. While
> resuming HVM using slow path is not supported currently, this patch is to
> make the resume call do not fail.

s/do/to/

>
> Signed-off-by: Wen Congyang <wency@xxxxxxxxxxxxxx>
> Signed-off-by: Yang Hongyang <hongyang.yang@xxxxxxxxxxxx>
> ---
>  tools/libxc/xc_resume.c | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/tools/libxc/xc_resume.c b/tools/libxc/xc_resume.c
> index 87d4324..503e4f8 100644
> --- a/tools/libxc/xc_resume.c
> +++ b/tools/libxc/xc_resume.c
> @@ -108,6 +108,25 @@ static int xc_domain_resume_cooperative(xc_interface *xch, uint32_t domid)
>      return do_domctl(xch, &domctl);
>  }
>
> +static int xc_domain_resume_hvm(xc_interface *xch, uint32_t domid)
> +{
> +    DECLARE_DOMCTL;
> +
> +    /*
> +     * This domctl XEN_DOMCTL_resumedomain just unpause each vcpu. After

s/This/The/
s/just//

> +     * this domctl, the guest will run.

s/this/the/

> +     *
> +     * If it is PVHVM, the guest called the hypercall HYPERVISOR_sched_op

s/HYPERVISOR_sched_op/SCHEDOP_shutdown:SHUTDOWN_suspend/

> +     * to suspend itself. We don't modify the return code, so the PV driver
> +     * will disconnect and reconnect.
> +     *
> +     * If it is a HVM, the guest will continue running.
> +     */
> +    domctl.cmd = XEN_DOMCTL_resumedomain;
> +    domctl.domain = domid;
> +    return do_domctl(xch, &domctl);
> +}
> +
>  static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
>  {
>      DECLARE_DOMCTL;
> @@ -137,10 +156,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
>       */
>  #if defined(__i386__) || defined(__x86_64__)
>      if ( info.hvm )
> -    {
> -        ERROR("Cannot resume uncooperative HVM guests");
> -        return rc;
> -    }
> +        return xc_domain_resume_hvm(xch, domid);
>
>      if ( xc_domain_get_guest_width(xch, domid, &dinfo->guest_width) != 0 )
>      {
> --
> 2.5.0
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel