[Xen-devel] [XEN PATCH 1/2] hvm: Support more than 32 VCPUS when migrating.
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

When we migrate an HVM guest, by default our shared_info page can only
hold vcpu_info structures for up to 32 vCPUs. As such the
VCPUOP_register_vcpu_info hypercall was introduced, which allows us to
set up per-vCPU info areas in separate pages. This means we can boot a
PVHVM guest with more than 32 vCPUs. During migration the per-vCPU
structure is allocated afresh by the hypervisor (vcpu_info_mfn is set
to INVALID_MFN) so that the newly migrated guest can make the
VCPUOP_register_vcpu_info hypercall again.

Unfortunately we end up triggering this condition in the hypervisor:

    /* Run this command on yourself or on other offline VCPUS. */
    if ( (v != current) && !test_bit(_VPF_down, &v->pause_flags) )

which means we are unable to set up the per-vCPU structures for running
vCPUs. The Linux PV code paths make this work by iterating over every
vCPU and doing:

 1) check whether the target vCPU is up (VCPUOP_is_up hypercall);
 2) if yes, VCPUOP_down to pause it;
 3) VCPUOP_register_vcpu_info;
 4) if it was originally up, VCPUOP_up to bring it back online.

But since VCPUOP_down, VCPUOP_is_up, and VCPUOP_up are not allowed for
HVM guests, we can't follow the same sequence. This patch allows these
three hypercalls for HVM guests as well.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
---
 xen/arch/x86/hvm/hvm.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 38c491e..b5b92fe 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3470,6 +3470,9 @@ static long hvm_vcpu_op(
     case VCPUOP_stop_singleshot_timer:
     case VCPUOP_register_vcpu_info:
     case VCPUOP_register_vcpu_time_memory_area:
+    case VCPUOP_down:
+    case VCPUOP_up:
+    case VCPUOP_is_up:
         rc = do_vcpu_op(cmd, vcpuid, arg);
         break;
     default:
--
1.7.7.6
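For reference, below is a minimal sketch of the guest-side re-registration
sequence described in steps 1-4 of the commit message above. It is modelled
on the Linux PV resume path; the helper reregister_vcpu_info() and its
arguments are illustrative rather than the actual Linux function, while
HYPERVISOR_vcpu_op(), smp_processor_id() and the VCPUOP_* commands are the
existing Linux/Xen interfaces. With the hunk above applied, the same
sequence becomes usable from a PVHVM guest.

#include <linux/smp.h>            /* smp_processor_id() */
#include <linux/types.h>          /* bool, uint64_t, uint32_t */
#include <xen/interface/vcpu.h>   /* VCPUOP_*, struct vcpu_register_vcpu_info */
#include <asm/xen/hypercall.h>    /* HYPERVISOR_vcpu_op() */

/* Illustrative helper: point the hypervisor at one vCPU's info page again
 * after migration, pausing that vCPU first if it is currently running. */
static void reregister_vcpu_info(int cpu, uint64_t info_mfn,
                                 uint32_t info_offset)
{
    struct vcpu_register_vcpu_info info = {
        .mfn    = info_mfn,      /* MFN of the page holding the vcpu_info */
        .offset = info_offset,   /* offset of the vcpu_info in that page  */
    };
    bool other_cpu = (cpu != smp_processor_id());
    bool was_up;

    /* 1) Is the target vCPU up? (returns 1 if up, 0 if down) */
    was_up = HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL) > 0;

    /* 2) If it is up and is not ourselves, pause it first:
     *    VCPUOP_register_vcpu_info is only permitted on the calling vCPU
     *    or on offline vCPUs (the condition quoted in the commit message). */
    if (other_cpu && was_up)
        HYPERVISOR_vcpu_op(VCPUOP_down, cpu, NULL);

    /* 3) Register the per-vCPU info area with the hypervisor. */
    HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);

    /* 4) If we paused it above, bring it back online. */
    if (other_cpu && was_up)
        HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);
}

A resume path would typically call such a helper once per vCPU, before
per-vCPU event channels and timers are re-established.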