[Xen-devel] [PATCH 0/5] xen/pvh*: Support > 32 VCPUs at restore



This patch series fixes several issues in the xen_vcpu setup
logic; a short summary of each patch follows.

Simplify xen_vcpu related code: code refactoring in advance of the
rest of the patch series.

Support > 32 VCPUs at restore: unify all vcpu restore logic in
xen_vcpu_restore() and support > 32 VCPUs for PVH*.
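
As an illustration, a minimal sketch of the registration step this
relies on (hypothetical helper name; in the series this logic lives
inside xen_vcpu_setup(), and error handling is abbreviated). CPUs at
or above MAX_VIRT_CPUS have no vcpu_info slot in shared_info, so
their per-cpu vcpu_info must be explicitly registered with the
hypervisor, and registered again after restore:

  static int xen_vcpu_info_register(int cpu)
  {
          struct vcpu_info *vcpup = &per_cpu(xen_vcpu_info, cpu);
          struct vcpu_register_vcpu_info info;

          info.mfn = arbitrary_virt_to_mfn(vcpup);
          info.offset = offset_in_page(vcpup);

          /*
           * Fails on hypervisors without VCPUOP_register_vcpu_info;
           * the caller then falls back to the shared_info slots and
           * is limited to MAX_VIRT_CPUS vcpus.
           */
          return HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info,
                                    xen_vcpu_nr(cpu), &info);
  }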

Remove vcpu info placement from restore (!SMP): some pv_ops are
marked read-only after init (__ro_after_init), so let's not redo
xen_setup_vcpu_info_placement() at restore.
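
For the shape of the fix, a heavily abbreviated sketch of the unified
restore path (the real xen_vcpu_restore() also brings vCPUs down and
back up around the re-registration):

  void xen_vcpu_restore(void)
  {
          int cpu;

          for_each_possible_cpu(cpu) {
                  /*
                   * Re-register vcpu_info only; no call to
                   * xen_setup_vcpu_info_placement(), whose pv_ops
                   * rewrite would write to __ro_after_init memory.
                   */
                  xen_vcpu_setup(cpu);
          }
  }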

Handle xen_vcpu_setup() failure in hotplug: handle vcpu_info
registration failures by propagating them from the cpuhp-prepare
callback back up to the cpuhp logic.
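
As a sketch of that propagation (the cpuhp state string and init
helper name are illustrative, and this assumes xen_vcpu_setup() now
returns an error code):

  static int xen_cpu_up_prepare(unsigned int cpu)
  {
          /*
           * A negative return from a cpuhp prepare callback makes
           * the hotplug core abort the bring-up of this CPU.
           */
          return xen_vcpu_setup(cpu);
  }

  static void __init xen_cpuhp_init(void)
  {
          if (cpuhp_setup_state_nocalls(CPUHP_XEN_PREPARE,
                                        "x86/xen:prepare",
                                        xen_cpu_up_prepare, NULL) < 0)
                  pr_err("Xen: cpuhp prepare registration failed\n");
  }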

Handle xen_vcpu_setup() failure at boot: bring down CPUs beyond
MAX_VIRT_CPUS if we fall back to xen_have_vcpu_info_placement = 0.
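
A sketch of that fallback (hypothetical helper name; the actual
change is folded into the existing setup paths). With
xen_have_vcpu_info_placement = 0 only the shared_info slots for CPUs
0 .. MAX_VIRT_CPUS-1 are usable, so the remaining CPUs are masked out
of cpu_possible_mask before SMP bring-up:

  static void __init xen_limit_possible_cpus(void)
  {
          unsigned int cpu;

          if (xen_have_vcpu_info_placement)
                  return;

          /* Only CPUs with a shared_info vcpu_info slot can run. */
          for_each_possible_cpu(cpu)
                  if (cpu >= MAX_VIRT_CPUS)
                          set_cpu_possible(cpu, false);
  }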

Tested with various combinations of PV/PVHv2/PVHVM save/restore
and CPU hot-add/hot-remove. Also tested by simulating failure of
VCPUOP_register_vcpu_info.

Please review.

Ankur Arora (5):
  xen/vcpu: Simplify xen_vcpu related code
  xen/pvh*: Support > 32 VCPUs at domain restore
  xen/pv: Fix OOPS on restore for a PV, !SMP domain
  xen/vcpu: Handle xen_vcpu_setup() failure in hotplug
  xen/vcpu: Handle xen_vcpu_setup() failure at boot

 arch/x86/xen/enlighten.c     | 154 +++++++++++++++++++++++++++++++------------
 arch/x86/xen/enlighten_hvm.c |  33 ++++------
 arch/x86/xen/enlighten_pv.c  |  87 +++++++++++-------------
 arch/x86/xen/smp.c           |  31 +++++++++
 arch/x86/xen/smp.h           |   2 +
 arch/x86/xen/smp_hvm.c       |  14 +++-
 arch/x86/xen/smp_pv.c        |   6 +-
 arch/x86/xen/suspend_hvm.c   |  11 +---
 arch/x86/xen/xen-ops.h       |   3 +-
 include/xen/xen-ops.h        |   2 +
 10 files changed, 218 insertions(+), 125 deletions(-)

-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel