
Re: [Xen-devel] [PATCH 0/8] Various VPMU patches


I am trying the patch set on Trinity and ran into the following issue:

root@sos-dev02:/sandbox/xen/xen-git-boris# xl create /sandbox/vm-images/xen-configs/ubuntu-12.10_dom.hvm
Parsing config from /sandbox/vm-images/xen-configs/ubuntu-12.10_dom.hvm
libxl: error: libxl_create.c:416:libxl__domain_make: domain creation fail
libxl: error: libxl_create.c:644:initiate_domain_create: cannot make domain: -3
libxl: error: libxl.c:1377:libxl__destroy_domid: non-existant domain -1
libxl: error: libxl.c:1341:domain_destroy_callback: unable to destroy guest with domid 4294967295
libxl: error: libxl_create.c:1208:domcreate_destruction_cb: unable to destroy domain 4294967295 following failed creation

The same HVM config file starts up fine on an older hypervisor. Any clues?


On 4/9/2013 12:26 PM, Boris Ostrovsky wrote:
Here is a set of VPMU changes that I thought might be useful.

The first two patches avoid VMEXITs on certain MSR accesses. This is
already part of VMX, so I added similar code for SVM.

The third patch should address the problem that Suravee mentioned on the
list a few weeks ago. It's a slightly different solution than the one he
suggested.

4th patch stops counters on AMD when a VCPU is de-scheduled.

5th is trivial Haswell support (new model number)

6th patch factors out common code from VMX and SVM.

7th is lazy VPMU save/restore. It is optimized for the case when CPUs are
not over-subscribed and a VCPU stays on the same processor most of the time.
It is more beneficial on Intel processors because the hardware keeps track
of the guest/host global control register in the VMCS and therefore we don't
need to explicitly stop the counters. On AMD we do need to stop them, so
while there is an improvement, it is not as pronounced.

Here are some numbers that I managed to collect while running guests with
oprofile. This is the number of instructions executed in vpmu_save/vpmu_load.

               Eager VPMU                 Lazy VPMU
            Save      Restore       Save      Restore
Intel      181       225            46         50
AMD        132       104            80        102

When processors are oversubscribed, a lazy restore may take about 2.5 times
as many instructions as in the dedicated case if a new VCPU jumps onto the
processor (which doesn't happen on every context switch).


Boris Ostrovsky (8):
   x86/AMD: Allow more fine-grained control of VMCB MSR Permission Map
   x86/AMD: Do not intercept access to performance counters MSRs
   x86/AMD: Read VPMU MSRs from context when it is not loaded into HW
   x86/AMD: Stop counters on VPMU save
   x86/VPMU: Add Haswell support
   x86/VPMU: Factor out VPMU common code
   x86/VPMU: Save/restore VPMU only when necessary
   x86/AMD: Clean up context_update() in AMD VPMU code

  xen/arch/x86/domain.c                    |  14 ++-
  xen/arch/x86/hvm/svm/svm.c               |  19 ++--
  xen/arch/x86/hvm/svm/vpmu.c              | 188 +++++++++++++++++++++++--------
  xen/arch/x86/hvm/vmx/vmx.c               |   2 -
  xen/arch/x86/hvm/vmx/vpmu_core2.c        |  48 ++++----
  xen/arch/x86/hvm/vpmu.c                  | 114 +++++++++++++++++--
  xen/include/asm-x86/hvm/svm/vmcb.h       |   8 +-
  xen/include/asm-x86/hvm/vmx/vpmu_core2.h |   1 -
  xen/include/asm-x86/hvm/vpmu.h           |   6 +-
  9 files changed, 296 insertions(+), 104 deletions(-)

Xen-devel mailing list


