
Re: [Xen-devel] [PATCH v5 00/17] x86/PMU: Xen PMU PV(H) support



> From: Konrad Rzeszutek Wilk [mailto:konrad@xxxxxxxxxx]
> Sent: Monday, April 21, 2014 10:38 PM
> On Mon, Feb 17, 2014 at 12:55:47PM -0500, Boris Ostrovsky wrote:
> > Here is the fifth version of PV(H) PMU patches.
> >
> 
> Hey Jun, Kevin and Don,
> 
> Could you kindly take a look at these patches? I believe they only
> need an Ack from the Intel maintainers.

Hi Konrad, I'll take a look at those patches.

> 
> Thank you!
> >
> > Changes in v5:
> >
> > * Dropped patch number 2 ("Stop AMD counters when called from
> >   vpmu_save_force()") as no longer needed
> > * Added patch number 2 that marks context as loaded before PMU
> >   registers are loaded. This prevents a situation where a PMU interrupt
> >   may occur while context is still viewed as not loaded. (This is
> >   really a bug fix for existing VPMU code)
> > * Renamed xenpmu.h files to pmu.h
> > * More careful use of is_pv_domain(), is_hvm_domain(), is_pvh_domain()
> >   and has_hvm_container_domain(). Also explicitly disabled support for
> >   PVH until patch 16 to make the distinction between usage of the above
> >   macros more clear.
> > * Added support for disabling VPMU support during runtime.
> > * Disable VPMUs for non-privileged domains when switching to privileged
> >   profiling mode
> > * Added ARM stub for xen_arch_pmu_t
> > * Separated vpmu_mode from vpmu_features
> > * Moved CS register query to make sure we use the appropriate query
> >   mechanism for various guest types.
> > * LVTPC is now set from value in shared area, not copied from dom0
> > * Various code and comments cleanup as suggested by Jan.
> >
> > Changes in v4:
> >
> > * Added support for PVH guests:
> >   o changes in pvpmu_init() to accommodate both PV and PVH guests,
> >     still in patch 10
> >   o more careful use of is_hvm_domain
> >   o Additional patch (16)
> > * Moved HVM interrupt handling out of vpmu_do_interrupt() for NMI-safe
> >   handling
> > * Fixed dom0's VCPU selection in privileged mode
> > * Added a cast in register copy for 32-bit PV guests' cpu_user_regs_t
> >   in vpmu_do_interrupt. (don't want to expose compat_cpu_user_regs in
> >   a public header)
> > * Renamed public structures by prefixing them with "xen_"
> > * Added an entry for xenpf_symdata in xlat.lst
> > * Fixed pv_cpuid check for vpmu-specific cpuid adjustments
> > * Various code style fixes
> > * Eliminated anonymous unions
> > * Added more verbiage to NMI patch description
> >
> >
> > Changes in v3:
> >
> > * Moved PMU MSR banks out from architectural context data structures
> >   to allow for future expansion without protocol changes
> > * PMU interrupts can be either NMIs or regular vector interrupts (the latter
> > is the default)
> > * Context is now marked as PMU_CACHED by the hypervisor code to avoid
> >   certain race conditions with the guest
> > * Fixed races with PV guest in MSR access handlers
> > * More Intel VPMU cleanup
> > * Moved NMI-unsafe code from NMI handler
> > * Dropped changes to vcpu->is_running
> > * Added LVTPC apic handling (cached for PV guests)
> > * Separated privileged profiling mode into a standalone patch
> > * Separated NMI handling into a standalone patch
> >
> >
> > Changes in v2:
> >
> > * Xen symbols are exported as a data structure (as opposed to a set of
> >   formatted strings in v1). Even though one symbol per hypercall is
> >   returned, performance appears to be acceptable: reading the whole
> >   file from dom0 userland takes on average about twice as long as
> >   reading /proc/kallsyms
> > * More cleanup of Intel VPMU code to simplify publicly exported structures
> > * There are architecture-independent and x86-specific public include
> >   files (ARM has a stub)
> > * General cleanup of public include files to make them more
> >   presentable (and to make auto doc generation better)
> > * Setting of vcpu->is_running is now done on ARM in schedule_tail as
> >   well (making changes to common/schedule.c architecture-independent).
> >   Note that this is not tested since I don't have access to ARM
> >   hardware.
> > * PCPU ID of interrupted processor is now passed to PV guest
> >
> >
> > The following patch series adds PMU support in Xen for PV(H)
> > guests. There is a companion patchset for Linux kernel. In addition,
> > another set of changes will be provided (later) for userland perf
> > code.
> >
> > This version has the following limitations:
> > * For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
> > * Hypervisor code is only profiled on processors that have running
> >   dom0 VCPUs on them.
> > * No backtrace support.
> > * Will fail to load under XSM: we ran out of bits in the permissions
> >   vector and this needs to be fixed separately
> >
> > A few notes that may help reviewing:
> >
> > * A shared data structure (xenpmu_data_t) between each PV VCPU and
> >   hypervisor CPU is used for passing registers' values as well as PMU
> >   state at the time of PMU interrupt.
> > * PMU interrupts are taken by the hypervisor either as NMIs or regular
> >   vector interrupts for both HVM and PV(H). The interrupts are sent as
> >   NMIs to HVM guests and as virtual interrupts to PV(H) guests
> > * PV guest's interrupt handler does not read/write PMU MSRs directly.
> >   Instead, it accesses xenpmu_data_t and flushes it to HW before
> >   returning.
> > * PMU mode is controlled at runtime via
> >   /sys/hypervisor/pmu/pmu/{pmu_mode,pmu_flags} in addition to the
> >   'vpmu' boot option (which is preserved for backward compatibility).
> >   The following modes are provided:
> >   * disable: VPMU is off
> >   * enable: VPMU is on. Guests can profile themselves, dom0 profiles
> >     itself and Xen
> >   * priv_enable: dom0-only profiling. dom0 collects samples for
> >     everyone. Sampling in guests is suspended.
> > * /proc/xen/xensyms file exports the hypervisor's symbols to dom0
> >   (similar to /proc/kallsyms)
> > * VPMU infrastructure is now used for HVM, PV and PVH and therefore
> >   has been moved up from the hvm subtree
> >
> >
> >
> >
> > Boris Ostrovsky (17):
> >   common/symbols: Export hypervisor symbols to privileged guest
> >   VPMU: Mark context LOADED before registers are loaded
> >   x86/VPMU: Minor VPMU cleanup
> >   intel/VPMU: Clean up Intel VPMU code
> >   x86/VPMU: Handle APIC_LVTPC accesses
> >   intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
> >   x86/VPMU: Add public xenpmu.h
> >   x86/VPMU: Make vpmu not HVM-specific
> >   x86/VPMU: Interface for setting PMU mode and flags
> >   x86/VPMU: Initialize PMU for PV guests
> >   x86/VPMU: Add support for PMU register handling on PV guests
> >   x86/VPMU: Handle PMU interrupts for PV guests
> >   x86/VPMU: Add privileged PMU mode
> >   x86/VPMU: Save VPMU state for PV guests during context switch
> >   x86/VPMU: NMI-based VPMU support
> >   x86/VPMU: Support for PVH guests
> >   x86/VPMU: Move VPMU files up from hvm/ directory
> >
> >  xen/arch/x86/Makefile                    |   1 +
> >  xen/arch/x86/domain.c                    |  15 +-
> >  xen/arch/x86/hvm/Makefile                |   1 -
> >  xen/arch/x86/hvm/hvm.c                   |   3 +-
> >  xen/arch/x86/hvm/svm/Makefile            |   1 -
> >  xen/arch/x86/hvm/svm/svm.c               |   6 +-
> >  xen/arch/x86/hvm/svm/vpmu.c              | 494 ----------------
> >  xen/arch/x86/hvm/vlapic.c                |   5 +-
> >  xen/arch/x86/hvm/vmx/Makefile            |   1 -
> >  xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++
> >  xen/arch/x86/hvm/vmx/vmx.c               |   6 +-
> >  xen/arch/x86/hvm/vmx/vpmu_core2.c        | 931 ------------------------------
> >  xen/arch/x86/hvm/vpmu.c                  | 266 ---------
> >  xen/arch/x86/oprofile/op_model_ppro.c    |   8 +-
> >  xen/arch/x86/platform_hypercall.c        |  18 +
> >  xen/arch/x86/traps.c                     |  35 +-
> >  xen/arch/x86/vpmu.c                      | 720 +++++++++++++++++++++++
> >  xen/arch/x86/vpmu_amd.c                  | 509 +++++++++++++++++
> >  xen/arch/x86/vpmu_intel.c                | 940 +++++++++++++++++++++++++++++++
> >  xen/arch/x86/x86_64/compat/entry.S       |   4 +
> >  xen/arch/x86/x86_64/entry.S              |   4 +
> >  xen/arch/x86/x86_64/platform_hypercall.c |   2 +
> >  xen/common/event_channel.c               |   1 +
> >  xen/common/symbols.c                     |  50 +-
> >  xen/common/vsprintf.c                    |   2 +-
> >  xen/include/asm-x86/domain.h             |   2 +
> >  xen/include/asm-x86/hvm/vcpu.h           |   3 -
> >  xen/include/asm-x86/hvm/vmx/vmcs.h       |   4 +-
> >  xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  51 --
> >  xen/include/asm-x86/hvm/vpmu.h           | 104 ----
> >  xen/include/asm-x86/vpmu.h               |  99 ++++
> >  xen/include/public/arch-arm.h            |   3 +
> >  xen/include/public/arch-x86/pmu.h        |  63 +++
> >  xen/include/public/platform.h            |  16 +
> >  xen/include/public/pmu.h                 |  99 ++++
> >  xen/include/public/xen.h                 |   2 +
> >  xen/include/xen/hypercall.h              |   4 +
> >  xen/include/xen/softirq.h                |   1 +
> >  xen/include/xen/symbols.h                |   6 +-
> >  xen/include/xlat.lst                     |   1 +
> >  40 files changed, 2661 insertions(+), 1875 deletions(-)
> >  delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
> >  delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
> >  delete mode 100644 xen/arch/x86/hvm/vpmu.c
> >  create mode 100644 xen/arch/x86/vpmu.c
> >  create mode 100644 xen/arch/x86/vpmu_amd.c
> >  create mode 100644 xen/arch/x86/vpmu_intel.c
> >  delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
> >  delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
> >  create mode 100644 xen/include/asm-x86/vpmu.h
> >  create mode 100644 xen/include/public/arch-x86/pmu.h
> >  create mode 100644 xen/include/public/pmu.h
> >
> > --
> > 1.8.1.4
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-devel
