
[Xen-devel] [PATCH v2 0/9] x86/vmx: Don't leak EFER.NXE into guest context



The purpose of this patchset is to fix a longstanding bug whereby Xen's
EFER.NXE setting leaks into HVM guest context, and has a visible effect on
the guest's pagewalk.
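
For illustration only (this is not Xen's pagewalk code, and the names below
are made up for the example), here is a rough C sketch of why the leak is
guest-visible: bit 63 of a 64-bit leaf pagetable entry is the NX bit when
EFER.NXE is set, but a reserved bit when it is clear, so the fault behaviour
a guest observes depends on whose NXE setting is actually in effect.

/*
 * Illustrative sketch only - not Xen's pagewalk code.  check_fetch() and
 * the other names are hypothetical, for this example.
 */
#include <stdint.h>

#define EFER_NXE  (1ULL << 11)  /* EFER.NXE: enable the NX bit.              */
#define PTE_BIT63 (1ULL << 63)  /* NX bit if NXE=1, reserved bit if NXE=0.   */

enum walk_fault { WALK_OK, WALK_RSVD_FAULT, WALK_NX_FAULT };

/* Outcome of an instruction fetch through a leaf PTE, for a given EFER. */
static enum walk_fault check_fetch(uint64_t pte, uint64_t efer)
{
    if ( !(efer & EFER_NXE) )
        /* NXE clear: bit 63 is reserved, and faults for any access. */
        return (pte & PTE_BIT63) ? WALK_RSVD_FAULT : WALK_OK;

    /* NXE set: bit 63 is No-Execute, and faults for instruction fetches. */
    return (pte & PTE_BIT63) ? WALK_NX_FAULT : WALK_OK;
}

A guest running with NXE clear expects reserved-bit faults for entries with
bit 63 set, but if Xen's NXE=1 setting leaks in, the hardware pagewalk gives
those entries NX semantics instead.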

To allow patch 9 to function, there are 8 patches of improvements to the MSR
load/save infrastructure, purely to support first-generation VT-x hardware,
which lacks the dedicated VMCS controls for loading and saving EFER and must
therefore switch EFER via the MSR load/save lists.
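
As a rough sketch of where this is heading: the enum names, the
vmx_add_msr() signature and efer_via_msr_lists() below are assumptions
inferred from the patch titles, not necessarily the final API, while
MSR_EFER and read_efer() are Xen's existing names, redeclared here only to
keep the sketch standalone.

#include <stdint.h>

#define MSR_EFER 0xc0000080U          /* Architectural MSR index for EFER.  */

struct vcpu;                          /* Xen's per-vCPU structure.          */
extern uint64_t read_efer(void);      /* Xen helper: the current host EFER. */

enum vmx_msr_list_type {
    VMX_MSR_HOST,                     /* Loaded by hardware on VM exit.     */
    VMX_MSR_GUEST,                    /* Loaded on entry, saved on exit.    */
    VMX_MSR_GUEST_LOADONLY,           /* Loaded on entry only (patch 7).    */
};

/* Assumed post-series signature, per patches 4, 6 and 7. */
int vmx_add_msr(struct vcpu *v, uint32_t msr, uint64_t val,
                enum vmx_msr_list_type type);

static int efer_via_msr_lists(struct vcpu *v, uint64_t guest_efer)
{
    /* Restore Xen's EFER (and hence its NXE setting) on every VM exit... */
    int rc = vmx_add_msr(v, MSR_EFER, read_efer(), VMX_MSR_HOST);

    if ( rc )
        return rc;

    /* ...and load the guest's EFER on VM entry, without saving it back. */
    return vmx_add_msr(v, MSR_EFER, guest_efer, VMX_MSR_GUEST_LOADONLY);
}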

This series is based on the x86-next branch, which has the prerequisite
content to split out MSR_DEBUGCTL handling.

Major changes from v1:
  * MSR_DEBUGCTL handling fixes disentangled, and already completed.
  * Improvements to LBR handling, fixing a logic bug involving the
    behavioural changes to vmx_add_msr().
  * Fix a regression introduced on certain Nehalem/Westmere hardware.  See
    patch 9 for more details.

Andrew Cooper (9):
  x86/vmx: API improvements for MSR load/save infrastructure
  x86/vmx: Internal cleanup for MSR load/save infrastructure
  x86/vmx: Factor locate_msr_entry() out of vmx_find_msr() and vmx_add_msr()
  x86/vmx: Support remote access to the MSR lists
  x86/vmx: Improvements to LBR MSR handling
  x86/vmx: Pass an MSR value into vmx_add_msr()
  x86/vmx: Support load-only guest MSR list entries
  x86/vmx: Support removing MSRs from the host/guest load/save lists
  x86/vmx: Don't leak EFER.NXE into guest context

 xen/arch/x86/cpu/vpmu_intel.c      |  14 +-
 xen/arch/x86/domain.c              |  10 --
 xen/arch/x86/hvm/vmx/vmcs.c        | 313 ++++++++++++++++++++++++++-----------
 xen/arch/x86/hvm/vmx/vmx.c         | 127 +++++++++------
 xen/include/asm-x86/hvm/hvm.h      |   4 +-
 xen/include/asm-x86/hvm/vmx/vmcs.h | 118 +++++++++++---
 xen/include/xen/sched.h            |   2 +-
 7 files changed, 414 insertions(+), 174 deletions(-)

-- 
2.1.4

