[Xen-devel] [PATCH v3 0/6] Fixes to pagetable handling
This series has been a long time in preparation (i.e. most of the 4.8 and 4.9
dev cycles). It started when I tried to make an XTF PoC for XSA-176, and
stumbled upon the _PAGE_PAGED aliasing issue (see patch 7), which caused my
PoC to be descheduled waiting for (a non-existent) paging agent to respond.
This series is built upon:
1) The switch to using _Bool. I spent rather too long trying to debug why
CR0.WP wasn't behaving properly, and it was down to static inline bool_t
guest_wp_enabled().
2) The series to switch emulation to using system-segment relative memory
accesses (directly relevant to patches 1 and 2), which in turn resulted
in the discovery of XSA-191.
3) The CPUID improvement work to get maxphysaddr into a sensibly audited
state, and sensibly accessible location.
Since v2, some patches have been committed. All other fixes are from review.
Andrew Cooper (6):
x86/pagewalk: Clean up guest_supports_* predicates
x86/pagewalk: Helpers for reserved bit handling
x86/pagewalk: Re-implement the pagetable walker
x86/shadow: Use the pagewalk reserved bits helpers
x86/pagewalk: Improve the logic behind setting access and dirty bits
x86/pagewalk: non-functional cleanup
xen/arch/x86/mm/guest_walk.c | 575 +++++++++++++++++++++-----------------
xen/arch/x86/mm/hap/guest_walk.c | 21 +-
xen/arch/x86/mm/hap/nested_ept.c | 2 +-
xen/arch/x86/mm/p2m.c | 17 +-
xen/arch/x86/mm/shadow/multi.c | 102 ++++---
xen/include/asm-x86/cpufeature.h | 1 +
xen/include/asm-x86/guest_pt.h | 228 ++++++++++++---
xen/include/asm-x86/hvm/hvm.h | 4 -
xen/include/asm-x86/p2m.h | 2 +-
xen/include/asm-x86/page.h | 3 -
xen/include/asm-x86/processor.h | 2 +
xen/include/asm-x86/x86_64/page.h | 6 -
12 files changed, 579 insertions(+), 384 deletions(-)
--
2.1.4
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel