[PATCH v5 02/10] x86/mm: Avoid hard-coding PAT in get_page_from_l1e()
get_page_from_l1e() relied on Xen's choice of PAT, which is brittle in
the face of future PAT changes.  Instead, compute the actual
cacheability used by the CPU and switch on that, as this will work no
matter what PAT Xen uses.  No functional change intended.

This code is itself questionable and may be removed in the future, but
removing it would be an observable behavior change and so is out of
scope for this patch series.

Signed-off-by: Demi Marie Obenour <demi@xxxxxxxxxxxxxxxxxxxxxx>
---
Changes since v4:
- Do not add new pte_flags_to_cacheability() helper, as this code may
  be removed in the near future and so adding a new helper for it is a
  bad idea.
- Do not BUG() in the event of an unexpected cacheability.  This cannot
  happen, but it is simpler to force such types to UC than to prove
  that the BUG() is not reachable.

Changes since v3:
- Compute and use the actual cacheability as seen by the processor.

Changes since v2:
- Improve commit message.
---
 xen/arch/x86/mm.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 78b1972e4170cacccc9c37c6e64e76e66a7da87f..dba6c77ef2f7ed7fcb7f7e526583ccadd35e62cc 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -959,14 +959,16 @@ get_page_from_l1e(
             flip = _PAGE_RW;
         }
 
-        switch ( l1f & PAGE_CACHE_ATTRS )
+        switch ( 0xFF & (XEN_MSR_PAT >> (8 * pte_flags_to_cacheattr(l1f))) )
         {
-        case 0: /* WB */
-            flip |= _PAGE_PWT | _PAGE_PCD;
+        case X86_MT_UC:
+        case X86_MT_UCM:
+        case X86_MT_WC:
+            /* not cacheable */
             break;
-        case _PAGE_PWT: /* WT */
-        case _PAGE_PWT | _PAGE_PAT: /* WP */
-            flip |= _PAGE_PCD | (l1f & _PAGE_PAT);
+        default:
+            /* cacheable */
+            flip |= ((l1f & PAGE_CACHE_ATTRS) ^ _PAGE_UC);
             break;
         }

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
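
[Editorial note: the following standalone sketch is not part of the patch.
It illustrates what the new switch condition computes: the PTE's
PWT/PCD/PAT bits form a 3-bit index into the eight PAT entries, and the
selected byte of the PAT MSR image is the effective memory type.  The flag
bit positions and memory-type encodings follow the x86 architecture; the
example PAT layout and the pat_type() helper are illustrative assumptions,
not Xen's actual XEN_MSR_PAT definition.]

/* Sketch only: derive the effective memory type from a PAT MSR image. */
#include <stdint.h>
#include <stdio.h>

/* x86 PTE cacheability flag bits */
#define _PAGE_PWT  (1u << 3)
#define _PAGE_PCD  (1u << 4)
#define _PAGE_PAT  (1u << 7)

/* Architectural memory-type encodings stored in each PAT entry */
#define X86_MT_UC   0x00   /* Uncacheable */
#define X86_MT_WC   0x01   /* Write Combining */
#define X86_MT_WT   0x04   /* Write Through */
#define X86_MT_WP   0x05   /* Write Protected */
#define X86_MT_WB   0x06   /* Write Back */
#define X86_MT_UCM  0x07   /* UC- (overridable by MTRRs) */

/* PAT index is PAT:PCD:PWT, i.e. bit 7 -> bit 2, bits 4:3 -> bits 1:0 */
static unsigned int pte_flags_to_cacheattr(unsigned int flags)
{
    return ((flags >> 5) & 4) | ((flags >> 3) & 3);
}

/* Select the byte of the PAT MSR image named by the PTE's PAT index */
static unsigned int pat_type(uint64_t pat, unsigned int l1f)
{
    return 0xFF & (pat >> (8 * pte_flags_to_cacheattr(l1f)));
}

int main(void)
{
    /* Example PAT layout only; Xen's real XEN_MSR_PAT may differ */
    const uint64_t example_pat =
        ((uint64_t)X86_MT_WB  << 0x00) | ((uint64_t)X86_MT_WT  << 0x08) |
        ((uint64_t)X86_MT_UCM << 0x10) | ((uint64_t)X86_MT_UC  << 0x18) |
        ((uint64_t)X86_MT_WC  << 0x20) | ((uint64_t)X86_MT_WP  << 0x28) |
        ((uint64_t)X86_MT_UC  << 0x30) | ((uint64_t)X86_MT_UC  << 0x38);

    printf("no flags   -> type %#x\n", pat_type(example_pat, 0));
    printf("PWT set    -> type %#x\n", pat_type(example_pat, _PAGE_PWT));
    printf("PCD set    -> type %#x\n", pat_type(example_pat, _PAGE_PCD));
    return 0;
}

[With a PAT whose entry 0 is WB, a PTE with no cacheability bits set
resolves to X86_MT_WB (0x06) and therefore takes the "cacheable" default
case in the patched switch; PCD alone resolves to UC- and takes the
"not cacheable" path.]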