
[Xen-devel] [PATCH for-4.12 v3 3/8] amd/npt/shadow: replace assert that prevents creating 2M/1G MMIO entries



The assert was originally added to make sure that higher-order
regions (> PAGE_ORDER_4K) could not be used to bypass the
mmio_ro_ranges check performed by p2m_type_to_flags, which only looks
at the base MFN and hence cannot catch read-only pages further inside
a superpage mapping.

Such an overlap, however, is already checked for in set_mmio_p2m_entry,
which makes sure that higher-order mappings don't overlap with
mmio_ro_ranges, thus allowing high-order MMIO mappings to be created
safely.
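As a plain illustration of why the overlap check makes a higher-order
mapping safe, here is a minimal standalone sketch; the helper names
are made up for the example and this is not the actual Xen rangeset
API (the real check uses rangeset_overlaps_range() on mmio_ro_ranges,
with an inclusive end frame):

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_ORDER_4K 0
    #define PAGE_ORDER_2M 9

    /* One read-only MMIO range, [start, end] inclusive, in frame numbers. */
    struct ro_range {
        unsigned long start, end;
    };

    /* True if [s, e] (inclusive) overlaps the read-only range. */
    static bool overlaps_ro(const struct ro_range *ro, unsigned long s,
                            unsigned long e)
    {
        return s <= ro->end && ro->start <= e;
    }

    /*
     * Refuse a high-order MMIO mapping at mfn when any page of the
     * would-be superpage lies in a read-only range; the caller then
     * falls back to smaller (ultimately 4K) mappings so per-page
     * permissions stay correct.
     */
    static bool mmio_mapping_ok(const struct ro_range *ro, unsigned long mfn,
                                unsigned int order)
    {
        if ( order == PAGE_ORDER_4K )
            return true; /* 4K entries get their flags checked per page */

        return !overlaps_ro(ro, mfn, mfn + (1UL << order) - 1);
    }

    int main(void)
    {
        struct ro_range ro = { .start = 0x100200, .end = 0x100203 };

        /* 2M mapping covering mfns 0x100200..0x1003ff overlaps the range. */
        printf("2M at 0x100200: %s\n",
               mmio_mapping_ok(&ro, 0x100200, PAGE_ORDER_2M) ? "map" : "split");
        /* 2M mapping covering mfns 0x100400..0x1005ff does not. */
        printf("2M at 0x100400: %s\n",
               mmio_mapping_ok(&ro, 0x100400, PAGE_ORDER_2M) ? "map" : "split");
        return 0;
    }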

Replace the assert so that 2M/1G entries can be created for MMIO
regions, and add replacement asserts that ensure such entries don't
overlap with MMIO read-only ranges.

Note that 1G MMIO entries will not be created unless mmio_order() is
changed to allow them.
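For reference, order selection works roughly like the sketch below: a
chunk is only mapped with a higher order when its start frame is
suitably aligned and enough frames remain, and 1G is further gated
behind an explicit opt-in. The pick_order() helper is hypothetical and
only illustrates the shape of the decision, not the actual mmio_order()
implementation:

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_ORDER_4K 0
    #define PAGE_ORDER_2M 9
    #define PAGE_ORDER_1G 18

    /*
     * Pick the mapping order for a chunk starting at frame "fn" with "nr"
     * frames left: the start must be aligned to the candidate order and
     * the remainder must cover it, and 1G is only used when allow_1g is set.
     */
    static unsigned int pick_order(unsigned long fn, unsigned long nr,
                                   bool allow_1g)
    {
        if ( allow_1g && !(fn & ((1UL << PAGE_ORDER_1G) - 1)) &&
             (nr >> PAGE_ORDER_1G) )
            return PAGE_ORDER_1G;

        if ( !(fn & ((1UL << PAGE_ORDER_2M) - 1)) && (nr >> PAGE_ORDER_2M) )
            return PAGE_ORDER_2M;

        return PAGE_ORDER_4K;
    }

    int main(void)
    {
        /* A 1G-aligned, 1G-sized region stays at 2M chunks unless allowed. */
        printf("order = %u\n", pick_order(1UL << 18, 1UL << 18, false)); /* 9 */
        printf("order = %u\n", pick_order(1UL << 18, 1UL << 18, true));  /* 18 */
        return 0;
    }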

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
---
Without this patch, trying to create a PVH dom0 triggers an assert on
certain hardware, depending on the memory map.
---
Changes since v2:
 - Unify checks into a helper function.

Changes since v1:
 - Fix subject.
 - Replace the assert with a suitable one.
---
 xen/arch/x86/mm/p2m-pt.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 12f92cf1f0..52eaa24b18 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -479,6 +479,23 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
     return rc;
 }
 
+/* Checks only applicable to entries with order > PAGE_ORDER_4K */
+static void check_entry(mfn_t mfn, p2m_type_t new, p2m_type_t old,
+                        unsigned int order)
+{
+    ASSERT(order > PAGE_ORDER_4K);
+    ASSERT(old != p2m_ioreq_server);
+    if ( new == p2m_mmio_direct )
+        ASSERT(!mfn_eq(mfn, INVALID_MFN) &&
+               !rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
+                                        mfn_x(mfn) + (1ul << order)));
+    else if ( p2m_allows_invalid_mfn(new) || new == p2m_invalid ||
+              new == p2m_mmio_dm )
+        ASSERT(mfn_valid(mfn) || mfn_eq(mfn, INVALID_MFN));
+    else
+        ASSERT(mfn_valid(mfn));
+}
+
 /* Returns: 0 for success, -errno for failure */
 static int
 p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
@@ -575,8 +592,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             }
         }
 
-        ASSERT(p2m_flags_to_type(flags) != p2m_ioreq_server);
-        ASSERT(!mfn_valid(mfn) || p2mt != p2m_mmio_direct);
+        check_entry(mfn, p2mt, p2m_flags_to_type(flags), page_order);
         l3e_content = mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt)
             ? p2m_l3e_from_pfn(mfn_x(mfn),
                                p2m_type_to_flags(p2m, p2mt, mfn, 2))
@@ -667,8 +683,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             }
         }
 
-        ASSERT(p2m_flags_to_type(flags) != p2m_ioreq_server);
-        ASSERT(!mfn_valid(mfn) || p2mt != p2m_mmio_direct);
+        check_entry(mfn, p2mt, p2m_flags_to_type(flags), page_order);
         l2e_content = mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt)
             ? p2m_l2e_from_pfn(mfn_x(mfn),
                                p2m_type_to_flags(p2m, p2mt, mfn, 1))
-- 
2.17.2 (Apple Git-113)


