[PATCH] x86/ept: simplify detection of special pages for EMT calculation
The current way to detect whether a page handed to
epte_get_entry_emt() is special and needs a forced write-back cache
attribute involves iterating over all the constituent 4K pages when
the input is a superpage.
Such a loop consumes a large amount of CPU time for 1GiB pages (order
18): on a Xeon® Silver 4216 (Cascade Lake) at 2GHz it takes 1.5ms on
average. Note that this figure only accounts for the
is_special_page() loop, not the whole of epte_get_entry_emt().
Moreover, the resolve_misconfig() operation that calls into
epte_get_entry_emt() runs while holding the p2m lock in write
(exclusive) mode, which blocks concurrent EPT_MISCONFIG faults and
prevents most guest hypercalls from progressing, since they need to
take the p2m lock in read mode in order to access any guest-provided
hypercall buffers.
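For reference, the iteration count behind the 1.5ms figure follows
directly from the page sizes involved. A quick back-of-the-envelope
sketch (the measured loop time is from above; the per-call figure is a
derived estimate, not a measurement):

```python
# Back-of-the-envelope cost of the removed is_special_page() loop.
# The 1.5ms total is the measurement quoted in the commit message;
# the per-call estimate below is derived from it.
PAGE_SHIFT = 12                 # x86 base page: 4KiB
SUPERPAGE_SHIFT = 30            # 1GiB superpage

order = SUPERPAGE_SHIFT - PAGE_SHIFT     # EPT order of a 1GiB mapping
iterations = 1 << order                  # is_special_page() calls per lookup

loop_time_ns = 1.5e6                     # 1.5ms measured on Xeon Silver 4216
per_call_ns = loop_time_ns / iterations

print(f"order={order}, iterations={iterations}, ~{per_call_ns:.1f}ns/call")
```

So every EMT recalculation for a 1GiB mapping performed 262144 checks,
all under the exclusive p2m lock.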
Simplify the checking in epte_get_entry_emt() and remove the loop,
assuming that superpages will never be only partially special. So far
no special superpages are added to the guest p2m, and in any case the
forcing of the write-back cache attribute is a courtesy to the guest,
to avoid such ranges being accessed as uncached when not really
needed. It's not acceptable for such assistance to tax the system so
badly.
Fixes: 60d1adfa18 ('x86/ept: fix shattering of special pages')
Fixes: ca24b2ffdb ('x86/hvm: set 'ipat' in EPT for special pages')
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
xen/arch/x86/mm/p2m-ept.c | 24 ++++++++----------------
1 file changed, 8 insertions(+), 16 deletions(-)
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b4919bad51..d0e1c31612 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -491,7 +491,6 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
{
int gmtrr_mtype, hmtrr_mtype;
struct vcpu *v = current;
- unsigned long i, special_pgs;
*ipat = false;
@@ -518,26 +517,19 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
return MTRR_TYPE_UNCACHABLE;
}
- if ( type != p2m_mmio_direct && !is_iommu_enabled(d) &&
- !cache_flush_permitted(d) )
+ if ( (type != p2m_mmio_direct && !is_iommu_enabled(d) &&
+ !cache_flush_permitted(d)) ||
+ /*
+ * Assume the whole page to be special if the first 4K chunk is:
+ * iterating over all possible 4K sub-pages for higher order pages is
+ * too expensive.
+ */
+ is_special_page(mfn_to_page(mfn)) )
{
*ipat = true;
return MTRR_TYPE_WRBACK;
}
- for ( special_pgs = i = 0; i < (1ul << order); i++ )
- if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
- special_pgs++;
-
- if ( special_pgs )
- {
- if ( special_pgs != (1ul << order) )
- return -1;
-
- *ipat = true;
- return MTRR_TYPE_WRBACK;
- }
-
switch ( type )
{
case p2m_mmio_direct:
--
2.37.3