[xen staging] x86/ept: fix shattering of special pages
commit 60d1adfa18793f4ddb70c8e901562d8d3e9be3dc
Author:     Roger Pau Monne <roger.pau@xxxxxxxxxx>
AuthorDate: Thu Jun 30 18:34:49 2022 +0200
Commit:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Thu Jun 30 18:07:13 2022 +0100

    x86/ept: fix shattering of special pages

    The current logic in epte_get_entry_emt() will split any page marked
    as special with order greater than zero, without checking whether the
    super page is all special.

    Fix this by only splitting the page if it's not all marked as
    special, in order to prevent unneeded super page shattering.

    The unconditional special super page shattering has caused a
    performance regression on some XenServer GPU pass-through workloads.

    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
---
 xen/arch/x86/mm/p2m-ept.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b04ca6dbe8..b4919bad51 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -491,7 +491,7 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
 {
     int gmtrr_mtype, hmtrr_mtype;
     struct vcpu *v = current;
-    unsigned long i;
+    unsigned long i, special_pgs;
 
     *ipat = false;
 
@@ -525,15 +525,17 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
-    for ( i = 0; i < (1ul << order); i++ )
-    {
+    for ( special_pgs = i = 0; i < (1ul << order); i++ )
         if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
-        {
-            if ( order )
-                return -1;
-            *ipat = true;
-            return MTRR_TYPE_WRBACK;
-        }
+            special_pgs++;
+
+    if ( special_pgs )
+    {
+        if ( special_pgs != (1ul << order) )
+            return -1;
+
+        *ipat = true;
+        return MTRR_TYPE_WRBACK;
     }
 
     switch ( type )
--
generated by git-patchbot for /home/xen/git/xen.git#staging
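For illustration, below is a minimal, self-contained sketch of the decision
logic the patch introduces. It is not Xen code: is_special_page_stub() and
the hard-coded MTRR_TYPE_WRBACK value are hypothetical stand-ins for Xen's
is_special_page()/mfn_to_page() machinery, and the real check lives inside
epte_get_entry_emt() in xen/arch/x86/mm/p2m-ept.c. The sketch shows how
counting the special pages in a range lets an all-special super page stay
intact (WRBACK with ipat set), while a mixed range still returns -1 so the
caller shatters it.

/*
 * Standalone sketch of the post-patch logic, for illustration only.
 * is_special_page_stub() and the MFN layout below are made up.
 */
#include <stdbool.h>
#include <stdio.h>

#define MTRR_TYPE_WRBACK 6   /* x86 MTRR memory type encoding for WB */

/* Hypothetical stand-in for Xen's is_special_page(mfn_to_page(...)). */
static bool is_special_page_stub(unsigned long mfn)
{
    /* Pretend MFNs 0..511 are all special (an all-special 2M range). */
    return mfn < 512;
}

/*
 * Returns the memory type for the range [mfn, mfn + 2^order), or -1 if
 * the super page must be shattered because it mixes special and normal
 * pages.
 */
static int special_range_emt(unsigned long mfn, unsigned int order,
                             bool *ipat)
{
    unsigned long i, special_pgs = 0;

    *ipat = false;

    for ( i = 0; i < (1ul << order); i++ )
        if ( is_special_page_stub(mfn + i) )
            special_pgs++;

    if ( special_pgs )
    {
        /* A mix of special and non-special pages: force a split. */
        if ( special_pgs != (1ul << order) )
            return -1;

        /* All special: no need to shatter; use WB and ignore PAT. */
        *ipat = true;
        return MTRR_TYPE_WRBACK;
    }

    return MTRR_TYPE_WRBACK; /* placeholder for the normal-type logic */
}

int main(void)
{
    bool ipat;

    /* All-special 2M page (order 9): kept whole after the fix. */
    printf("all special: %d\n", special_range_emt(0, 9, &ipat));

    /* Partially special 2M page: still returns -1 to request a split. */
    printf("mixed:       %d\n", special_range_emt(256, 9, &ipat));
    return 0;
}

Running this prints 6 (MTRR_TYPE_WRBACK) for the all-special range and -1
for the mixed one; the pre-patch code would have returned -1 for both,
which is the unneeded shattering the commit message describes.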