[Xen-changelog] [xen stable-4.5] x86/PoD: skip eager reclaim when possible
commit 644aa81d1e7ceabc30af950cc268dc00ef74e2af
Author: Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri May 27 14:48:58 2016 +0200
Commit: Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri May 27 14:48:58 2016 +0200
x86/PoD: skip eager reclaim when possible
Reclaiming pages is pointless when the cache can already satisfy all
outstanding PoD entries, and doing reclaims in that case can be very
harmful to performance when that memory gets used by the guest, but
only to store zeroes there.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
master commit: 556c69f4efb09dd06e6bce4cbb0455287f19d02e
master date: 2016-05-12 18:02:21 +0200
---
xen/arch/x86/mm/p2m-pod.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index b3aa7b1..1810eea 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1004,7 +1004,6 @@ static void pod_eager_record(struct p2m_domain *p2m,
{
struct pod_mrp_list *mrp = &p2m->pod.mrp;
- ASSERT(mrp->list[mrp->idx] == INVALID_GFN);
ASSERT(gfn != INVALID_GFN);
mrp->list[mrp->idx++] =
@@ -1052,7 +1051,9 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
return 0;
}
- pod_eager_reclaim(p2m);
+ /* Only reclaim if we're in actual need of more cache. */
+ if ( p2m->pod.entry_count > p2m->pod.count )
+ pod_eager_reclaim(p2m);
/* Only sweep if we're actually out of memory. Doing anything else
* causes unnecessary time and fragmentation of superpages in the p2m. */
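
For readers following the logic outside the diff context, here is a minimal,
self-contained C sketch of the guard this patch adds: eager reclaim is only
worth doing while the number of outstanding PoD entries still exceeds the
pages already held in the PoD cache. This is not Xen code; only the
entry_count/count counters and the skip condition come from the patch, while
the struct, the stub reclaim function, and main() are simplified stand-ins
for illustration.

/* Illustration only: stripped-down stand-in for the PoD counters in
 * Xen's struct p2m_domain; entry_count and count are the two fields
 * the patch actually compares. */
#include <stdio.h>

struct pod_counts {
    long entry_count;   /* outstanding PoD entries still to be populated */
    long count;         /* pages currently sitting in the PoD cache */
};

/* Hypothetical stand-in for pod_eager_reclaim(); just reports the call. */
static void pod_eager_reclaim_stub(void)
{
    printf("  -> reclaiming: cache cannot yet cover all PoD entries\n");
}

static void demand_populate_guard(const struct pod_counts *pod)
{
    printf("entry_count=%ld count=%ld\n", pod->entry_count, pod->count);

    /* Only reclaim if we're in actual need of more cache. */
    if ( pod->entry_count > pod->count )
        pod_eager_reclaim_stub();
    else
        printf("  -> skipping reclaim: cache already covers all entries\n");
}

int main(void)
{
    struct pod_counts needs_more = { .entry_count = 512, .count = 128 };
    struct pod_counts covered    = { .entry_count = 128, .count = 512 };

    demand_populate_guard(&needs_more);  /* reclaim runs */
    demand_populate_guard(&covered);     /* reclaim skipped */
    return 0;
}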
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.5
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog