[Xen-devel] [PATCH 1 of 3 v3] xen, pod: Try to reclaim superpages when ballooning down
# HG changeset patch
# User George Dunlap <george.dunlap@xxxxxxxxxxxxx>
# Date 1340893080 -3600
# Node ID fb0187ae8a20d0850dea0cd3e4167503411e5950
# Parent 52f1b8a4f9a4cb454b6fea1220cc6a09cf401a42
xen,pod: Try to reclaim superpages when ballooning down

Windows balloon drivers can typically only get 4k pages from the kernel,
and so hand them back at that level.  Try to regain superpages by
checking the superpage frame that the 4k page is in to see if we can
reclaim the whole thing for the PoD cache.

This also modifies p2m_pod_zero_check_superpage() to return
SUPERPAGE_PAGES on success.

v2:
 - Rewritten to simply do the check as in the demand-fault case, without
   needing to know that the p2m entry is a superpage.
 - Also, took out the re-writing of the reclaim loop, leaving it
   optimized for 4k pages (by far the most common case), and simplifying
   the patch.
v3:
 - Add SoB

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Acked-by: Tim Deegan <tim@xxxxxxx>

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -488,6 +488,10 @@ p2m_pod_offline_or_broken_replace(struct
     return;
 }
 
+static int
+p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn);
+
+
 /* This function is needed for two reasons:
  * + To properly handle clearing of PoD entries
  * + To "steal back" memory being freed for the PoD cache, rather than
@@ -505,8 +509,8 @@ p2m_pod_decrease_reservation(struct doma
     int i;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-    int steal_for_cache = 0;
-    int pod = 0, nonpod = 0, ram = 0;
+    int steal_for_cache;
+    int pod, nonpod, ram;
 
     gfn_lock(p2m, gpfn, order);
     pod_lock(p2m);
@@ -516,13 +520,15 @@ p2m_pod_decrease_reservation(struct doma
     if ( p2m->pod.entry_count == 0 )
         goto out_unlock;
 
+    if ( unlikely(d->is_dying) )
+        goto out_unlock;
+
+recount:
+    pod = nonpod = ram = 0;
+
     /* Figure out if we need to steal some freed memory for our cache */
     steal_for_cache = ( p2m->pod.entry_count > p2m->pod.count );
 
-    if ( unlikely(d->is_dying) )
-        goto out_unlock;
-
-    /* See what's in here. */
     /* FIXME: Add contiguous; query for PSE entries? */
     for ( i=0; i<(1<<order); i++)
     {
@@ -556,7 +562,16 @@ p2m_pod_decrease_reservation(struct doma
         goto out_entry_check;
     }
 
-    /* FIXME: Steal contig 2-meg regions for cache */
+    /* Try to grab entire superpages if possible.  Since the common case is for drivers
+     * to pass back singleton pages, see if we can take the whole page back and mark the
+     * rest PoD. */
+    if ( steal_for_cache
+         && p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES-1)))
+    {
+        /* Since order may be arbitrary, we may have taken more or less
+         * than we were actually asked to; so just re-count from scratch */
+        goto recount;
+    }
 
     /* Process as long as:
      * + There are PoD entries to handle, or
@@ -758,6 +773,8 @@ p2m_pod_zero_check_superpage(struct p2m_
     p2m_pod_cache_add(p2m, mfn_to_page(mfn0), PAGE_ORDER_2M);
     p2m->pod.entry_count += SUPERPAGE_PAGES;
 
+    ret = SUPERPAGE_PAGES;
+
 out_reset:
     if ( reset )
         set_p2m_entry(p2m, gfn, mfn0, 9, type0, p2m->default_access);
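[Editor's note: the following standalone C sketch is not part of the patch or of Xen; it only illustrates the alignment trick the patch relies on. The helper names and the "even frame is reclaimable" test are made up for illustration; the real logic lives in p2m_pod_decrease_reservation() and p2m_pod_zero_check_superpage(). The idea: when the balloon driver hands back a single 4k gfn, round it down with gfn & ~(SUPERPAGE_PAGES-1) to the containing 2MB-aligned frame and ask whether the whole frame can be reclaimed for the PoD cache.]

/*
 * Standalone sketch (NOT Xen code) of the superpage-reclaim idea.
 * All names and the reclaimability test are hypothetical.
 */
#include <stdio.h>
#include <stdbool.h>

#define SUPERPAGE_ORDER  9
#define SUPERPAGE_PAGES  (1UL << SUPERPAGE_ORDER)  /* 512 x 4k = 2MB */

/* Hypothetical stand-in for p2m_pod_zero_check_superpage(): pretend a
 * superpage is entirely zero (and so reclaimable) when its 2MB frame
 * index is even.  Returns the number of pages reclaimed, or 0. */
static unsigned long zero_check_superpage(unsigned long sp_gfn)
{
    bool reclaimable = ((sp_gfn >> SUPERPAGE_ORDER) % 2) == 0;
    return reclaimable ? SUPERPAGE_PAGES : 0;
}

/* Model of the new decrease-reservation step: the guest hands back a
 * single 4k page at @gfn; round down to the containing 2MB frame and
 * try to steal the whole thing for the cache. */
static unsigned long try_reclaim_superpage(unsigned long gfn)
{
    unsigned long sp_gfn = gfn & ~(SUPERPAGE_PAGES - 1);
    unsigned long reclaimed = zero_check_superpage(sp_gfn);

    if ( reclaimed )
        printf("gfn %#lx: reclaimed whole superpage at %#lx (%lu pages)\n",
               gfn, sp_gfn, reclaimed);
    else
        printf("gfn %#lx: superpage at %#lx not reclaimable\n", gfn, sp_gfn);

    return reclaimed;
}

int main(void)
{
    try_reclaim_superpage(0x1234);  /* frame 0x1200 -> index 9, odd: kept */
    try_reclaim_superpage(0x1434);  /* frame 0x1400 -> index 10, even: reclaimed */
    return 0;
}

[In the patch itself, a successful reclaim then jumps back to the new "recount" label, because taking a whole 2MB region may have released more or fewer pages than the caller actually asked for.]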