
[Xen-changelog] [xen-unstable] x86/mm: Ensure liveness of pages involved in a guest page table walk



# HG changeset patch
# User Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
# Date 1322154325 0
# Node ID f011ebdce88bf95c5021777d4377bb016a2f0275
# Parent  573b1acedcc0953cfb103a54990396dcf7ac2339
x86/mm: Ensure liveness of pages involved in a guest page table walk

Instead of risking deadlock by holding onto the gfns acquired during
a guest page table walk, acquire an extra reference on each page within
the get_gfn/put_gfn critical section, and drop that extra reference when
done with the map. This ensures liveness of the map, i.e. the underlying
page won't be paged out.

Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
Acked-by: Tim Deegan <tim@xxxxxxx>
Committed-by: Tim Deegan <tim@xxxxxxx>
---


diff -r 573b1acedcc0 -r f011ebdce88b xen/arch/x86/mm/guest_walk.c
--- a/xen/arch/x86/mm/guest_walk.c      Thu Nov 24 17:00:33 2011 +0000
+++ b/xen/arch/x86/mm/guest_walk.c      Thu Nov 24 17:05:25 2011 +0000
@@ -87,7 +87,7 @@
 }
 
 /* If the map is non-NULL, we leave this function having 
- * called get_gfn, you need to call put_gfn. */
+ * acquired an extra ref on mfn_to_page(*mfn) */
 static inline void *map_domain_gfn(struct p2m_domain *p2m,
                                    gfn_t gfn, 
                                    mfn_t *mfn,
@@ -95,6 +95,7 @@
                                    uint32_t *rc) 
 {
     p2m_access_t p2ma;
+    void *map;
 
     /* Translate the gfn, unsharing if shared */
    *mfn = get_gfn_type_access(p2m, gfn_x(gfn), p2mt, &p2ma, p2m_unshare, NULL);
@@ -120,7 +121,12 @@
     }
     ASSERT(mfn_valid(mfn_x(*mfn)));
     
-    return map_domain_page(mfn_x(*mfn));
+    /* Get an extra ref to the page to ensure liveness of the map.
+     * Then we can safely put gfn */
+    page_get_owner_and_reference(mfn_to_page(mfn_x(*mfn)));
+    map = map_domain_page(mfn_x(*mfn));
+    __put_gfn(p2m, gfn_x(gfn));
+    return map;
 }
 
 
@@ -357,20 +363,20 @@
     if ( l3p ) 
     {
         unmap_domain_page(l3p);
-        __put_gfn(p2m, gfn_x(guest_l4e_get_gfn(gw->l4e)));
+        put_page(mfn_to_page(mfn_x(gw->l3mfn)));
     }
 #endif
 #if GUEST_PAGING_LEVELS >= 3
     if ( l2p ) 
     {
         unmap_domain_page(l2p);
-        __put_gfn(p2m, gfn_x(guest_l3e_get_gfn(gw->l3e))); 
+        put_page(mfn_to_page(mfn_x(gw->l2mfn)));
     }
 #endif
     if ( l1p ) 
     {
         unmap_domain_page(l1p);
-        __put_gfn(p2m, gfn_x(guest_l2e_get_gfn(gw->l2e)));
+        put_page(mfn_to_page(mfn_x(gw->l1mfn)));
     }
 
     return rc;

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

