
[Xen-devel] [PATCH RESEND v9 04/14] arch/arm: unmap partially-mapped I/O-memory regions



This commit changes the function apply_p2m_changes() to unwind the changes
already performed while mapping an I/O-memory region if an error occurs
during the operation. This prevents I/O-memory areas from being left
partially accessible to guests.

Signed-off-by: Arianna Avanzini <avanzini.arianna@xxxxxxxxx>
Cc: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Cc: Paolo Valente <paolo.valente@xxxxxxxxxx>
Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxxxxx>
Cc: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>
Cc: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Eric Trudeau <etrudeau@xxxxxxxxxxxx>
Cc: Viktor Kleinik <viktor.kleinik@xxxxxxxxxxxxxxx>

---

    v9:
        - Let apply_p2m_changes() unwind its own progress instead of relying on
          the caller to unmap partially-mapped I/O-memory regions.
        - Adapt to rework of p2m-related functions for ARM.

---
 xen/arch/arm/p2m.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1e6579f..e9f7c96 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -694,7 +694,7 @@ static int apply_p2m_changes(struct domain *d,
     unsigned long cur_first_page = ~0,
                   cur_first_offset = ~0,
                   cur_second_offset = ~0;
-    unsigned long count = 0;
+    unsigned long count = 0, inserted = 0;
     bool_t flush = false;
     bool_t flush_pt;
 
@@ -706,6 +706,7 @@ static int apply_p2m_changes(struct domain *d,
 
     spin_lock(&p2m->lock);
 
+unwind:
     addr = start_gpaddr;
     while ( addr < end_gpaddr )
     {
@@ -749,7 +750,15 @@ static int apply_p2m_changes(struct domain *d,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
                               mattr, t);
-        if ( ret < 0 ) { rc = ret ; goto out; }
+        if ( ret < 0 )
+        {
+            rc = ret;
+            if ( op != INSERT ) goto out;
+            end_gpaddr = start_gpaddr + inserted * PAGE_SIZE;
+            op = REMOVE;
+            goto unwind;
+        }
+        if ( op == INSERT ) inserted++;
         count += ret;
         if ( ret != P2M_ONE_DESCEND ) continue;
 
@@ -768,7 +777,15 @@ static int apply_p2m_changes(struct domain *d,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
                               mattr, t);
-        if ( ret < 0 ) { rc = ret ; goto out; }
+        if ( ret < 0 )
+        {
+            rc = ret;
+            if ( op != INSERT ) goto out;
+            end_gpaddr = start_gpaddr + inserted * PAGE_SIZE;
+            op = REMOVE;
+            goto unwind;
+        }
+        if ( op == INSERT ) inserted++;
         count += ret;
         if ( ret != P2M_ONE_DESCEND ) continue;
 
@@ -787,9 +804,17 @@ static int apply_p2m_changes(struct domain *d,
                               start_gpaddr, end_gpaddr,
                               &addr, &maddr, &flush,
                               mattr, t);
-        if ( ret < 0 ) { rc = ret ; goto out; }
+        if ( ret < 0 )
+        {
+            rc = ret;
+            if ( op != INSERT ) goto out;
+            end_gpaddr = start_gpaddr + inserted * PAGE_SIZE;
+            op = REMOVE;
+            goto unwind;
+        }
         /* L3 had better have done something! We cannot descend any further */
         BUG_ON(ret == P2M_ONE_DESCEND);
+        if ( op == INSERT ) inserted++;
         count += ret;
     }
 
-- 
1.9.3

