
[Xen-devel] [PATCH v4 13/39] arm/p2m: Add altp2m table flushing routine



The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the function
altp2m_flush_complete, which releases all of the alternate p2m views.
To make sure that alternate p2m's are flushed only when they are not
used by any vCPU, we introduce a counter that tracks the number of
vCPUs currently using a particular p2m.
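
Note that the counter itself is only introduced here; the code that
adjusts it when a vCPU switches between views comes with a later patch
in this series. As a rough sketch (the helper name and call site are
assumptions, not part of this patch), the bookkeeping could look like:

    /*
     * Hypothetical sketch only: a vCPU leaving one p2m and entering
     * another drops its reference on the old view and takes one on the
     * new view, so altp2m_flush_complete() can assert that no vCPU is
     * still using a view before tearing it down.
     */
    static void p2m_switch_view(struct p2m_domain *oldp2m,
                                struct p2m_domain *newp2m)
    {
        if ( oldp2m )
            atomic_dec(&oldp2m->active_vcpus);

        atomic_inc(&newp2m->active_vcpus);
    }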

Signed-off-by: Sergej Proskurin <proskurin@xxxxxxxxxxxxx>
---
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
---
v2: Pages in p2m->pages are not cleared in p2m_flush_table anymore.
    VMID is freed in p2m_free_one.
    Cosmetic fixes.

v3: Changed the locking mechanism to "p2m_write_lock" inside the
    function "altp2m_flush".

    Do not flush but rather tear down the altp2m in the function
    "altp2m_flush".

    Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
    "altp2m_p2m[idx] == NULL" in "altp2m_flush".

v4: Removed the p2m locking in "altp2m_flush", as it is not needed at
    this point: altp2m should be inactive, so the particular p2m should
    not be used by any vCPU. An ASSERT statement was added to ensure
    this (see the caller sketch below).

    We introduce the counter active_vcpus as part of this patch.

    Rename the function altp2m_flush to altp2m_flush_complete.
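
    For illustration only, the calling pattern assumed by that ASSERT
    (the helper and field names below are assumptions taken from
    context, not part of this patch) would be along these lines:

        /*
         * Hypothetical caller sketch: altp2m is disabled for the domain
         * and all vCPUs are switched back to the hostp2m before the
         * flush, so altp2m_active(d) is false and every active_vcpus
         * counter reads zero.
         */
        altp2m_vcpu_reset_all(d);       /* assumed helper: back to hostp2m */
        d->arch.altp2m_active = false;  /* assumed field behind altp2m_active() */
        altp2m_flush_complete(d);       /* now safe: no view is in use */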
---
 xen/arch/arm/altp2m.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/altp2m.h |  3 +++
 xen/include/asm-arm/p2m.h    |  5 +++++
 3 files changed, 40 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index e73b69d99d..9c06055a94 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -28,6 +28,38 @@ int altp2m_init(struct domain *d)
     return 0;
 }
 
+void altp2m_flush_complete(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    /*
+     * If altp2m is active, we are not allowed to flush altp2m[0]. This special
+     * view is considered the hostp2m as long as altp2m is active.
+     */
+    ASSERT(!altp2m_active(d));
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        p2m = d->arch.altp2m_p2m[i];
+
+        if ( p2m == NULL )
+            continue;
+
+        ASSERT(!atomic_read(&p2m->active_vcpus));
+
+        /* We do not need to lock the p2m, as altp2m is inactive. */
+        p2m_teardown_one(p2m);
+
+        xfree(p2m);
+        d->arch.altp2m_p2m[i] = NULL;
+    }
+
+    altp2m_unlock(d);
+}
+
 void altp2m_teardown(struct domain *d)
 {
     unsigned int i;
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 1706f61f0c..e116cce25f 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -43,4 +43,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 int altp2m_init(struct domain *d);
 void altp2m_teardown(struct domain *d);
 
+/* Flush all the alternate p2m's for a domain. */
+void altp2m_flush_complete(struct domain *d);
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 9bb38e689a..e8a2116081 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -10,6 +10,8 @@
 #include <xen/p2m-common.h>
 #include <public/memory.h>
 
+#include <asm/atomic.h>
+
 #define paddr_bits PADDR_BITS
 
 #define p2m_switch_vttbr_and_get_flags(ovttbr, nvttbr, flags)       \
@@ -132,6 +134,9 @@ struct p2m_domain {
     /* Keeping track on which CPU this p2m was used and for which vCPU */
     uint8_t last_vcpu_ran[NR_CPUS];
 
+    /* Alternate p2m: count of vCPUs currently using this p2m. */
+    atomic_t active_vcpus;
+
     /* Choose between: host/alternate. */
     p2m_class_t p2m_class;
 };
-- 
2.13.3