
Re: [Xen-devel] [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism.



(CC Tamas)

Hello Sergej,

On 04/07/2016 12:45, Sergej Proskurin wrote:
This commit adds the function p2m_altp2m_lazy_copy, which implements the
altp2m paging mechanism: it lazily copies the hostp2m's mapping into the
currently active altp2m view on 2nd-stage instruction or data access
violations. Every altp2m violation generates a vm_event.

I have been working on cleaning up the abort path (see [1]). Please rebase your code on top of it.

Signed-off-by: Sergej Proskurin <proskurin@xxxxxxxxxxxxx>
---
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
---

[...]

+/*
+ * If the fault is for a not present entry:
+ *     if the entry in the host p2m has a valid mfn, copy it and retry
+ *     else indicate that outer handler should handle fault
+ *
+ * If the fault is for a present entry:
+ *     indicate that outer handler should handle fault
+ */
+bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
+                            unsigned long gva, struct npfec npfec,
+                            struct p2m_domain **ap2m)
+{
+    struct domain *d = v->domain;
+    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
+    p2m_type_t p2mt;
+    xenmem_access_t xma;
+    paddr_t maddr, mask = 0;
+    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
+    unsigned int level;
+    unsigned long mattr;
+    int rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    *ap2m = p2m_get_altp2m(v);
+    if ( *ap2m == NULL )
+        return 0;
+
+    /* Check if entry is part of the altp2m view */
+    spin_lock(&(*ap2m)->lock);
+    maddr = __p2m_lookup(*ap2m, gpa, NULL);
+    spin_unlock(&(*ap2m)->lock);
+    if ( maddr != INVALID_PADDR )
+        return 0;
+
+    /* Check if entry is part of the host p2m view */
+    spin_lock(&hp2m->lock);
+    maddr = __p2m_lookup(hp2m, gpa, &p2mt);
+    if ( maddr == INVALID_PADDR )
+        goto out;
+
+    rc = __p2m_get_mem_access(hp2m, gfn, &xma);
+    if ( rc )
+        goto out;
+
+    rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
+    if ( rc )
+        goto out;

Can we introduce a function that returns the xma, mfn, order, and attributes at once? That would avoid walking the p2m three times, which is really expensive on ARMv7 because the p2m is not mapped in Xen's virtual address space.
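Something along these lines could work (an untested sketch; the name
__p2m_lookup_full and its exact signature are made up for illustration,
the point being a single traversal of the host p2m):

    /*
     * Hypothetical single-walk lookup: gather the maddr, type, access
     * rights, level and memory attributes of a host p2m entry in one
     * traversal instead of three.
     */
    static int __p2m_lookup_full(struct p2m_domain *p2m, paddr_t paddr,
                                 paddr_t *maddr, p2m_type_t *t,
                                 xenmem_access_t *xma, unsigned int *level,
                                 unsigned long *mattr);

    /* The three lookups above would then collapse into: */
    spin_lock(&hp2m->lock);
    rc = __p2m_lookup_full(hp2m, gpa, &maddr, &p2mt, &xma, &level, &mattr);
    if ( rc || maddr == INVALID_PADDR )
        goto out;
    spin_unlock(&hp2m->lock);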

+    spin_unlock(&hp2m->lock);
+
+    mask = level_masks[level];
+
+    rc = apply_p2m_changes(d, *ap2m, INSERT,
+                           pfn_to_paddr(gfn_x(gfn)) & mask,
+                           (pfn_to_paddr(gfn_x(gfn)) + level_sizes[level]) & mask,
+                           maddr & mask, mattr, 0, p2mt,
+                           memaccess[xma]);

The page associated with the MFN is not locked, so another thread could decide to remove the page from the domain, and the altp2m would then contain an entry to something that no longer belongs to the domain. Note that x86 does the same, so I am not sure why it is considered safe there...
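For what it is worth, one possible way to close this race (an untested
sketch; whether it is sufficient, and where the matching put_page() on the
altp2m teardown path would live, are open questions) would be to take a
reference on the backing page while hp2m->lock is still held, i.e. before
the spin_unlock() above:

    /*
     * Untested sketch: pin the backing page so it cannot be freed while
     * the altp2m mapping refers to it.  mfn_to_page()/get_page() are
     * existing helpers; the matching put_page() is not shown.
     */
    struct page_info *page = mfn_to_page(paddr_to_pfn(maddr));

    if ( !get_page(page, d) )
        goto out;    /* the page is being removed from the domain */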

+    if ( rc )
+    {
+        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m %lx\n",
+                (unsigned long)pfn_to_paddr(gfn_x(gfn)), (unsigned long)(maddr),
+                (unsigned long)*ap2m);
+        domain_crash(hp2m->domain);
+    }
+
+    return 1;
+
+out:
+    spin_unlock(&hp2m->lock);
+    return 0;
+}
+
 static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
 {
     struct p2m_domain *p2m = d->arch.altp2m_p2m[i];

[...]

@@ -2429,6 +2460,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
                                      const union hsr hsr)
 {
     const struct hsr_dabt dabt = hsr.dabt;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = NULL;
     int rc;
     mmio_info_t info;

@@ -2449,6 +2482,12 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         info.gpa = get_faulting_ipa();
     else
     {
+        /*
+         * When using altp2m, this flush is required to get rid of old TLB
+         * entries and use the new, lazily copied, ap2m entries.
+         */
+        flush_tlb_local();

Can you give more details on why this flush is required?

+

Regards,

[1] https://lists.xen.org/archives/html/xen-devel/2016-06/msg02853.html

--
Julien Grall
