
Re: [Xen-devel] [PATCH v5 05/11] mm: move modify_identity_mmio to global file and drop __init



Hi,

On 12/09/17 12:38, Roger Pau Monné wrote:
On Tue, Sep 12, 2017 at 11:53:49AM +0100, Julien Grall wrote:
Hi Roger,

On 14/08/17 15:28, Roger Pau Monne wrote:
And also allow it to do non-identity mappings by adding a new
parameter.

This function will be needed in order to map the BARs from PCI devices
into the Dom0 p2m (and is also used by the x86 Dom0 builder). While
there fix the function to use gfn_t and mfn_t instead of unsigned long
for memory addresses.
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
Changes since v4:
   - Guard the function with CONFIG_HAS_PCI only.
   - s/non-trival/non-negligible in the comment.
   - Change XENLOG_G_WARNING to XENLOG_WARNING like the original
     function.

Changes since v3:
   - Remove the dummy modify_identity_mmio helper in dom0_build.c
   - Try to make the comment in modify MMIO less scary.
   - Clarify commit message.
   - Only build the function for x86 or if there's PCI support.

Changes since v2:
   - Use mfn_t and gfn_t.
   - Remove stray newline.
---
   xen/arch/x86/hvm/dom0_build.c | 30 ++----------------------------
   xen/common/memory.c           | 40 ++++++++++++++++++++++++++++++++++++++++
   xen/include/xen/p2m-common.h  |  3 +++
   3 files changed, 45 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 04a8682d33..c65eb8503f 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -61,32 +61,6 @@ static struct acpi_madt_interrupt_override __initdata *intsrcovr;
   static unsigned int __initdata acpi_nmi_sources;
   static struct acpi_madt_nmi_source __initdata *nmisrc;
-static int __init modify_identity_mmio(struct domain *d, unsigned long pfn,
-                                       unsigned long nr_pages, const bool map)
-{
-    int rc;
-
-    for ( ; ; )
-    {
-        rc = (map ? map_mmio_regions : unmap_mmio_regions)
-             (d, _gfn(pfn), nr_pages, _mfn(pfn));
-        if ( rc == 0 )
-            break;
-        if ( rc < 0 )
-        {
-            printk(XENLOG_WARNING
-                   "Failed to identity %smap [%#lx,%#lx) for d%d: %d\n",
-                   map ? "" : "un", pfn, pfn + nr_pages, d->domain_id, rc);
-            break;
-        }
-        nr_pages -= rc;
-        pfn += rc;
-        process_pending_softirqs();
-    }
-
-    return rc;
-}
-
   /* Populate a HVM memory range using the biggest possible order. */
   static int __init pvh_populate_memory_range(struct domain *d,
                                               unsigned long start,
@@ -397,7 +371,7 @@ static int __init pvh_setup_p2m(struct domain *d)
        * Memory below 1MB is identity mapped.
        * NB: this only makes sense when booted from legacy BIOS.
        */
-    rc = modify_identity_mmio(d, 0, MB1_PAGES, true);
+    rc = modify_mmio(d, _gfn(0), _mfn(0), MB1_PAGES, true);
       if ( rc )
       {
           printk("Failed to identity map low 1MB: %d\n", rc);
@@ -964,7 +938,7 @@ static int __init pvh_setup_acpi(struct domain *d, paddr_t start_info)
           nr_pages = PFN_UP((d->arch.e820[i].addr & ~PAGE_MASK) +
                             d->arch.e820[i].size);
-        rc = modify_identity_mmio(d, pfn, nr_pages, true);
+        rc = modify_mmio(d, _gfn(pfn), _mfn(pfn), nr_pages, true);
           if ( rc )
           {
            printk("Failed to map ACPI region [%#lx, %#lx) into Dom0 memory map\n",
diff --git a/xen/common/memory.c b/xen/common/memory.c
index b2066db07e..86824edb09 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1465,6 +1465,46 @@ int prepare_ring_for_helper(
       return 0;
   }
+#if defined(CONFIG_HAS_PCI)
+int modify_mmio(struct domain *d, gfn_t gfn, mfn_t mfn, unsigned long nr_pages,
+                bool map)
+{

I don't think it is correct to move this function into common code without
making sure that all architectures have *map_mmio_regions supporting preemption.

This is actually not the case on ARM, and IMHO it should be fixed before making
this function common. Otherwise you will expose a security issue the day
vPCI gets supported on ARM.

I could add something like:

#ifndef CONFIG_X86 /* XXX ARM!? */
     ret = -E2BIG;
     /* Must break hypercall up as this could take a while. */
     if ( nr_mfns > 64 )
         break;
#endif

That's what's done in XEN_DOMCTL_memory_mapping.

Well, in the case of XEN_DOMCTL_memory_mapping the caller takes care of splitting the batch. Here you would not split the batch, just fail.


+    int rc;
+
+    /*
+     * ATM this function should only be used by the hardware domain
+     * because it doesn't support preemption/continuation, and as such
+     * can take a non-negligible amount of time. Note that it periodically
+     * calls process_pending_softirqs in order to avoid stalling the system.
+     */
+    ASSERT(is_hardware_domain(d));
+
+    for ( ; ; )
+    {
+        rc = (map ? map_mmio_regions : unmap_mmio_regions)

As mentioned on an earlier version, on ARM map_mmio_regions will map the
MMIO with very strict attributes (no unaligned accesses, non-gatherable, ...).
This will not be correct for some BARs. So I think we should pass the
attribute type as a parameter of modify_mmio so it knows which memory
attributes to use.

As I understand it, the current mapping attributes, albeit slow, will work
fine?

No. The guest will receive an abort if it performs an unaligned access on that region (such as when memcpy is used).


At least they are the same ones used by
XEN_DOMCTL_memory_mapping, which is how MMIO is mapped to a DomU ATM,
hence I don't think this is a priority at such an early stage.

Why do you speak about DomU? This is used for mapping into Dom0 at the moment... So this is kind of a priority in order to get PCI working in Dom0.

For Dom0, we relaxed the mapping and rely on the OS to tighten the attributes (p2m_mmio_direct_c is used). This is something we could not possibly do by default for a DomU at the moment.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

