[Xen-devel] [v6][PATCH 06/16] hvmloader/pci: skip reserved ranges



When allocating MMIO addresses for PCI BARs, we need to make
sure they don't overlap with reserved regions.

CC: Keir Fraser <keir@xxxxxxx>
CC: Jan Beulich <jbeulich@xxxxxxxx>
CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
CC: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
CC: Ian Campbell <ian.campbell@xxxxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>
---
v6:

* Nothing is changed.

v5:

* Rename the is_64bar field in struct bars to flag, and extend it to
  also indicate whether this BAR has already been allocated.

v4:

* We have to re-design this as follows:

  #1. Goal

  MMIO region should exclude all reserved device memory

  #2. Requirements

  #2.1 Still need to make sure the MMIO region fits all PCI devices, as before

  #2.2 Accommodate reserved memory regions that are not aligned

  If I'm missing something let me know.

  #3. How to

  #3.1 Address #2.1

  We need either to populate more RAM or to expand highmem further. But only
  64bit-bars can work with highmem, and as you mentioned we should also avoid
  expanding highmem where possible. So my implementation allocates 32bit-bars
  and 64bit-bars in that order.

  1>. The first allocation round: 32bit-bars only

  If we can finish allocating all 32bit-bars, we simply go on to allocate
  64bit-bars with all remaining resources, including low PCI memory.

  If not, we calculate how much RAM needs to be populated to allocate the
  remaining 32bit-bars, then populate sufficient RAM as exp_mem_resource and
  go to the second allocation round 2>.

  2>. The second allocation round: the remaining 32bit-bars

  In theory we should now be able to finish allocating all 32bit-bars, then
  go to the third allocation round 3>.

  3>. The third allocation round: 64bit-bars

  We first try to allocate from the remaining low memory resource. If that
  isn't enough, we expand highmem to allocate the 64bit-bars. This process is
  the same as the original.
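
  As a concrete illustration (all numbers here are made up): suppose
  pci_mem_start = 0xF0000000 and round 1> runs out of space with
  0x4000000 bytes of 32bit-bars still unallocated. We then set
  cur_pci_mem_start = 0xEC000000, relocate the RAM pages above it to
  highmem via relocate_ram_for_pci_memory(), set exp_mem_resource to
  [0xEC000000, 0xF0000000), and round 2> allocates the remaining
  32bit-bars from that range before round 3> places the 64bit-bars.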

  #3.2 Address #2.2

  To accommodate reserved memory regions that are not aligned:

  We should skip all reserved device memory, but we also need to check
  whether smaller bars can still be allocated when a mmio hole exists
  between resource->base and the reserved device memory. If such a hole
  exists, we simply move on and try to allocate the next bar, since all
  bars are in descending order of size. If not, we move resource->base to
  reserved_end and reallocate this bar.
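
  To make this concrete, below is a small standalone C sketch with a
  made-up memory map (check_overlap() is re-implemented to mirror
  hvmloader's helper of the same name; the addresses and bar sizes are
  invented, and the sketch implements the behaviour described above,
  deferring a bar when a hole remains below the reserved range):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    /* Mirrors hvmloader's check_overlap(): do [start, start + size) and
     * [rstart, rstart + rsize) intersect? */
    static int check_overlap(uint64_t start, uint64_t size,
                             uint64_t rstart, uint64_t rsize)
    {
        return start + size > rstart && rstart + rsize > start;
    }

    int main(void)
    {
        /* One reserved (RMRR-like) range inside the low MMIO hole. */
        const uint64_t reserved_start = 0xf0800000, reserved_size = 0x100000;
        const uint64_t reserved_end = reserved_start + reserved_size;
        /* Bar sizes in descending order, as pci_setup() sorts them. */
        const uint64_t bar_sz[3] = { 0x1000000, 0x800000, 0x100000 };
        uint64_t resource_base = 0xf0000000, base;
        unsigned int i;

        for ( i = 0; i < 3; i++ )
        {
        reallocate_bar:
            base = (resource_base + bar_sz[i] - 1) & ~(bar_sz[i] - 1);
            if ( check_overlap(base, bar_sz[i],
                               reserved_start, reserved_size) )
            {
                if ( resource_base < reserved_start )
                {
                    /* A hole remains below the reserved range: defer
                     * this bar; a smaller bar may still fit there. */
                    printf("bar %u (size %#"PRIx64") deferred\n",
                           i, bar_sz[i]);
                    continue;
                }
                /* No usable hole: hop over the reserved range, retry. */
                resource_base = reserved_end;
                goto reallocate_bar;
            }
            printf("bar %u at [%#"PRIx64", %#"PRIx64")\n",
                   i, base, base + bar_sz[i]);
            resource_base = base + bar_sz[i];
        }
        return 0;
    }

  Running this, bar 0 is deferred (in the patch a deferred bar is what
  makes mmio32_unallocated_total non-zero and triggers round 2> over
  exp_mem_resource), bar 1 lands below the reserved range, and bar 2
  hops over it.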

 tools/firmware/hvmloader/pci.c | 194 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 164 insertions(+), 30 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 5ff87a7..397f3b7 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -38,6 +38,31 @@ uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
 enum virtual_vga virtual_vga = VGA_none;
 unsigned long igd_opregion_pgbase = 0;
 
+static void relocate_ram_for_pci_memory(unsigned long cur_pci_mem_start)
+{
+    struct xen_add_to_physmap xatp;
+    unsigned int nr_pages = min_t(
+        unsigned int,
+        hvm_info->low_mem_pgend - (cur_pci_mem_start >> PAGE_SHIFT),
+        (1u << 16) - 1);
+    if ( hvm_info->high_mem_pgend == 0 )
+        hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+    hvm_info->low_mem_pgend -= nr_pages;
+    printf("Relocating 0x%x pages from "PRIllx" to "PRIllx\
+           " for lowmem MMIO hole\n",
+           nr_pages,
+           PRIllx_arg(((uint64_t)hvm_info->low_mem_pgend)<<PAGE_SHIFT),
+           PRIllx_arg(((uint64_t)hvm_info->high_mem_pgend)<<PAGE_SHIFT));
+    xatp.domid = DOMID_SELF;
+    xatp.space = XENMAPSPACE_gmfn_range;
+    xatp.idx   = hvm_info->low_mem_pgend;
+    xatp.gpfn  = hvm_info->high_mem_pgend;
+    xatp.size  = nr_pages;
+    if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
+        BUG();
+    hvm_info->high_mem_pgend += nr_pages;
+}
+
 void pci_setup(void)
 {
     uint8_t is_64bar, using_64bar, bar64_relocate = 0;
@@ -50,17 +75,22 @@ void pci_setup(void)
     /* Resources assignable to PCI devices via BARs. */
     struct resource {
         uint64_t base, max;
-    } *resource, mem_resource, high_mem_resource, io_resource;
+    } *resource, mem_resource, high_mem_resource, io_resource, exp_mem_resource;
 
     /* Create a list of device BARs in descending order of size. */
     struct bars {
-        uint32_t is_64bar;
+#define PCI_BAR_IS_64BIT        0x1
+#define PCI_BAR_IS_ALLOCATED    0x2
+        uint32_t flag;
         uint32_t devfn;
         uint32_t bar_reg;
         uint64_t bar_sz;
     } *bars = (struct bars *)scratch_start;
-    unsigned int i, nr_bars = 0;
-    uint64_t mmio_hole_size = 0;
+    unsigned int i, j, n, nr_bars = 0;
+    uint64_t mmio_hole_size = 0, reserved_start, reserved_end, reserved_size;
+    bool bar32_allocating = 0;
+    uint64_t mmio32_unallocated_total = 0;
+    unsigned long cur_pci_mem_start = 0;
 
     const char *s;
     /*
@@ -222,7 +252,7 @@ void pci_setup(void)
             if ( i != nr_bars )
                 memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
 
-            bars[i].is_64bar = is_64bar;
+            bars[i].flag = is_64bar ? PCI_BAR_IS_64BIT : 0;
             bars[i].devfn   = devfn;
             bars[i].bar_reg = bar_reg;
             bars[i].bar_sz  = bar_sz;
@@ -309,29 +339,31 @@ void pci_setup(void)
     }
 
     /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
+    cur_pci_mem_start = pci_mem_start;
     while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+        relocate_ram_for_pci_memory(cur_pci_mem_start);
+
+    /*
+     * Check if reserved device memory conflicts with the current PCI
+     * memory.  If so, we need to allocate 32-bit BARs first, since
+     * reserved device memory always occupies low memory, and also
+     * enable relocating some BARs to 64-bit where possible.
+     */
+    for ( i = 0; i < memory_map.nr_map ; i++ )
     {
-        struct xen_add_to_physmap xatp;
-        unsigned int nr_pages = min_t(
-            unsigned int,
-            hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT),
-            (1u << 16) - 1);
-        if ( hvm_info->high_mem_pgend == 0 )
-            hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
-        hvm_info->low_mem_pgend -= nr_pages;
-        printf("Relocating 0x%x pages from "PRIllx" to "PRIllx\
-               " for lowmem MMIO hole\n",
-               nr_pages,
-               PRIllx_arg(((uint64_t)hvm_info->low_mem_pgend)<<PAGE_SHIFT),
-               PRIllx_arg(((uint64_t)hvm_info->high_mem_pgend)<<PAGE_SHIFT));
-        xatp.domid = DOMID_SELF;
-        xatp.space = XENMAPSPACE_gmfn_range;
-        xatp.idx   = hvm_info->low_mem_pgend;
-        xatp.gpfn  = hvm_info->high_mem_pgend;
-        xatp.size  = nr_pages;
-        if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
-            BUG();
-        hvm_info->high_mem_pgend += nr_pages;
+        reserved_start = memory_map.map[i].addr;
+        reserved_size = memory_map.map[i].size;
+        reserved_end = reserved_start + reserved_size;
+        if ( check_overlap(pci_mem_start, pci_mem_end - pci_mem_start,
+                           reserved_start, reserved_size) )
+        {
+            printf("Reserved device memory conflicts current PCI memory,"
+                   " so first to allocate 32-bit BAR and trying to"
+                   " relocating some BARs to 64-bit\n");
+            bar32_allocating = 1;
+            if ( !bar64_relocate )
+                bar64_relocate = 1;
+        }
     }
 
     high_mem_resource.base = ((uint64_t)hvm_info->high_mem_pgend) << PAGE_SHIFT;
@@ -352,6 +384,7 @@ void pci_setup(void)
     io_resource.base = 0xc000;
     io_resource.max = 0x10000;
 
+ further_allocate:
     /* Assign iomem and ioport resources in descending order of size. */
     for ( i = 0; i < nr_bars; i++ )
     {
@@ -359,6 +392,17 @@ void pci_setup(void)
         bar_reg = bars[i].bar_reg;
         bar_sz  = bars[i].bar_sz;
 
+        /* Skip this bar if it has already been allocated. */
+        if ( bars[i].flag & PCI_BAR_IS_ALLOCATED )
+            continue;
+
+        /*
+         * We want to allocate 32-bit BARs first, to make sure as many
+         * 32-bit BARs as possible can be allocated.
+         */
+        if ( (bars[i].flag & PCI_BAR_IS_64BIT) && bar32_allocating )
+            continue;
+
         /*
          * Relocate to high memory if the total amount of MMIO needed
          * is more than the low MMIO available.  Because devices are
@@ -377,7 +421,7 @@ void pci_setup(void)
          *   the code here assumes it to be.)
          * Should either of those two conditions change, this code will break.
          */
-        using_64bar = bars[i].is_64bar && bar64_relocate
+        using_64bar = (bars[i].flag & PCI_BAR_IS_64BIT) && bar64_relocate
             && (mmio_total > (mem_resource.max - mem_resource.base));
         bar_data = pci_readl(devfn, bar_reg);
 
@@ -395,7 +439,14 @@ void pci_setup(void)
                 bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
             } 
             else {
-                resource = &mem_resource;
+                /*
+                 * This means we're trying to use the expanded
+                 * memory to reallocate 32-bit BARs.
+                 */
+                if ( mmio32_unallocated_total )
+                    resource = &exp_mem_resource;
+                else
+                    resource = &mem_resource;
                 bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
             }
             mmio_total -= bar_sz;
@@ -406,9 +457,44 @@ void pci_setup(void)
             bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
         }
 
-        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
+ reallocate_bar:
+        base = (resource->base + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
         bar_data |= (uint32_t)base;
         bar_data_upper = (uint32_t)(base >> 32);
+        /*
+         * We should skip all reserved device memory, but we also need
+         * to check whether smaller bars can still be allocated when a mmio
+         * hole exists between resource->base and reserved device memory.
+         */
+        for ( j = 0; j < memory_map.nr_map ; j++ )
+        {
+            if ( memory_map.map[j].type != E820_RAM )
+            {
+                reserved_start = memory_map.map[j].addr;
+                reserved_size = memory_map.map[j].size;
+                reserved_end = reserved_start + reserved_size;
+                if ( check_overlap(base, bar_sz,
+                                   reserved_start, reserved_size) )
+                {
+                    /*
+                     * If a hole exists between base and the reserved device
+                     * memory, simply move on to try allocating the next bar,
+                     * since all bars are in descending order of size.
+                     */
+                    if ( resource->base < reserved_start )
+                        continue;
+                    /*
+                     * If not, we need to move resource->base to
+                     * reserved_end and reallocate this bar.
+                     */
+                    else
+                    {
+                        resource->base = reserved_end;
+                        goto reallocate_bar;
+                    }
+                }
+            }
+        }
         base += bar_sz;
 
         if ( (base < resource->base) || (base > resource->max) )
@@ -428,7 +514,7 @@ void pci_setup(void)
                devfn>>3, devfn&7, bar_reg,
                PRIllx_arg(bar_sz),
                bar_data_upper, bar_data);
-                       
+        bars[i].flag |= PCI_BAR_IS_ALLOCATED;
 
         /* Now enable the memory or I/O mapping. */
         cmd = pci_readw(devfn, PCI_COMMAND);
@@ -439,6 +525,54 @@ void pci_setup(void)
         else
             cmd |= PCI_COMMAND_IO;
         pci_writew(devfn, PCI_COMMAND, cmd);
+
+        /* After the last bar, check if the 32-bit pass is complete. */
+        if ( i == nr_bars - 1 && bar32_allocating )
+        {
+            /*
+             * We don't populate more RAM a second time to finish the
+             * 32-bit BARs, so just go on to allocate the 64-bit BARs.
+             */
+            if ( mmio32_unallocated_total )
+            {
+                bar32_allocating = 0;
+                mmio32_unallocated_total = 0;
+                high_mem_resource.base =
+                        ((uint64_t)hvm_info->high_mem_pgend) << PAGE_SHIFT;
+                goto further_allocate;
+            }
+
+            /* Total up the still-unallocated 32-bit BARs. */
+            for ( n = 0; n < nr_bars ; n++ )
+            {
+                if ( !(bars[n].flag & PCI_BAR_IS_64BIT) )
+                {
+                    uint32_t devfn32, bar_reg32, bar_data32;
+                    uint64_t bar_sz32;
+                    devfn32   = bars[n].devfn;
+                    bar_reg32 = bars[n].bar_reg;
+                    bar_sz32  = bars[n].bar_sz;
+                    bar_data32 = pci_readl(devfn32, bar_reg32);
+                    if ( !bar_data32 )
+                        mmio32_unallocated_total += bar_sz32;
+                }
+            }
+
+            /*
+             * We have to populate more RAM in order to allocate
+             * the remaining 32-bit BARs.
+             */
+            if ( mmio32_unallocated_total )
+            {
+                cur_pci_mem_start = pci_mem_start - mmio32_unallocated_total;
+                relocate_ram_for_pci_memory(cur_pci_mem_start);
+                exp_mem_resource.base = cur_pci_mem_start;
+                exp_mem_resource.max = pci_mem_start;
+            }
+            else
+                bar32_allocating = 0;
+            goto further_allocate;
+        }
     }
 
     if ( pci_hi_mem_start )
-- 
1.9.1

