
Re: [Xen-devel] [PATCH v5 11/25] xen/arm: introduce allocate_memory



Hi,

On 23/10/2018 03:02, Stefano Stabellini wrote:
Introduce an allocate_memory function able to allocate memory for DomUs
and map it at the right guest addresses, according to the guest memory
map: GUEST_RAM0_BASE and GUEST_RAM1_BASE.

This is under #if 0 as not used for now.

Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
---
Changes in v5:
- improve commit message
- code style
- remove unneeded local var
- while loop in allocate_bank_memory to allocate memory so that it
   doesn't have to be contiguous
- combine while loops in allocate_memory

Changes in v4:
- move earlier, add #if 0
- introduce allocate_bank_memory, remove insert_bank
- allocate_bank_memory allocate memory and inserts the bank, while
   allocate_memory specifies where to do that

Changes in v3:
- new patch
---
  xen/arch/arm/domain_build.c | 99 +++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 99 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c61a27f..146d81e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -368,6 +368,105 @@ static void __init allocate_memory_11(struct domain *d,
      }
  }
+#if 0
+static bool __init allocate_bank_memory(struct domain *d,
+                                        struct kernel_info *kinfo,
+                                        gfn_t sgfn,
+                                        unsigned int order)

I don't think your implementation below is equivalent to the one I suggested earlier ([1]). While you handled the contiguity problem, you didn't address the two other points:
       1) The memory size specified by the user may not be a power of 2
pages. For instance, a user can specify 40KB. With your algorithm, we
will end up allocating 32KB in the first bank and 8KB in the second
bank. However, what we want is to allocate the whole 40KB in a single
bank.
       2) You don't check whether the leftover memory is bigger than the size
of the second bank.

Because of 1), you can't reason in terms of order here. You have to reason in bytes or number of pages.

+{
+    int res;
+    struct page_info *pg;
+    struct membank *bank;
+    paddr_t size = pfn_to_paddr(1UL << order);
+    gfn_t _sgfn = sgfn;
+    gfn_t _egfn = gfn_add(sgfn, 1UL << order);
+
+    while ( gfn_x(_sgfn) < gfn_x(_egfn) )
+    {
+        pg = alloc_domheap_pages(d, order, 0);
+        if ( pg != NULL )
+        {
+            res = guest_physmap_add_page(d, _sgfn, page_to_mfn(pg), order);
+            if ( res )
+            {
+                dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
+                goto fail;
+            }
+            _sgfn = gfn_add(_sgfn, 1UL << order);
+        }
+        else
+        {
+            order--;

order may be equal to 0, so here you will underflow.

Overall, it would be best to re-use the code I sent. While not tested, it addresses all the problems, and fixing any remaining bug in that patch should be quite easy.

+            if ( order == 0 )
+            {
+                dprintk(XENLOG_ERR, "Failed to allocated DOMU memory");
+                goto fail;
+            }
+        }
+    }
+
+    bank = &kinfo->mem.bank[kinfo->mem.nr_banks];
+    bank->start = gfn_to_gaddr(sgfn);
+    bank->size = size;
+    kinfo->mem.nr_banks++;
+    kinfo->unassigned_mem -= size;
+
+    return true;
+
+fail:
+    /*
+     * Fatal failure, don't attempt to free memory.
+     */
+    return false;
+}
+
+static void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
+{
+    unsigned int order = get_allocation_size(kinfo->unassigned_mem);
+    unsigned int order_req;
+    unsigned int i;
+
+    dprintk(XENLOG_INFO, "Allocating mappings totalling %ldMB for dom%d:\n",
+            /* Don't want format this as PRIpaddr (16 digit hex) */
+            (unsigned long)(kinfo->unassigned_mem >> 20), d->domain_id);
+
+    kinfo->mem.nr_banks = 0;
+
+    order = get_allocation_size(kinfo->unassigned_mem);
+    if ( order > get_order_from_bytes(GUEST_RAM0_SIZE) )
+        order_req = get_order_from_bytes(GUEST_RAM0_SIZE);
+    else
+        order_req = order;
+    if ( !allocate_bank_memory(d, kinfo, gaddr_to_gfn(GUEST_RAM0_BASE), order_req) )
+        goto fail;
+
+    order -= order_req;
+    if ( order > 0 &&
+         !allocate_bank_memory(d, kinfo, gaddr_to_gfn(GUEST_RAM1_BASE), order) )
+        goto fail;
+
+    for( i = 0; i < kinfo->mem.nr_banks; i++ )
+    {
+        dprintk(XENLOG_INFO, "Dom%d BANK[%d] %#"PRIpaddr"-%#"PRIpaddr" (%ldMB)\n",
+                d->domain_id,
+                i,
+                kinfo->mem.bank[i].start,
+                kinfo->mem.bank[i].start + kinfo->mem.bank[i].size,
+                /* Don't want format this as PRIpaddr (16 digit hex) */
+                (unsigned long)(kinfo->mem.bank[i].size >> 20));
+    }
+
+    return;
+
+fail:
+    dprintk(XENLOG_ERR, "Failed to allocate requested domain memory."
+            /* Don't want format this as PRIpaddr (16 digit hex) */
+            " %ldKB unallocated. Fix the VMs configurations.\n",
+            (unsigned long)kinfo->unassigned_mem >> 10);
+    BUG();
+}
+#endif
+
  static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                                     const struct dt_device_node *node)
  {


[1] https://pastebin.com/FQ9k9CbL

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

