Re: [Xen-devel] [PATCH v5 10/28] xsplice: Implement payload loading



On Mon, Apr 04, 2016 at 03:44:44PM -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Apr 01, 2016 at 03:18:45AM -0600, Jan Beulich wrote:
> > >>> On 31.03.16 at 23:26, <konrad@xxxxxxxxxx> wrote:
> > >>  Also - how well will this O(n^2) lookup work once there are enough
> > >> payloads? I think this calls for the alternative vmap() extension I've
> > >> been suggesting earlier.
> > > 
> > > Could you elaborate on the vmap extension a bit please?
> > > 
> > > Your earlier email seems to say: drop the vmap API and just
> > > allocate the underlying pages yourself.
> > 
> > Actually I had also said in that earlier mail: "If, otoh, you left that
> > VA management to (an extended version of) vmap(), by e.g.
> > allowing the caller to request allocation from a different VA range
> > (much like iirc x86-64 Linux handles its modules address range
> > allocation), things would be different. After all the VA
> > management is the important part here, while the backing
> > memory allocation is just a trivial auxiliary operation."
> > 
> > I.e. elaboration here really just consists of the referral to the
> > respective Linux approach.
> 
> I am in need of guidance here, I am afraid.
> 
> Let me explain (I did this on IRC, but this will have a broader scope):
> 
> In Linux we have 'struct vm_struct', which internally contains the start
> address and size (amongst other things). Callers usually use
> __vmalloc_node_range to provide their own address range. Internally the
> vmalloc API allocates its tracking structures from the normal SLAB
> allocator, and also keeps a red-black tree (of 'vmap_area' entries,
> covering the whole vmalloc space) for all the users of its API. When the
> equivalent of our vm_size() is needed, this tree is searched to find the
> area for the provided virtual address. There is a lot of code behind
> this. Copying it wholesale and jamming it in does not look easy, so it
> would be better to take the concepts from it and implement them ourselves.
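> 
> For reference, x86-64 Linux's module_alloc() (simplified here and quoted
> from memory, so treat it as a sketch rather than the exact code) is just
> a thin wrapper around that range-based allocator:
> 
> void *module_alloc(unsigned long size)
> {
>         if (PAGE_ALIGN(size) > MODULES_LEN)
>                 return NULL;
>         return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
>                                     GFP_KERNEL | __GFP_HIGHMEM,
>                                     PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
>                                     __builtin_return_address(0));
> }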
> 
> 
> In Xen we set up a bitmap that covers the full vmalloc area (128MB on my
> 4GB box, but it can go up to 64GB) - the 128MB vmalloc area requires
> about 32K bits (128MB / 4KB pages = 32768 page slots, one bit each).
> 
> For every allocation we "waste" a page (and a bit) so that there is a gap.
> This gap is needed when trying to determine the size of the allocated
> region - when scanning the bitmap we can easily find the cleared
> bit, which is akin to a fencepost marking the end of the region.
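> 
> That is exactly what the current vm_size() in xen/common/vmap.c relies
> on - scan forward from the start bit for the clear fencepost bit; the
> distance between the two is the size:
> 
>     start = vm_index(va);    /* bit index of the allocation's first page */
>     end = find_next_zero_bit(vm_bitmap, vm_top, start + 1);
>     return end - start;      /* size of the region, in pages */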
> 
> 
> To make Xen's vmalloc API generic I need to make it able to deal
> wholesale with virtual addresses that are not part of its space (as in
> not within VMAP_VIRT_START to vm_top). To start with, vm_size() needs
> to be able to return the size for any virtual address given to it
> (either the ones from the vmalloc area or the ones provided earlier
> via vmalloc_cb).
> 
> One easy mechanism is to embed an array of simplified 'struct vm_area'
> structures:
> 
> struct vm_area {
>       unsigned long va;
> };
> 
> with one entry for every slot in the VMAP_VIRT_START area (that is, 32K
> entries). vm_size and all the rest can check this array if the virtual
> address provided is not within the vmalloc virtual addresses. If there
> is a match we just need to consult vm_bitmap at the same index and
> figure out where the clear (fencepost) bit is.
> The downside is that I have to walk the full array (32K entries).
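> 
> In rough code that lookup would be (a hypothetical sketch - the array
> and function names are made up; the point is the O(32K) walk):
> 
> static struct vm_area vm_areas[32768];    /* one slot per vm_bitmap bit */
> 
> static unsigned int vm_size_other(unsigned long va)
> {
>     unsigned int i;
> 
>     for ( i = 0; i < 32768; i++ )         /* walk the full array */
>         if ( vm_areas[i].va == va )
>             return find_next_zero_bit(vm_bitmap, vm_top, i + 1) - i;
> 
>     return 0;
> }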
> 
> But when you think about it - most of the time we use normal vmalloc
> addresses, and only in exceptional cases do we need the alternate ones.
> And the only reason to keep track of them is to know the size.
> 
> The easier way would be to track them via a linked list:
> 
> struct vm_area {
>       struct list_head list;
>       unsigned long va;
>       size_t nr;
> };
> 
> And vm_size, vm_index, etc. would consult this list for the virtual
> address and get the proper information. (See the inline patch.)
> 
> But if we are doing that, then why even put it in the vmalloc API? Why
> not track all of this in the user of the API (like it was done in v4 of
> this patch series)?
> 
> Please advise.

I re-read your previous email and I think you were leaning
towards not even having a callback, but rather supplying
the virtual address to the vmalloc APIs and having them track
it afterwards. Like this:

From 738ed247bf214a061c6822ad183c365a4f5731b9 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Mon, 14 Mar 2016 12:02:05 -0400
Subject: [PATCH] vmap: Add vmalloc_range

For those users who want to supply their own virtual
address at which to allocate the underlying pages.

The vmap API also keeps track of this virtual address (along
with the size) so that vunmap, vm_size, and vm_free can operate
on these virtual addresses.

This allows users (such as xSplice) to provide their own
mechanism to change the page flags, and also to use virtual
addresses closer to the hypervisor virtual addresses (at least
on x86), while not having to deal with the allocation of
pages.

For an example of a user, see the patch titled "xsplice: Implement
payload loading".

Note that the displacement between the hypervisor virtual addresses and
the vmalloc area (on x86) is more than 32 bits - which means that ELF
relocations (which are limited to 32 bits; an R_X86_64_PC32, for example,
can only express a signed 32-bit displacement) won't work: we would
truncate the 33rd and 34th bits. Hence we cannot use vmalloc virtual
addresses and must supply our own ranges.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

---
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>

v4: New patch.
v5: Update per Jan's comments.
v6: Drop the stray parentheses on typedefs.
    Ditch the vunmap callback. Stash away the virtual addresses in lists.
    Ditch the vmap callback. Just provide virtual address.
---
 xen/common/vmap.c      | 104 +++++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/vmap.h |  10 +++++
 2 files changed, 114 insertions(+)

diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 134eda0..b63886b 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -19,6 +19,10 @@ static unsigned int __read_mostly vm_end;
 /* lowest known clear bit in the bitmap */
 static unsigned int vm_low;
 
+static LIST_HEAD(vm_area_list);
+
+static DEFINE_SPINLOCK(vm_area_lock);
+
 void __init vm_init(void)
 {
     unsigned int i, nr;
@@ -146,12 +150,34 @@ static unsigned int vm_index(const void *va)
            test_bit(idx, vm_bitmap) ? idx : 0;
 }
 
+static const struct vm_area *vm_find(const void *va)
+{
+    const struct vm_area *found = NULL, *vm;
+
+    spin_lock(&vm_area_lock);
+    list_for_each_entry( vm, &vm_area_list, list )
+    {
+        if ( vm->va != va )
+            continue;
+        found = vm;
+        break;
+    }
+    spin_unlock(&vm_area_lock);
+
+    return found;
+}
+
 static unsigned int vm_size(const void *va)
 {
     unsigned int start = vm_index(va), end;
 
     if ( !start )
+    {
+        const struct vm_area *vm = vm_find(va);
+        if ( vm )
+            return vm->pages;
         return 0;
+    }
 
     end = find_next_zero_bit(vm_bitmap, vm_top, start + 1);
 
@@ -164,6 +190,17 @@ void vm_free(const void *va)
 
     if ( !bit )
     {
+        struct vm_area *vm = (struct vm_area *)vm_find(va);
+
+        if ( vm )
+        {
+            spin_lock(&vm_area_lock);
+            list_del(&vm->list);
+            spin_unlock(&vm_area_lock);
+            xfree(vm->mfn);
+            xfree(vm);
+            return;
+        }
         WARN_ON(va != NULL);
         return;
     }
@@ -199,6 +236,23 @@ void *__vmap(const mfn_t *mfn, unsigned int granularity,
     return va;
 }
 
+static bool_t vmap_range(const mfn_t *mfn, unsigned long va, unsigned int nr)
+{
+    unsigned long cur = va;
+
+    for ( ; va && nr--; ++mfn, cur += PAGE_SIZE )
+    {
+        if ( map_pages_to_xen(cur, mfn_x(*mfn), 1, PAGE_HYPERVISOR) )
+        {
+            if ( cur != va )
+                destroy_xen_mappings(va, cur);
+            return 0;
+        }
+    }
+
+    return 1;
+}
+
 void *vmap(const mfn_t *mfn, unsigned int nr)
 {
     return __vmap(mfn, 1, nr, 1, PAGE_HYPERVISOR);
@@ -216,6 +270,56 @@ void vunmap(const void *va)
     vm_free(va);
 }
 
+struct vm_area *vmalloc_range(size_t size, unsigned long start)
+{
+    mfn_t *mfn;
+    size_t pages, i;
+    struct page_info *pg;
+    struct vm_area *vm = NULL;
+
+    ASSERT(size);
+
+    pages = PFN_UP(size);
+    mfn = xmalloc_array(mfn_t, pages);
+    if ( mfn == NULL )
+        return NULL;
+
+    vm = xmalloc(struct vm_area);
+    if ( !vm )
+    {
+        xfree(mfn);
+        return NULL;
+    }
+    vm->mfn = mfn;
+
+    for ( i = 0; i < pages; i++ )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+            goto error;
+        mfn[i] = _mfn(page_to_mfn(pg));
+    }
+
+    if ( !vmap_range(mfn, start, pages) )
+        goto error;
+
+    vm->va = (void *)start;
+    vm->pages = pages;
+
+    spin_lock(&vm_area_lock);
+    list_add(&vm->list, &vm_area_list);
+    spin_unlock(&vm_area_lock);
+
+    return vm;
+
+ error:
+    while ( i-- )
+        free_domheap_page(mfn_to_page(mfn_x(mfn[i])));
+    xfree(vm->mfn);
+    xfree(vm);
+    return NULL;
+}
+
 void *vmalloc(size_t size)
 {
     mfn_t *mfn;
diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
index 5671ac8..4c9a350 100644
--- a/xen/include/xen/vmap.h
+++ b/xen/include/xen/vmap.h
@@ -12,6 +12,16 @@ void *__vmap(const mfn_t *mfn, unsigned int granularity,
 void *vmap(const mfn_t *mfn, unsigned int nr);
 void vunmap(const void *);
 void *vmalloc(size_t size);
+
+struct vm_area {
+    struct list_head list;
+    mfn_t *mfn;
+    void *va;
+    unsigned int pages;
+};
+
+struct vm_area *vmalloc_range(size_t size, unsigned long start);
+
 void *vzalloc(size_t size);
 void vfree(void *va);
 
-- 
2.5.0
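
For completeness, a user such as xsplice would then do something along
these lines (a hypothetical sketch - the payload fields and the chosen
virtual address are made up for illustration):

    struct vm_area *vm;

    /* A VA within a 32-bit displacement of the hypervisor text. */
    vm = vmalloc_range(payload->size, payload->va);
    if ( !vm )
        return -ENOMEM;

    memcpy(vm->va, payload->raw, payload->size);
    /* ... apply ELF relocations, tighten page permissions ... */

    /* Teardown - vunmap()/vm_free() now recognize this address. */
    vunmap(vm->va);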


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel