
Re: [Xen-devel] [PATCH] Improve performance of IOCTL_PRIVCMD_MMAPBATCH_V2



On 16/11/12 15:48, David Vrabel wrote:
On 16/11/12 14:45, Mats Petersson wrote:

Add "xen/privcmd:" prefix to subject.

This patch makes IOCTL_PRIVCMD_MMAPBATCH_V2 (and the older V1 version)
map multiple pages (typically 1024) at a time rather than one page at a
time, even when the pages are backed by non-consecutive MFNs. The main
change is to pass a pointer to an array of mfns rather than a single mfn.
To support error reporting, we also pass an err_ptr. If err_ptr is NULL,
it indicates we want the contiguous-pages behaviour, so the mfn value
itself is incremented rather than the pointer. Performance of mapping
guest memory into Dom0 is improved by a factor of around 6 with this change.
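
For reference, the kind of user-space sequence this ioctl serves looks roughly
like the sketch below (a sketch only: the header name, the privcmd device node
and the 4 KiB page size are the usual ones but may differ per setup; in
practice libxc's xc_map_foreign_bulk() is the normal caller of this ioctl).

#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/privcmd.h>	/* struct privcmd_mmapbatch_v2, xen_pfn_t, domid_t */

/* Map 'num' foreign frames of domain 'dom' with a single batched ioctl. */
static void *map_foreign_frames(int privcmd_fd, domid_t dom,
				const xen_pfn_t *mfns, int *errs,
				unsigned int num)
{
	size_t len = (size_t)num * 4096;	/* x86 page size assumed */
	struct privcmd_mmapbatch_v2 batch;
	void *addr;

	addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		    privcmd_fd, 0);
	if (addr == MAP_FAILED)
		return NULL;

	memset(&batch, 0, sizeof(batch));
	batch.num  = num;		/* frames per ioctl, e.g. 1024 */
	batch.dom  = dom;
	batch.addr = (unsigned long)addr;
	batch.arr  = mfns;		/* one (possibly non-consecutive) MFN per page */
	batch.err  = errs;		/* per-frame error codes written back */

	/* One ioctl covers the whole batch; before this patch the kernel
	 * still remapped the frames one page at a time internally. */
	if (ioctl(privcmd_fd, IOCTL_PRIVCMD_MMAPBATCH_V2, &batch) < 0) {
		munmap(addr, len);
		return NULL;
	}
	return addr;
}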
Can you include details on the test and the raw figures as well?

Signed-off-by: Mats Petersson <mats.petersson@xxxxxxxxxx>
---
  arch/x86/xen/mmu.c    |   47 ++++++++++++++++++++++++++-------
  drivers/xen/privcmd.c |   70 ++++++++++++++++++++++++++++++++++++++++++-------
  include/xen/xen-ops.h |    5 ++--
  3 files changed, 100 insertions(+), 22 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index dcf5f2d..c5e23ba 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
  #define REMAP_BATCH_SIZE 16
struct remap_data {
-       unsigned long mfn;
+       unsigned long *mfn;
+       int contiguous;
bool.

        pgprot_t prot;
        struct mmu_update *mmu_update;
  };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
                                 unsigned long addr, void *data)
  {
        struct remap_data *rmd = data;
-       pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+       pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+       /* If we have a contiguous range, just update the mfn itself,
+          else advance the pointer to the next mfn. */
+       if (rmd->contiguous)
+               (*rmd->mfn)++;
+       else
+               rmd->mfn++;
rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
        rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,16 +2502,17 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
        return 0;
  }
+
  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
                               unsigned long addr,
-                              unsigned long mfn, int nr,
-                              pgprot_t prot, unsigned domid)
+                              unsigned long *mfn, int nr,
+                              int *err_ptr, pgprot_t prot,
+                              unsigned domid)
  {
+       int err;
        struct remap_data rmd;
        struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-       int batch;
        unsigned long range;
-       int err = 0;
if (xen_feature(XENFEAT_auto_translated_physmap))
                return -EINVAL;
@@ -2515,9 +2523,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
rmd.mfn = mfn;
        rmd.prot = prot;
+       /* We use err_ptr to indicate whether we are doing a contiguous
+        * mapping or a discontiguous mapping. */
+       rmd.contiguous = !err_ptr;
This is non-obvious for an API call.  Suggest having two wrapper
functions for the two different use cases that share a common internal
implementation.
I originally had a separate "contiguous" argument to the function, but found that at both call sites the pair was either "1, NULL" or "0, not-NULL". So I thought "why not save one argument".

Do you foresee these functions being called from many other places?

e.g.,

int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
                       unsigned long addr,
                       unsigned long *mfns, int nr,
                       int *err_ptr,
                        pgprot_t prot, unsigned domid)

int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
                       unsigned long addr,
                       unsigned long start_mfn, unsigned long end_mfn,
                       pgprot_t prot, unsigned domid)
The original xen_remap_domain_mfn_range() takes mfn and nr, so if we do have two functions I think the original should keep its signature (to reduce the amount of changes). That also keeps the common implementation more symmetrical between the two cases: with start_mfn/end_mfn we would have to back-calculate the "nr" argument, and we clearly can't know "end_mfn" in the array case.
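
Roughly something like the following, sharing one internal helper while keeping the original range signature (the helper name and the array variant's name are just illustrative):

static int do_remap_mfn(struct vm_area_struct *vma,
			unsigned long addr,
			unsigned long *mfn, int nr,
			int *err_ptr, pgprot_t prot,
			unsigned domid)
{
	/* ... body as in this patch, with rmd.contiguous = !err_ptr ... */
	return 0;
}

/* Keeps the existing (mfn, nr) signature, so current callers are untouched. */
int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
			       unsigned long addr,
			       unsigned long mfn, int nr,
			       pgprot_t prot, unsigned domid)
{
	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
}

/* New entry point for the discontiguous array case used by privcmd. */
int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
			       unsigned long addr,
			       unsigned long *mfns, int nr,
			       int *err_ptr, pgprot_t prot,
			       unsigned domid)
{
	return do_remap_mfn(vma, addr, mfns, nr, err_ptr, prot, domid);
}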


It would be nice if the API calls had some docs as well (e.g., in
kernel-doc style).
Yes, that would be nice, wouldn't it? I'll add something.
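
Something along these lines, for instance (the wording, and the return-value note in particular, is only a first stab):

/**
 * xen_remap_domain_mfn_range - map foreign machine frames into a VMA
 * @vma:     VMA to map the frames into
 * @addr:    target virtual address within @vma
 * @mfn:     pointer to the first MFN; with a non-NULL @err_ptr this is an
 *           array of @nr discontiguous MFNs, otherwise a contiguous range
 *           starting at *@mfn
 * @nr:      number of frames to map
 * @err_ptr: if non-NULL, array of @nr ints receiving per-frame errors
 * @prot:    page protection for the new mappings
 * @domid:   domain owning the frames
 *
 * Returns 0 on success or a negative errno on failure.
 */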

--
Mats

David



