Re: [Xen-devel] [PATCH v2 03/19] libxc: allocate memory with vNUMA information for PV guest
On Mon, 2014-12-01 at 15:33 +0000, Wei Liu wrote:
No longer any description of what is going on? Or at least comments on what
the new fields mean. (I figure two of them form an array mapping v to p,
but I'm not sure what the other one is...)
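Something like this would do, as I read the hunk below (my wording, so only a
sketch; vnode_size looks to be in MiB given the << 20 / >> PAGE_SHIFT
conversions later on):

    /* vNUMA information */
    unsigned int *vnode_to_pnode; /* nr_vnodes entries: pnode each vnode maps to,
                                     or XC_VNUMA_NO_NODE for no binding */
    uint64_t *vnode_size;         /* nr_vnodes entries: size of each vnode, in MiB */
    unsigned int nr_vnodes;       /* number of virtual NUMA nodes */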
> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Cc: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> Cc: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
> ---
> tools/libxc/include/xc_dom.h | 5 +++
> tools/libxc/xc_dom_x86.c | 72 +++++++++++++++++++++++++++++++++++-------
> tools/libxc/xc_private.h | 2 ++
> 3 files changed, 68 insertions(+), 11 deletions(-)
>
> diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
> index 07d7224..577a017 100644
> --- a/tools/libxc/include/xc_dom.h
> +++ b/tools/libxc/include/xc_dom.h
> @@ -167,6 +167,11 @@ struct xc_dom_image {
> struct xc_dom_loader *kernel_loader;
> void *private_loader;
>
> + /* vNUMA information */
> + unsigned int *vnode_to_pnode;
> + uint64_t *vnode_size;
> + unsigned int nr_vnodes;
> +
> /* kernel loader */
> struct xc_dom_arch *arch_hooks;
> /* allocate up to virt_alloc_end */
> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> index bf06fe4..3286232 100644
> --- a/tools/libxc/xc_dom_x86.c
> +++ b/tools/libxc/xc_dom_x86.c
> @@ -759,7 +759,8 @@ static int x86_shadow(xc_interface *xch, domid_t domid)
> int arch_setup_meminit(struct xc_dom_image *dom)
> {
> int rc;
> - xen_pfn_t pfn, allocsz, i, j, mfn;
> + xen_pfn_t pfn, allocsz, mfn, total, pfn_base;
> + int i, j;
>
> rc = x86_compat(dom->xch, dom->guest_domid, dom->guest_type);
> if ( rc )
> @@ -811,18 +812,67 @@ int arch_setup_meminit(struct xc_dom_image *dom)
> /* setup initial p2m */
> for ( pfn = 0; pfn < dom->total_pages; pfn++ )
> dom->p2m_host[pfn] = pfn;
> -
> +
> + /* setup dummy vNUMA information if it's not provided */
> + if ( dom->nr_vnodes == 0 )
> + {
> + dom->nr_vnodes = 1;
> + dom->vnode_to_pnode = xc_dom_malloc(dom, sizeof(*dom->vnode_to_pnode));
> + dom->vnode_to_pnode[0] = XC_VNUMA_NO_NODE;
Does this indicate there is no vnode 0, or that vnode 0 maps to an
arbitrary (but unknown) pnode?
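From the allocation loop further down it looks like the latter, i.e. roughly
this (the definition isn't in this hunk, so the value here is a guess, purely
for illustration):

    /* illustrative only: "no binding" sentinel, Xen picks the pnode freely */
    #define XC_VNUMA_NO_NODE (~0U)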
> + dom->vnode_size = xc_dom_malloc(dom, sizeof(*dom->vnode_size));
Might as well point this at &dummy_foo, which is an on-stack thing of the
right dummy type.
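i.e. roughly this, assuming the fields aren't consulted again after
arch_setup_meminit returns (the dummy_* names are made up):

    unsigned int dummy_vnode_to_pnode = XC_VNUMA_NO_NODE; /* on-stack dummies */
    uint64_t dummy_vnode_size;

    if ( dom->nr_vnodes == 0 )
    {
        dom->nr_vnodes = 1;
        dummy_vnode_size = (dom->total_pages << PAGE_SHIFT) >> 20;
        dom->vnode_to_pnode = &dummy_vnode_to_pnode;
        dom->vnode_size = &dummy_vnode_size;
    }

That avoids the two one-element heap allocations entirely.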
> + dom->vnode_size[0] = ((dom->total_pages << PAGE_SHIFT) >> 20);
One unnecessary set of ()s
> + }
> +
> + total = 0;
> + for ( i = 0; i < dom->nr_vnodes; i++ )
> + total += ((dom->vnode_size[i] << 20) >> PAGE_SHIFT);
> + if ( total != dom->total_pages )
> + {
> + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
> + "%s: number of pages requested by vNUMA
> (0x%"PRIpfn") mismatches number of pages configured for domain
> (0x%"PRIpfn")\n",
Can you formulate a terser message and avoid the long line?
Perhaps "vNUMA page count mismatch 0x%PRIpfn != 0x%PRIpfn" conveys all
the useful bits?
Alternatively what if we were to mandate that dom->total_pages must be
set to zero by the caller iff they are passing nr_vnodes != 0? Then
calculate total_pages internally.
Might have too many knock-on effects on callers?
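Roughly (a sketch of that alternative, with error handling kept minimal):

    if ( dom->nr_vnodes != 0 )
    {
        if ( dom->total_pages != 0 )
            return -EINVAL; /* caller must leave total_pages zero with vNUMA info */
        for ( i = 0; i < dom->nr_vnodes; i++ )
            dom->total_pages += (dom->vnode_size[i] << 20) >> PAGE_SHIFT;
    }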
> + __FUNCTION__, total, dom->total_pages);
> + return -EINVAL;
> + }
> +
> /* allocate guest memory */
> - for ( i = rc = allocsz = 0;
> - (i < dom->total_pages) && !rc;
> - i += allocsz )
> + pfn_base = 0;
> + for ( i = 0; i < dom->nr_vnodes; i++ )
> {
> - allocsz = dom->total_pages - i;
> - if ( allocsz > 1024*1024 )
> - allocsz = 1024*1024;
> - rc = xc_domain_populate_physmap_exact(
> - dom->xch, dom->guest_domid, allocsz,
> - 0, 0, &dom->p2m_host[i]);
> + unsigned int memflags;
> + uint64_t pages;
> +
> + memflags = 0;
> + if ( dom->vnode_to_pnode[i] != XC_VNUMA_NO_NODE )
> + {
> + memflags |= XENMEMF_exact_node(dom->vnode_to_pnode[i]);
> + memflags |= XENMEMF_exact_node_request;
> + }
> +
> + pages = (dom->vnode_size[i] << 20) >> PAGE_SHIFT;
> +
> + for ( j = 0; j < pages; j += allocsz )
> + {
> + allocsz = pages - j;
> + if ( allocsz > 1024*1024 )
> + allocsz = 1024*1024;
> +
> + rc = xc_domain_populate_physmap_exact(dom->xch,
> + dom->guest_domid, allocsz, 0, memflags,
> + &dom->p2m_host[pfn_base+j]);
> +
> + if ( rc )
> + {
> + if ( dom->vnode_to_pnode[i] != XC_VNUMA_NO_NODE )
> + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
> + "%s: fail to allocate 0x%"PRIx64" pages
> for vnode %d on pnode %d out of 0x%"PRIpfn"\n",
"vNUMA: failed to allocate 0x%"PRIx64"/0x%"PRIx64" pages (v=%d, p=%d)."?
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel