
Re: [Xen-devel] [PATCH v3 3/7] libxc: vnodes allocation on NUMA nodes.



On lun, 2013-11-18 at 15:24 -0500, Elena Ufimtseva wrote:
> Signed-off-by: Elena Ufimtseva <ufimtseva@xxxxxxxxx>

> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> index e034d62..1839360 100644

> @@ -802,27 +802,47 @@ int arch_setup_meminit(struct xc_dom_image *dom)
>      else
>      {
>          /* try to claim pages for early warning of insufficient memory avail */
> +        rc = 0;
>          if ( dom->claim_enabled ) {
>              rc = xc_domain_claim_pages(dom->xch, dom->guest_domid,
>                                         dom->total_pages);
>              if ( rc )
> +            {
> +                xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
> +                             "%s: Failed to claim mem for dom\n",
> +                             __FUNCTION__);
>                  return rc;
> +            }
>
Mmm... here you're just adding some log output. I think it's a minor
thing, i.e., it's not terrible to have this in this patch. However,
ideally, this should happen in its own patch, as it is not much related
to vNUMA, is it?

>          /* allocate guest memory */
> -        for ( i = rc = allocsz = 0;
> -              (i < dom->total_pages) && !rc;
> -              i += allocsz )
> +        if ( dom->nr_vnodes > 0 )
>          {
> -            allocsz = dom->total_pages - i;
> -            if ( allocsz > 1024*1024 )
> -                allocsz = 1024*1024;
> -            rc = xc_domain_populate_physmap_exact(
> -                dom->xch, dom->guest_domid, allocsz,
> -                0, 0, &dom->p2m_host[i]);
> +            rc = arch_boot_numa_alloc(dom);
> +            if ( rc )
> +            {
> +                xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
> +                             "%s: Failed to allocate memory on NUMA nodes\n",
> +                             __FUNCTION__);
> +                return rc;
> +            }
> +        }
> +        else
> +        {
> +            for ( i = rc = allocsz = 0;
> +                  (i < dom->total_pages) && !rc;
> +                  i += allocsz )
> +            {
> +                allocsz = dom->total_pages - i;
> +                if ( allocsz > 1024*1024 )
> +                    allocsz = 1024*1024;
> +                rc = xc_domain_populate_physmap_exact(
> +                    dom->xch, dom->guest_domid, allocsz,
> +                    0, 0, &dom->p2m_host[i]);
> +            }
>          }
I think I agree with George's comment from last round: do unify the two
paths (yes, I saw it's on the TODO for this patch... so, go ahead and do
it :-) ).
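
Just to give an idea of what I mean (rough and completely untested sketch,
reusing the vnuma_* fields and VNUMA_NO_NODE from this series;
populate_range() is just a name I made up for the helper), both cases could
funnel through one chunked-allocation routine:

static int populate_range(struct xc_dom_image *dom, xen_pfn_t pfn_base,
                          unsigned long nr_pages, unsigned int memflags)
{
    unsigned long i, allocsz;
    int rc = 0;

    for ( i = 0; i < nr_pages && !rc; i += allocsz )
    {
        /* same 1024*1024 pages-per-hypercall cap as the existing loop */
        allocsz = nr_pages - i;
        if ( allocsz > 1024*1024 )
            allocsz = 1024*1024;
        rc = xc_domain_populate_physmap_exact(
            dom->xch, dom->guest_domid, allocsz,
            0, memflags, &dom->p2m_host[pfn_base + i]);
    }
    return rc;
}

and then, in arch_setup_meminit():

    if ( dom->nr_vnodes > 0 )
    {
        xen_pfn_t pfn_base = 0;
        unsigned int n;

        for ( n = 0; n < dom->nr_vnodes && !rc; n++ )
        {
            /* vnuma_memszs[] is in MB, turn it into a page count */
            unsigned long nr = (dom->vnuma_memszs[n] << 20) >> PAGE_SHIFT_X86;
            unsigned int memflags = 0;

            if ( dom->vnode_to_pnode[n] != VNUMA_NO_NODE )
                memflags = XENMEMF_exact_node(dom->vnode_to_pnode[n]) |
                           XENMEMF_exact_node_request;

            rc = populate_range(dom, pfn_base, nr, memflags);
            pfn_base += nr;
        }
    }
    else
        rc = populate_range(dom, 0, dom->total_pages, 0);

That way the non-vNUMA case becomes just "one range covering everything, no
memflags", and the chunking logic lives in exactly one place.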

> +int arch_boot_numa_alloc(struct xc_dom_image *dom)
> +{ 
> +    int rc;
> +    unsigned int n;
> +    unsigned long long guest_pages;
> +    unsigned long long allocsz = 0, node_pfn_base, i;
> +    unsigned long memflags;
> +
> +    rc = allocsz = node_pfn_base = 0;
>  
> +    allocsz = 0;
> +    for ( n = 0; n < dom->nr_vnodes; n++ )
> +    {
> +        memflags = 0;
> +        if ( dom->vnode_to_pnode[n] != VNUMA_NO_NODE )
> +        {
> +            memflags |= XENMEMF_exact_node(dom->vnode_to_pnode[n]);
> +            memflags |= XENMEMF_exact_node_request;
> +        }
> +        guest_pages = (dom->vnuma_memszs[n] << 20) >> PAGE_SHIFT_X86;
>
I probably would call this 'pages_node' (or 'node_pages', or
'guest_pages_node', etc.), but that's also something not that
important. :-)
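
(To spell out the conversion, since the MB unit is only implicit here: a
1024 MB vnode gives (1024 << 20) >> PAGE_SHIFT_X86 = 262144 4k pages, so what
is being computed really is the per-node page count, hence the naming
suggestion.)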

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
