
Re: [Xen-devel] [PATCH v4 15/21] libxc: allocate memory with vNUMA information for HVM guest



On Fri, 2015-01-23 at 11:13 +0000, Wei Liu wrote:
> The algorithm is more or less the same as the one used for PV guest.

Any reason the code can't be shared then? :-D
>  
> +    if ( nr_pages > target_pages )
> +        memflags |= XENMEMF_populate_on_demand;

OOI how do vNUMA and PoD interact? Do you prefer to fill a node as
much as possible (leaving other nodes entirely PoD), or do you prefer to
balance the PoD pages between nodes?

I think the former?

I suspect this depends a lot on the guest behaviour, so there probably
isn't a right answer. Are there corner cases where this might go wrong
though?
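For illustration, the "balanced" alternative raised above could look
something like the sketch below: spread the populated (non-PoD) target
proportionally across the vnodes so each keeps the same
populated-to-total ratio. This is purely hypothetical -- the helper and
its types are made up, and it is not what libxc does.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the "balanced" PoD option: give each vnode a
 * populated-page quota proportional to its share of total memory.
 * Names and types are illustrative, not real libxc interfaces. */
static void balance_pod(const uint64_t *node_pages, unsigned nr_nodes,
                        uint64_t target_pages, uint64_t *populated)
{
    uint64_t total = 0, assigned = 0;
    unsigned i;

    for ( i = 0; i < nr_nodes; i++ )
        total += node_pages[i];

    for ( i = 0; i < nr_nodes; i++ )
    {
        /* Proportional share, rounded down. */
        populated[i] = node_pages[i] * target_pages / total;
        assigned += populated[i];
    }

    /* Hand any rounding remainder to the first node. */
    populated[0] += target_pages - assigned;
}
```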

> +
> +    if ( args->nr_vnuma_info == 0 )
> +    {
> +        /* Build dummy vnode information */
> +        dummy_vnuma_info.vnode = 0;
> +        dummy_vnuma_info.pnode = XC_VNUMA_NO_NODE;
> +        dummy_vnuma_info.pages = args->mem_size >> PAGE_SHIFT;
> +        args->nr_vnuma_info = 1;
> +        args->vnuma_info = &dummy_vnuma_info;

You could have done this in the PV case too. I nearly suggested it
there, but realised I already had, so I figured there was a reason not
to?

> +    }
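The pattern in the hunk above -- synthesize a single dummy vnode
covering all of guest memory when the caller supplies no layout, so the
allocation loop needs no special case -- could be factored as something
like this sketch. The struct and names are illustrative, not the real
libxc types.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the real libxc structures. */
struct vnode_info {
    unsigned vnode;
    unsigned pnode;
    uint64_t pages;
};
#define NO_NODE (~0u)

/* If no vNUMA layout was supplied, point *info at a caller-provided
 * dummy describing one vnode spanning all of guest memory, so the
 * caller's per-vnode loop handles both cases uniformly. */
static unsigned normalize_vnuma(struct vnode_info **info, unsigned nr,
                                struct vnode_info *dummy,
                                uint64_t total_pages)
{
    if ( nr == 0 )
    {
        dummy->vnode = 0;
        dummy->pnode = NO_NODE;
        dummy->pages = total_pages;
        *info = dummy;
        nr = 1;
    }
    return nr;
}
```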
> +    else
> +    {
> +        if ( nr_pages > target_pages )
> +        {
> +            PERROR("Cannot enable vNUMA and PoD at the same time");

And there's the answer to my question above ;-)

> +            goto error_out;
>  
> +    for ( i = 0; i < args->nr_vnuma_info; i++ )
>      {

Reindenting in a precursor patch made this diff a lot easier to read,
thanks.

>              if ( count > max_pages )
> @@ -388,19 +440,20 @@ static int setup_guest(xc_interface *xch,
>                  unsigned long nr_extents = count >> SUPERPAGE_1GB_SHIFT;
>                  xen_pfn_t sp_extents[nr_extents];
>  
> -                for ( i = 0; i < nr_extents; i++ )
> -                    sp_extents[i] =
> -                        page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
> +                for ( j = 0; j < nr_extents; j++ )
> +                    sp_extents[j] =
> +                        page_array[cur_pages+(j<<SUPERPAGE_1GB_SHIFT)];

You might consider s/i/j/ in a precursor patch too. Or, given that the
scope of the outermost loop is quite large, a more specific name than i
(such as vnid) might be more appropriate.
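For reference, the extent-filling loop in the hunk above boils down to
the following: collect the first GFN of each 1GB-aligned chunk as a
superpage extent. The function wrapper is my own framing of the quoted
code, with the loop index named per the suggestion.

```c
#include <assert.h>

/* 2^18 4K pages per 1GB superpage (the x86 value used by libxc). */
#define SUPERPAGE_1GB_SHIFT 18

/* Record the GFN at the start of each 1GB-aligned chunk of the page
 * array as a candidate superpage extent. */
static void fill_sp_extents(const unsigned long *page_array,
                            unsigned long cur_pages,
                            unsigned long nr_extents,
                            unsigned long *sp_extents)
{
    unsigned long j;

    for ( j = 0; j < nr_extents; j++ )
        sp_extents[j] = page_array[cur_pages + (j << SUPERPAGE_1GB_SHIFT)];
}
```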

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
