Re: [Xen-devel] [PATCH] libxl: avoid considering pCPUs outside of the cpupool during NUMA placement
On Fri, Oct 21, 2016 at 12:50:58PM +0200, Juergen Gross wrote:
> On 21/10/16 12:29, Wei Liu wrote:
> > On Fri, Oct 21, 2016 at 11:56:14AM +0200, Dario Faggioli wrote:
> >> During NUMA automatic placement, the information of how many vCPUs
> >> can run on which NUMA nodes is used in order to spread the load as
> >> evenly as possible.
> >>
> >> Such information is derived from vCPU hard and soft affinity, but
> >> that is not enough. In fact, affinity can be set to be a superset
> >> of the pCPUs that belong to the cpupool in which a domain is but,
> >> of course, the domain will never run on pCPUs outside of its
> >> cpupool.
> >>
> >> Take this into account in the placement algorithm.
> >>
> >> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> >> Reported-by: George Dunlap <george.dunlap@xxxxxxxxxx>
> >> ---
> >> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> >> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> >> Cc: George Dunlap <george.dunlap@xxxxxxxxxx>
> >> Cc: Juergen Gross <jgross@xxxxxxxx>
> >> Cc: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
> >> ---
> >> Wei, this is a bugfix, so I think it should go in 4.8.
> >>
> >
> > Yes. I agree.
> >
> >> Ian, this is a bugfix, so I think it is a backporting candidate.
> >>
> >> Also, note that this function does not respect the libxl coding
> >> style, as far as error handling is concerned. However, given that
> >> I'm asking for it to go in now and to be backported, I've tried to
> >> keep the changes to a minimum.
> >>
> >> I'm up for a follow-up patch for 4.9 to make the style compliant.
> >>
> >> Thanks, Dario
> >> ---
> >>  tools/libxl/libxl_numa.c | 25 ++++++++++++++++++++++---
> >>  1 file changed, 22 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
> >> index 33289d5..f2a719d 100644
> >> --- a/tools/libxl/libxl_numa.c
> >> +++ b/tools/libxl/libxl_numa.c
> >> @@ -186,9 +186,12 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
> >>  {
> >>      libxl_dominfo *dinfo = NULL;
> >>      libxl_bitmap dom_nodemap, nodes_counted;
> >> +    libxl_cpupoolinfo cpupool_info;
> >>      int nr_doms, nr_cpus;
> >>      int i, j, k;
> >>
> >> +    libxl_cpupoolinfo_init(&cpupool_info);
> >> +
> >
> > Please move this into the loop below, see (*).
>
> Why? libxl_cpupoolinfo_dispose() will clear cpupool_info.

One _init should pair with one _dispose. Even if _dispose is idempotent
at the moment, it might not be so in the future.

> >>      dinfo = libxl_list_domain(CTX, &nr_doms);
> >>      if (dinfo == NULL)
> >>          return ERROR_FAIL;
> >> @@ -205,12 +208,18 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
> >>      }
> >>
> >>      for (i = 0; i < nr_doms; i++) {
> >> -        libxl_vcpuinfo *vinfo;
> >> -        int nr_dom_vcpus;
> >> +        libxl_vcpuinfo *vinfo = NULL;
> >
> > This is not necessary because vinfo is written right away.
>
> No, the first "goto next" is before vinfo is written.

Yes, this is necessary.

Thanks for catching this.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
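For reference, here is a minimal sketch, in the style of the function
quoted above, of the structure the review is converging on:
libxl_cpupoolinfo_init() moved inside the per-domain loop so that each
_init pairs with exactly one _dispose, and vinfo initialised to NULL so
that an early "goto next", taken before libxl_list_vcpu() runs, leaves
the cleanup path safe. This is not the actual follow-up patch: the
per-node counting is elided, the surrounding declarations (dinfo,
nr_doms, nr_cpus, tinfo) are assumed to be those of nr_vcpus_on_nodes()
as quoted, and the libxl_cpupool_info() lookup keyed on dinfo[i].cpupool
is an assumption about how the domain's cpupool would be obtained.

    for (i = 0; i < nr_doms; i++) {
        /* NULL so the cleanup at "next" is safe even if we bail out
         * before libxl_list_vcpu() has run. */
        libxl_vcpuinfo *vinfo = NULL;
        libxl_cpupoolinfo cpupool_info;
        int nr_dom_vcpus = 0;

        /* One _init per iteration, paired with exactly one _dispose at
         * the "next" label below. */
        libxl_cpupoolinfo_init(&cpupool_info);

        /* Assumed lookup of the domain's cpupool; not part of the
         * quoted patch.  On failure, skip this domain. */
        if (libxl_cpupool_info(CTX, &cpupool_info, dinfo[i].cpupool))
            goto next;

        vinfo = libxl_list_vcpu(CTX, dinfo[i].domid, &nr_dom_vcpus, &nr_cpus);
        if (vinfo == NULL)
            goto next;

        /* ... count this domain's vCPUs on each node, considering only
         * pCPUs that are both in the vCPU's affinity and in the
         * domain's cpupool (cpupool_info.cpumap) ... */

 next:
        libxl_vcpuinfo_list_free(vinfo, nr_dom_vcpus);
        libxl_cpupoolinfo_dispose(&cpupool_info);
    }

Keeping the _init/_dispose pair inside the loop means every iteration
starts from a freshly initialised cpupool_info, so the code stays
correct even if a future _dispose stops being idempotent, which is
exactly the concern raised in the thread.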