
Re: [Xen-devel] [PATCH v8 10/13] libxl/xl: deprecate the build_info->cpumap field



On Tue, 2014-06-17 at 11:16 +0100, Wei Liu wrote:
> On Fri, Jun 13, 2014 at 03:10:28PM +0200, Dario Faggioli wrote:
> [...]
> > meaning we want vcpu 0 to be pinned to pcpu 3,4 and vcpu 1 to
> > be pinned to pcpu 2,3,4,5,6. Before this change, in fact, the
> > list variant (["xx", "yy"]) supported only single values.
> > 
> > BEWARE that, although still being there, for backward
> > compatibility reasons, the cpumap field in build_info is no
> > longer used anywhere in libxl.
> > 
> 
> This paragraph needs updating.
> 
Sure.

> > Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> > ---
> >  docs/man/xl.cfg.pod.5       |    8 +++---
> >  tools/libxl/libxl_create.c  |    6 ----
> >  tools/libxl/libxl_dom.c     |    4 +--
> >  tools/libxl/libxl_types.idl |    6 ++++
> >  tools/libxl/xl_cmdimpl.c    |   61 +++++++++++++++++--------------------------
> >  5 files changed, 34 insertions(+), 51 deletions(-)
> > 
> > diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> > index c087cbc..af48622 100644
> > --- a/docs/man/xl.cfg.pod.5
> > +++ b/docs/man/xl.cfg.pod.5
> > @@ -143,11 +143,11 @@ Combining this with "all" is also possible, meaning "all,^nodes:1"
> >  results in all the vcpus of the guest running on all the cpus on the
> >  host, except for the cpus belonging to the host NUMA node 1.
> >  
> > -=item ["2", "3"] (or [2, 3])
> > +=item ["2", "3-8,^5"]
> >  
> 
> What happens if I still have [2, 3] in my config? From the lexer's PoV
> 2 and "2" are different things.
> 
It will continue to work. I am changing this line because I think the
manual should advertise the "xx" syntax, which is more powerful, i.e.,
it allows ranges. In fact, while [2, 3] works, [2-3, 4-5] does not.

I really think this is fine as:
 - existing configs will continue working
 - new configs should use the "" syntax which, as said, is more
   powerful (see the example right below)
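
Just to make it concrete, here's a minimal xl.cfg sketch (the pcpu
numbers are made up, only for illustration):

  # old list form: only single pcpus per vcpu (still accepted)
  cpus = [2, 3]

  # new string-list form: ranges and exclusions work too, e.g.,
  # vcpu 0 --> pcpu 2, vcpu 1 --> pcpus 3-8 except 5
  cpus = ["2", "3-8,^5"]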

> > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > index d015cf4..443fe7d 100644
> > --- a/tools/libxl/libxl_create.c
> > +++ b/tools/libxl/libxl_create.c
> > @@ -187,12 +187,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
> >      } else if (b_info->avail_vcpus.size > HVM_MAX_VCPUS)
> >          return ERROR_FAIL;
> >  
> > -    if (!b_info->cpumap.size) {
> > -        if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
> > -            return ERROR_FAIL;
> > -        libxl_bitmap_set_any(&b_info->cpumap);
> > -    }
> > -
> 
> As you retain libxl_set_vcpuaffinity_all(b_info->cpumap) you might also
> want to retain this one. I believe you will figure this out. :-)
> 
Even if we retain the call to libxl_set_vcpuaffinity_all(), killing this
hunk would be a good thing. IanC himself suggested, during one round of
review of this series, that it would be best to treat !map.size as 'no
constraints'.

_However_, yes, I think I will keep the above if() for now, and will
have a go at changing the semantics of unallocated bitmaps in another
series.
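
In other words, something like the hunk quoted above stays in
libxl__domain_build_info_setdefault() for the time being (this is just
a sketch of what is kept, with a note about the follow-up):

    /* Keep defaulting an unallocated cpumap to "any pcpu" for now.
     * Treating !cpumap.size as "no constraints" is deferred to a
     * follow-up series. */
    if (!b_info->cpumap.size) {
        if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
            return ERROR_FAIL;
        libxl_bitmap_set_any(&b_info->cpumap);
    }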

> >      libxl_defbool_setdefault(&b_info->numa_placement, true);
> >  
> >      if (!b_info->nodemap.size) {
> > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > index 1767659..0b00470 100644
> > --- a/tools/libxl/libxl_dom.c
> > +++ b/tools/libxl/libxl_dom.c
> > @@ -250,7 +250,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >       * whatever that turns out to be.
> >       */
> >      if (libxl_defbool_val(info->numa_placement)) {
> > -        if (!libxl_bitmap_is_full(&info->cpumap)) {
> > +        if (d_config->b_info.num_vcpu_hard_affinity) {
> >              LOG(ERROR, "Can run NUMA placement only if no vcpu "
> >                         "affinity is specified");
> >              return ERROR_INVAL;
> > @@ -261,8 +261,6 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >              return rc;
> >      }
> >      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> > -    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
> > -                               &info->cpumap, NULL);
> >  
> 
> This hunk, as I said in my other email, should stay.
> 
Yep.

> >      /* If we have the vcpu hard affinity list, honour it */
> >      if (d_config->b_info.num_vcpu_hard_affinity) {
> > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > index 05978d7..cd5c0d4 100644
> > --- a/tools/libxl/libxl_types.idl
> > +++ b/tools/libxl/libxl_types.idl
> > @@ -297,7 +297,11 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
> >  libxl_domain_build_info = Struct("domain_build_info",[
> >      ("max_vcpus",       integer),
> >      ("avail_vcpus",     libxl_bitmap),
> > -    ("cpumap",          libxl_bitmap),
> > +    ("cpumap",          libxl_bitmap), # DEPRECATED!
> > +    # The cpumap field above has been deprecated by the introduction of the
> > +    # vcpu_hard_affinity array. It is no longer used anywhere in libxl code,
> > +    # so one better avoid setting and, in general, using it at all. To do so,
> > +    # is indeed harmless, but won't produce any actual effect on the domain.
> 
> This comment needs updating: cpumap is still functional, but
> vcpu_hard_affinity takes precedence.
> 
> (I skip the parser part as it looks mostly like code motion to me)
> 
Indeed. And thanks for the review. :-)
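
FWIW, what I have in mind for the updated behaviour is roughly the
following (just a sketch; the hard affinity branch is the loop from the
"If we have the vcpu hard affinity list" hunk quoted above):

    /* vcpu_hard_affinity, if present, takes precedence... */
    if (d_config->b_info.num_vcpu_hard_affinity) {
        /* honour the per-vcpu hard affinity list, as in the hunk
         * quoted above */
    } else {
        /* ...otherwise, fall back to the (deprecated) cpumap */
        libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
                                   &info->cpumap, NULL);
    }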

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
