Re: [Xen-devel] [PATCH v3 8/8] tools: add tools support for Intel CAT
On Wed, 2015-04-01 at 15:55 +0800, Chao Peng wrote:
> On Tue, Mar 31, 2015 at 05:28:54PM +0100, Ian Campbell wrote:
> > On Thu, 2015-03-26 at 20:38 +0800, Chao Peng wrote:
> > > This is the xc/xl changes to support Intel Cache Allocation
> > > Technology (CAT). Two commands are introduced:
> > > - xl psr-cat-cbm-set [-s socket] <domain> <cbm>
> > > Set cache capacity bitmasks (CBM) for a domain.
> > > - xl psr-cat-show <domain>
> > > Show Cache Allocation Technology information.
> >
> > Please could you show an example of the output from this one.
>
> I did put the sample output in the cover letter, but it sounds like here
> is the right place.
Sorry, I didn't think to look there at the time.
> >
> > >
> > > Signed-off-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>
> > > ---
> > > +
> > > +=item B<psr-cat-cbm-set> [I<OPTIONS>] [I<domain-id>]
> > > +
> > > +Set cache capacity bitmasks (CBM) for a domain.
> >
> > What is the syntax of these bitmaps, and where do I pass them?
>
> [I<cbm>] is missing here.
>
> >
> > I think there is also a bunch of terminology (CBM, COS) which need
> > explaining, otherwise no one will know how to use it. Perhaps that
> > belongs in a separate document or the wiki though?
>
> It sounds worthwhile to create docs/misc/xl-psr.markdown.
I think the syntax of I<cbm> needs to be in this document, for sure, but
I suspect that this stuff is complex enough that a more complete
overview in a separate document might be useful too (possibly it should
also cover related existing things such as CMT and MBM, assuming they
are related enough?).
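(For the record, my rough understanding, which that document should
confirm or correct: a CBM is a mask of at most cbm_len bits, each set
bit selecting one portion of the shared L3, and the hardware requires
the set bits to be contiguous. So with cbm_len = 12, for example, a mask
of 0x00f would limit a domain to a quarter of the cache. That is the
sort of thing which needs spelling out somewhere.)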
>
> > > +
> > > +#define LIBXL_PSR_TARGET_ALL (~0U)
> > > +int libxl_psr_cat_set_cbm(libxl_ctx *ctx, uint32_t domid,
> > > +                          libxl_psr_cat_type type, uint32_t target,
> > > +                          uint64_t cbm);
> > > +int libxl_psr_cat_get_cbm(libxl_ctx *ctx, uint32_t domid,
> > > +                          libxl_psr_cat_type type, uint32_t target,
> > > +                          uint64_t *cbm_r);
> >
> > What are the units of the various cbm*?
> >
> > If they are now more precisely typed (i.e. not the opaque data from last
> > time) then is the type parameter still needed?
>
> 'type' is still used because in the future the possible values could be
> L3_CODE_CBM/L3_DATA_CBM or even L2_CBM. We want to keep it common.
OK. And all future "cat_type" values will have similar types/semantics?
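For concreteness, the sort of caller I have in mind (just a sketch
against the prototypes quoted above; ctx and domid assumed to already
exist, error handling trimmed):

    /* Sketch only: set the same L3 CBM for a domain on every socket,
     * then read back what socket 0 ended up with.  0xff is an
     * arbitrary example mask. */
    uint64_t cbm = 0xff, out;
    int rc;

    rc = libxl_psr_cat_set_cbm(ctx, domid, LIBXL_PSR_CAT_TYPE_L3_CBM,
                               LIBXL_PSR_TARGET_ALL, cbm);
    if (!rc)
        rc = libxl_psr_cat_get_cbm(ctx, domid, LIBXL_PSR_CAT_TYPE_L3_CBM,
                                   0 /* socket */, &out);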
> > > +int libxl_psr_cat_get_l3_info(libxl_ctx *ctx, uint32_t socket,
> > > +                              uint32_t *cos_max_r, uint32_t *cbm_len_r);
> >
> > Is there going to be any user documentation regarding what cos and cbm
> > are and how to interpret them and set them?
>
> So I'd like to create docs/misc/xl-psr.markdown to answer all these
> questions.
OK.
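(For what it's worth, the kind of usage I would expect that document to
illustrate, sketched against the prototype above with ctx and socket
assumed to exist; whether the set bits must also be contiguous is
exactly the sort of rule it should state:

    /* Sketch only: how I imagine cbm_len being used to sanity-check a
     * candidate mask before calling set_cbm. */
    uint32_t cos_max, cbm_len;
    uint64_t cbm = 0xff;        /* candidate mask, arbitrary example */
    int rc;

    rc = libxl_psr_cat_get_l3_info(ctx, socket, &cos_max, &cbm_len);
    if (!rc && cbm_len < 64 && (cbm >> cbm_len))
        fprintf(stderr, "cbm has bits above cbm_len (%u)\n", cbm_len);
)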
> >
> > > +    printf("%5d%25s", dominfo->domid, domain_name);
> > > +    free(domain_name);
> > > +
> > > +    if (!libxl_psr_cat_get_cbm(ctx, dominfo->domid,
> > > +                               LIBXL_PSR_CAT_TYPE_L3_CBM, socket, &cbm))
> > > +        printf("%#16"PRIx64, cbm);
> > > +
> > > +    printf("\n");
> > > +}
> > > +
> > > +static int psr_cat_print_socket(uint32_t domid, uint32_t socket)
> > > +{
> > > +    uint32_t l3_cache_size, cos_max, cbm_len;
> > > +    int rc;
> > > +
> > > +    rc = libxl_psr_cmt_get_l3_cache_size(ctx, socket, &l3_cache_size);
> > > +    if (rc) {
> > > +        fprintf(stderr, "Failed to get l3 cache size for socket:%d\n",
> > > +                socket);
> > > +        return -1;
> > > +    }
> > > +
> > > +    rc = libxl_psr_cat_get_l3_info(ctx, socket, &cos_max, &cbm_len);
> >
> > Would an interface which returns a list with information for all sockets
> > not be more convenient?
>
> If this one returns data for all sockets rather than for a specified
> socket (which I agree with), and we leave the legacy cmt code out of it,
> then I think I can make libxl_count_physical_sockets() private and move
> it to libxl/libxl_psr.c.
What is the legacy cmt code? But otherwise I agree, yes.
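Something along the following lines is what I had in mind (purely
illustrative; none of these names exist, they are just to show the
shape of an all-sockets interface):

    /* Hypothetical all-sockets variant: one call returns per-socket L3
     * info; *nr_r is the number of entries in the returned array. */
    typedef struct {
        uint32_t cos_max;
        uint32_t cbm_len;
    } libxl_psr_cat_info;

    int libxl_psr_cat_get_l3_info_all(libxl_ctx *ctx,
                                      libxl_psr_cat_info **info_r,
                                      unsigned int *nr_r);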
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel