
Re: [Xen-devel] [PATCH v9 1/6] x86: detect and initialize Cache QoS Monitoring feature



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Tuesday, March 04, 2014 4:11 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@xxxxxxxxxx; Ian.Campbell@xxxxxxxxxx;
> Ian.Jackson@xxxxxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx;
> xen-devel@xxxxxxxxxxxxx; konrad.wilk@xxxxxxxxxx; dgdegra@xxxxxxxxxxxxx;
> keir@xxxxxxx
> Subject: RE: [PATCH v9 1/6] x86: detect and initialize Cache QoS Monitoring
> feature
> 
> >>> On 03.03.14 at 14:21, "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >>> On 19.02.14 at 07:32, Dongxiao Xu <dongxiao.xu@xxxxxxxxx> wrote:
> >> > +    /* Allocate CQM buffer size in initialization stage */
> >> > +    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
> >> > +                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/
> >>
> >> Does this really need to be NR_CPUS (rather than nr_cpu_ids)?
> >
> > Okay.
> > As you mentioned in a later comment, the CQM data is indexed per socket.
> > We use NR_CPUS or nr_cpu_ids here because it is big enough to cover the
> > possible number of sockets in the system (even considering the hotplug
> > case). Is there a better way to get the system's socket count (including
> > sockets that currently have no CPUs)?
> 
> I think we should at least get the estimate as close as possible:
> Count the sockets that we know of (i.e. that have at least one
> core/thread) and add the number of "disabled" (hot-pluggable)
> CPUs if ACPI doesn't surface enough information to associate
> them with a socket (but I think MADT provides all the needed data).

It seems that the MADT does not contain socket number information...
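For reference, counting the sockets we already know of (i.e. those with at
least one online CPU) is straightforward; below is a minimal sketch, assuming
Xen's cpu_to_socket() and that socket IDs stay below NR_CPUS:

/* Minimal sketch: count sockets that have at least one online CPU.
 * Hot-pluggable (disabled) CPUs are invisible here, which is exactly
 * the gap under discussion. */
static unsigned int count_known_sockets(void)
{
    DECLARE_BITMAP(socket_seen, NR_CPUS) = { 0 };
    unsigned int cpu, nr_sockets = 0;

    for_each_online_cpu ( cpu )
        if ( !test_and_set_bit(cpu_to_socket(cpu), socket_seen) )
            nr_sockets++;

    return nr_sockets;
}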

Considering that it is difficult to get an accurate socket count at system 
initialization time, what about allocating/freeing the CQM-related memory at 
runtime, when the administrator actually issues the QoS query command?
With this approach, the amount of data shared between Xen and the Dom0 tools 
should be much smaller, since at that point we know:
 - how many processor sockets are active in the system;
 - how many RMIDs are in use in the system.

With the above, we no longer need max_rmid * max_socket worth of memory just 
to share a lot of unnecessary "zero" data; the shared data should then fit in 
less than one page (e.g. with 4 active sockets and 64 active RMIDs, 
64 * (sizeof(domid_t) + 4 * sizeof(uint64_t)) is about 2KB).
What's your opinion about that?

Thanks,
Dongxiao

> 
> >> > +                PAGE_SIZE + 1;
> >> > +    cqm->buffer_size = cqm_pages * PAGE_SIZE;
> >> > +
> >> > +    cqm->buffer =
> >> alloc_xenheap_pages(get_order_from_pages(cqm_pages), 0);
> >>
> >> And does the allocation really need to be physically contiguous?
> >> If so - did you calculate how much more memory you allocate
> >> (due to the rounding up to the next power of 2), to decide
> >> whether it's worthwhile freeing the unused portion?
> >
> > The buffer needs to be contiguous since user space will map it read-only
> > to avoid copying the data.
> 
> That's not an argument for anything other than coding simplicity - user
> space could equally well map an array of dis-contiguous MFNs.
> 
> > According to my rough calculation, the cqm_pages value is about 29 pages on
> > my test machine.
> 
> Your test machine doesn't count; what counts is the maximum
> possible with the currently enforced limits.
> 
> Jan
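
P.S. If we do end up keeping a physically contiguous allocation, freeing the
rounded-up tail of the power-of-2 allocation could look roughly like the
sketch below (assuming page-wise frees out of a larger xenheap allocation
are permissible):

unsigned int order = get_order_from_pages(cqm_pages);
void *buf = alloc_xenheap_pages(order, 0);

if ( buf )
{
    unsigned long i;

    /* Hand back the pages between what we need and what the
     * power-of-2 rounding made us allocate. */
    for ( i = cqm_pages; i < (1UL << order); i++ )
        free_xenheap_pages(buf + (i << PAGE_SHIFT), 0);
}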

