Re: [Xen-devel] [PATCH 7/7] xen: sched_rt: print useful affinity info when dumping
On 03/16/2015 05:05 PM, Dario Faggioli wrote:
> In fact, printing the cpupool's CPU online mask
> for each vCPU is just redundant, as that is the
> same for all the vCPUs of all the domains in the
> same cpupool, while hard affinity is already part
> of the output of dumping domains info.
>
> Instead, print the intersection between hard
> affinity and online CPUs, which is --in case of this
> scheduler-- the effective affinity always used for
> the vCPUs.
>
> This change also takes the chance to add a scratch
> cpumask, to avoid having to create one more
> cpumask_var_t on the stack of the dumping routine.
> Such a scratch area can be used to kill most of the
> cpumask_var_t local variables in other functions
> in the file, but that is *NOT* done in this change.
>
> Finally, convert the file to use keyhandler scratch,
> instead of open coded string buffers.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> Cc: Meng Xu <xumengpanda@xxxxxxxxx>
> Cc: Jan Beulich <JBeulich@xxxxxxxx>
> Cc: Keir Fraser <keir@xxxxxxx>
> ---
> xen/common/sched_rt.c | 40 ++++++++++++++++++++++++++++++++--------
> 1 file changed, 32 insertions(+), 8 deletions(-)
>
> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 167a484..7aa3ac8 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -124,6 +124,12 @@
> #define TRC_RTDS_BUDGET_REPLENISH TRC_SCHED_CLASS_EVT(RTDS, 4)
> #define TRC_RTDS_SCHED_TASKLET TRC_SCHED_CLASS_EVT(RTDS, 5)
>
> + /*
> + * Useful to avoid too many cpumask_var_t on the stack.
> + */
> +static cpumask_t **_cpumask_scratch;
> +#define cpumask_scratch _cpumask_scratch[smp_processor_id()]
> +
> /*
> * Systme-wide private data, include global RunQueue/DepletedQ
> * Global lock is referenced by schedule_data.schedule_lock from all
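
(Side note: I'm assuming the per-pCPU entries of _cpumask_scratch get
allocated and freed elsewhere in this patch; those hunks aren't quoted
here, so the snippet below is only my guess at the kind of setup this
implies, with a made-up function name:)

    /* Hypothetical sketch only; name and placement are not from this patch. */
    static int rt_scratch_alloc(unsigned int cpu)
    {
        if ( _cpumask_scratch == NULL )
        {
            _cpumask_scratch = xzalloc_array(cpumask_t *, nr_cpu_ids);
            if ( _cpumask_scratch == NULL )
                return -ENOMEM;
        }

        /* One scratch mask per pCPU, indexed via smp_processor_id() above. */
        _cpumask_scratch[cpu] = xzalloc(cpumask_t);
        return _cpumask_scratch[cpu] ? 0 : -ENOMEM;
    }
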
> @@ -218,7 +224,6 @@ __q_elem(struct list_head *elem)
> static void
> rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
> {
> -    char cpustr[1024];
>     cpumask_t *cpupool_mask;
>
>     ASSERT(svc != NULL);
> @@ -229,10 +234,22 @@ rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
>         return;
>     }
>
> -    cpumask_scnprintf(cpustr, sizeof(cpustr), svc->vcpu->cpu_hard_affinity);
> +    cpupool_mask = cpupool_scheduler_cpumask(svc->vcpu->domain->cpupool);
> +    /*
> +     * We can't just use 'cpumask_scratch' because the dumping can
> +     * happen from a pCPU outside of this scheduler's cpupool, and
> +     * hence it's not right to use the pCPU's scratch mask (which
> +     * may even not exist!). On the other hand, it is safe to use
> +     * svc->vcpu->processor's own scratch space, since we own the
> +     * runqueue lock.
Since we *hold* the lock.
> +     */
> +    cpumask_and(_cpumask_scratch[svc->vcpu->processor], cpupool_mask,
> +                svc->vcpu->cpu_hard_affinity);
> +    cpulist_scnprintf(keyhandler_scratch, sizeof(keyhandler_scratch),
> +                      _cpumask_scratch[svc->vcpu->processor]);
Just a suggestion: would it be worth introducing a local variable to avoid
typing this long expression twice? Then you could also put the comment about
using svc->vcpu->processor's scratch space above the place where you set the
local variable, while avoiding breaking up the logic of the cpumask
operations.
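Something like the following, perhaps (untested, just sketching the idea with
the names already used in the hunk above):

    cpumask_t *mask = _cpumask_scratch[svc->vcpu->processor];

    /* Using svc->vcpu->processor's scratch space is safe: we hold the lock. */
    cpumask_and(mask, cpupool_mask, svc->vcpu->cpu_hard_affinity);
    cpulist_scnprintf(keyhandler_scratch, sizeof(keyhandler_scratch), mask);

That would also let each call fit on a single line.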
Other than that, looks good!
-George
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel