Re: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()
On 02.08.2022 15:27, Juergen Gross wrote:
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
> return ret;
> }
>
> -void domain_update_node_affinity(struct domain *d)
> +void domain_update_node_affinity_noalloc(struct domain *d,
> + const cpumask_t *online,
> + struct affinity_masks *affinity)
> {
> - cpumask_var_t dom_cpumask, dom_cpumask_soft;
> cpumask_t *dom_affinity;
> - const cpumask_t *online;
> struct sched_unit *unit;
> unsigned int cpu;
>
> - /* Do we have vcpus already? If not, no need to update node-affinity. */
> - if ( !d->vcpu || !d->vcpu[0] )
> - return;
> -
> - if ( !zalloc_cpumask_var(&dom_cpumask) )
> - return;
> - if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
> - {
> - free_cpumask_var(dom_cpumask);
> - return;
> - }
Instead of splitting the function, did you consider using
cond_zalloc_cpumask_var() here, thus allowing (but not requiring)
callers to pre-allocate the masks? That would imo mean quite a bit
less code churn (I think).
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
> return ret;
> }
>
> +/* Update affinities of all domains in a cpupool. */
> +static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
> +{
> + if ( !alloc_cpumask_var(&masks->hard) )
> + return -ENOMEM;
> + if ( alloc_cpumask_var(&masks->soft) )
> + return 0;
> +
> + free_cpumask_var(masks->hard);
> + return -ENOMEM;
> +}
Wouldn't this be a nice general helper function, also usable from
outside of this CU?
As a nit - right now the only caller treats the return value as boolean,
so perhaps the function would better return bool?
Jan