Re: [Xen-devel] [PATCH RFC 30/49] xen: let vcpu_create() select processor
On 29/03/2019 15:09, Juergen Gross wrote:
> Today there are two distinct scenarios for vcpu_create(): either for
> creation of idle-domain vcpus (vcpuid == processor) or for creation of
> "normal" domain vcpus (including dom0), where the caller selects the
> initial processor on a round-robin scheme of the allowed processors
> (allowed being based on cpupool and affinities).
>
> Instead of passing the initial processor to vcpu_create() and passing
> on to sched_init_vcpu() let sched_init_vcpu() do the processor
> selection. For supporting dom0 vcpu creation use the node_affinity of
> the domain as a base for selecting the processors. User domains will
> have initially all nodes set, so this is no different behavior compared
> to today.
>
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
Good riddance to the parameter! This will definitely simplify some of my
further domcreate changes.
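(For reference: with this change, vcpu_create() presumably ends up as
something like

    struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id);

i.e. the current form minus the pcpu parameter - a sketch inferred from
the hunks below rather than quoted from the patch.)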
> index d9836779d1..d5294b0d26 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1986,12 +1986,11 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
>      }
>  #endif
>
> -    for ( i = 1, cpu = 0; i < d->max_vcpus; i++ )
> +    for ( i = 1; i < d->max_vcpus; i++ )
>      {
> -        cpu = cpumask_cycle(cpu, &cpu_online_map);
> -        if ( vcpu_create(d, i, cpu) == NULL )
> +        if ( vcpu_create(d, i) == NULL )
>          {
> -            printk("Failed to allocate dom0 vcpu %d on pcpu %d\n", i, cpu);
> +            printk("Failed to allocate dom0 vcpu %d\n", i);
Mind adjusting this to d0v%u as it is changing anyway?
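i.e. something along the lines of (a sketch - assuming i is, or becomes,
unsigned to match the %u):

    printk("Failed to allocate d0v%u\n", i);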
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index ae2a6d0323..9b5527c1eb 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -318,14 +318,40 @@ static struct sched_item *sched_alloc_item(struct vcpu *v)
>      return NULL;
>  }
>
> -int sched_init_vcpu(struct vcpu *v, unsigned int processor)
> +static unsigned int sched_select_initial_cpu(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    nodeid_t node;
> +    cpumask_t cpus;
> +
> +    cpumask_clear(&cpus);
> +    for_each_node_mask ( node, d->node_affinity )
> +        cpumask_or(&cpus, &cpus, &node_to_cpumask(node));
> +    cpumask_and(&cpus, &cpus, cpupool_domain_cpumask(d));
> +    if ( cpumask_empty(&cpus) )
> +        cpumask_copy(&cpus, cpupool_domain_cpumask(d));
> +
> +    if ( v->vcpu_id == 0 )
> +        return cpumask_first(&cpus);
> +
> +    /* We can rely on previous vcpu being available. */
Only if you ASSERT(!is_idle_domain(d)), which is safe given the sole caller.
idle->vcpu[] can be sparse in some corner cases.
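Roughly what I have in mind (a sketch; the assertion sits right before
the look-back it guards):

    /* idle->vcpu[] can be sparse, so the look-back below is unsafe there. */
    ASSERT(!is_idle_domain(d));

    /* We can rely on previous vcpu being available. */
    return cpumask_cycle(d->vcpu[v->vcpu_id - 1]->processor, &cpus);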
Ideally with both of these suggestions addressed, Acked-by: Andrew Cooper
<andrew.cooper3@xxxxxxxxxx>
> +    return cpumask_cycle(d->vcpu[v->vcpu_id - 1]->processor, &cpus);
> +}
> +
> +int sched_init_vcpu(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
>      struct sched_item *item;
> +    unsigned int processor;
>
>      if ( (item = sched_alloc_item(v)) == NULL )
>          return 1;
>
> +    if ( is_idle_domain(d) || d->is_pinned )
> +        processor = v->vcpu_id;
> +    else
> +        processor = sched_select_initial_cpu(v);
> +
>      sched_set_res(item, per_cpu(sched_res, processor));
>
>      /* Initialise the per-vcpu timers. */