Re: [Xen-devel] [PATCH v3 32/47] xen/sched: support allocating multiple vcpus into one sched unit
On 14.09.2019 10:52, Juergen Gross wrote:
> @@ -366,18 +380,38 @@ static void sched_free_unit(struct sched_unit *unit)
>      xfree(unit);
>  }
>  
> +static void sched_unit_add_vcpu(struct sched_unit *unit, struct vcpu *v)
> +{
> +    v->sched_unit = unit;
> +    if ( !unit->vcpu_list || unit->vcpu_list->vcpu_id > v->vcpu_id )

Is the right side needed? Aren't vCPU-s created in increasing order of
their IDs, and aren't we relying on this elsewhere too?

> +    {
> +        unit->vcpu_list = v;
> +        unit->unit_id = v->vcpu_id;

This makes for a pretty strange set of IDs (non-successive), and explains
why patch 24 uses a local "unit_idx" instead of switching from v->vcpu_id
as array index to unit->unit_id. Is there a reason you don't divide by the
granularity here, eliminating the division done e.g. ...

> +    }
> +    unit->runstate_cnt[v->runstate.state]++;
> +}
> +
>  static struct sched_unit *sched_alloc_unit(struct vcpu *v)
>  {
>      struct sched_unit *unit, **prev_unit;
>      struct domain *d = v->domain;
>  
> +    for_each_sched_unit ( d, unit )
> +        if ( unit->vcpu_list->vcpu_id / sched_granularity ==

... here. (I also don't see why you don't use unit->unit_id here.)

> @@ -622,9 +659,16 @@ void sched_destroy_vcpu(struct vcpu *v)
>      kill_timer(&v->poll_timer);
>      if ( test_and_clear_bool(v->is_urgent) )
>          atomic_dec(&per_cpu(sched_urgent_count, v->processor));
> -    sched_remove_unit(vcpu_scheduler(v), unit);
> -    sched_free_vdata(vcpu_scheduler(v), unit->priv);
> -    sched_free_unit(unit);
> +    /*
> +     * Vcpus are being destroyed top-down. So being the first vcpu of an unit
> +     * is the same as being the only one.
> +     */
> +    if ( unit->vcpu_list == v )

Interestingly here you rely on there being a certain order.

Jan
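To make the suggestion above concrete, here is a minimal, self-contained
sketch of the ID scheme being hinted at: derive unit_id by dividing vcpu_id
by the scheduling granularity when the first vCPU joins a unit, so unit IDs
come out successive (0, 1, 2, ...) and later lookups can compare unit_id
directly instead of re-dividing vcpu_list->vcpu_id. The struct layouts, the
SCHED_GRANULARITY constant, and the sched_find_unit() helper are simplified
stand-ins for illustration only, not the actual Xen code.

/*
 * Illustrative sketch only (not the real Xen implementation): assign
 * unit_id = vcpu_id / granularity when the first vCPU is added, making
 * unit IDs successive and usable directly in lookups.
 */
#include <stddef.h>

#define SCHED_GRANULARITY 2   /* example value: vCPUs per scheduling unit */

struct sched_unit;

struct vcpu {
    unsigned int vcpu_id;
    struct sched_unit *sched_unit;
};

struct sched_unit {
    unsigned int unit_id;
    struct vcpu *vcpu_list;        /* first (lowest-ID) vCPU in the unit */
    struct sched_unit *next;
};

static void sched_unit_add_vcpu(struct sched_unit *unit, struct vcpu *v)
{
    v->sched_unit = unit;
    /*
     * vCPUs are created in increasing ID order, so the first vCPU added
     * is the one with the lowest ID; no extra comparison is needed.
     */
    if ( !unit->vcpu_list )
    {
        unit->vcpu_list = v;
        unit->unit_id = v->vcpu_id / SCHED_GRANULARITY;
    }
}

/* Find the unit a vCPU belongs to, comparing unit_id directly. */
static struct sched_unit *sched_find_unit(struct sched_unit *units,
                                          const struct vcpu *v)
{
    struct sched_unit *unit;

    for ( unit = units; unit; unit = unit->next )
        if ( unit->unit_id == v->vcpu_id / SCHED_GRANULARITY )
            return unit;

    return NULL;
}

With successive IDs, unit_id could also serve directly as an array index,
which is what the remark about patch 24's local "unit_idx" alludes to.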