Re: [PATCH v3 1/8] xen/cpupool: support moving domain between cpupools with different granularity
On 17.12.20 08:49, Jan Beulich wrote:
> On 16.12.2020 18:52, Dario Faggioli wrote:
>> On Wed, 2020-12-09 at 17:09 +0100, Juergen Gross wrote:
>>> When moving a domain between cpupools with different scheduling
>>> granularity the sched_units of the domain need to be adjusted.
>>>
>>> Do that by allocating new sched_units and throwing away the old ones
>>> in sched_move_domain().
>>>
>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>
>> This looks fine, and can have:
>>
>> Reviewed-by: Dario Faggioli <dfaggioli@xxxxxxxx>
>>
>> I would only have one request. It's not a huge deal, and probably not
>> worth a resend only for that, but if either you or the committer are
>> up for complying with that in whatever way you find the most
>> suitable, that would be great.
>
> I'd certainly be okay making this adjustment while committing, as long
> as Jürgen agrees. With ...
>
>> I.e., can we...
>>
>>> ---
>>>  xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++----------
>>>  1 file changed, 90 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>>> index a429fc7640..2a61c879b3 100644
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>> [...]
>>> -    old_ops = dom_scheduler(d);
>>>      old_domdata = d->sched_priv;
>>
>> Move *here* (i.e., above this new call to cpumask_first()) the
>> comment that is currently inside the loop?
>>
>>> +    new_p = cpumask_first(d->cpupool->cpu_valid);
>>>      for_each_sched_unit ( d, unit )
>>>      {
>>> +        spinlock_t *lock;
>>> +
>>> +        /*
>>> +         * Temporarily move all units to same processor to make locking
>>> +         * easier when moving the new units to the new processors.
>>> +         */
>>
>> This one here, basically ^^^
>
> ... this comment moved out of here, I'd be tempted to suggest to make
> ...
>
>> +        lock = unit_schedule_lock_irq(unit);
>
> ... this the variable's initializer then at the same time.

Fine with me.


Juergen
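For illustration, a minimal sketch of how that part of sched_move_domain()
might read with both suggestions folded in: the comment hoisted above the
cpumask_first() call, and lock taking unit_schedule_lock_irq() as its
initializer. The rest of the per-unit loop body is elided, and the
spin_unlock_irq() pairing is assumed to follow the existing code in this
function rather than being quoted from the patch:

    old_domdata = d->sched_priv;

    /*
     * Temporarily move all units to same processor to make locking
     * easier when moving the new units to the new processors.
     */
    new_p = cpumask_first(d->cpupool->cpu_valid);
    for_each_sched_unit ( d, unit )
    {
        /* Declaration and lock acquisition merged, per Jan's suggestion. */
        spinlock_t *lock = unit_schedule_lock_irq(unit);

        /* ... move the unit to new_p (elided) ... */

        spin_unlock_irq(lock);
    }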