Re: [PATCH] x86: make "dom0_nodes=" work with credit2
On 12.04.2022 18:14, Dario Faggioli wrote:
> On Tue, 2022-04-12 at 16:11 +0000, Dario Faggioli wrote:
>> On Tue, 2022-04-12 at 15:48 +0000, Dario Faggioli wrote:
>>> On Fri, 2022-04-08 at 14:36 +0200, Jan Beulich wrote:
>>>
>>>
>>> And while doing that, I think we should consolidate touching the
>>> affinity only there, avoiding altering it twice. After all, we already
>>> know what it should look like, so let's go for it.
>>>
>>> I'll send a patch to that effect, to show what I mean with this.
>>>
>> Here it is.
>>
> And here's Jan's patch, ported on top of it.
>
> As for the one before, let me know what you think.
> ---
> From: Dario Faggioli <dfaggioli@xxxxxxxx>
> Subject: [PATCH 2/2] (Kind of) rebase of "x86: make "dom0_nodes=" work with
> credit2"
>
> i.e., Jan's patch, on top of the commit that unifies the affinity
> handling for dom0 vCPUs.
>
> Although no longer technically necessary for fixing the issue at hand,
> I think it still makes sense to have it in the code.
I don't think so. My suggestion in this regard was made only because, with
my patch, adjustments to affinity remained in sched_setup_dom0_vcpus(). With
them all gone, I don't see why vCPUs might still need migrating. In fact I
would consider such an "adjustment" here to be papering over issues
elsewhere.
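(To make that argument concrete, here is a small self-contained model of it.
This is purely illustrative, not Xen code: the mask handling and the helper
names below are made up for the example.)

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

struct vcpu_model {
    unsigned int cpu;              /* CPU the vCPU currently sits on */
    bool hard_affinity[NR_CPUS];   /* allowed CPUs, e.g. from "dom0_nodes=" */
};

/* Pick the initial CPU from the affinity set at creation time. */
static unsigned int select_initial_cpu(const bool affinity[NR_CPUS])
{
    for ( unsigned int c = 0; c < NR_CPUS; c++ )
        if ( affinity[c] )
            return c;
    return 0; /* an empty mask would be a bug upstream of this model */
}

static bool needs_migration(const struct vcpu_model *v)
{
    return !v->hard_affinity[v->cpu];
}

int main(void)
{
    /* Toy setup: dom0 confined to CPUs 4-7. */
    struct vcpu_model v = { .hard_affinity = { [4] = true, [5] = true,
                                               [6] = true, [7] = true } };

    /* Creation-time placement already respects the affinity... */
    v.cpu = select_initial_cpu(v.hard_affinity);

    /* ...so a later blanket migration pass can only find "no" here. */
    printf("needs migration: %s\n", needs_migration(&v) ? "yes" : "no");
    return 0;
}

Compiled and run, this prints "needs migration: no": with the affinity
applied at creation, a later pass has nothing left to fix.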
As a result, if you think it's still wanted, ...
> Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
> ---
> * Changelog is so much RFC that it's not even a changelog... Jan, if we go
>   ahead with this approach, let me know how you prefer to handle the
>   authorship, the S-o-b, etc., of this patch.
>
> I believe it should be From: you, with my S-o-b added after yours, but
> I'm fine being the author, if you don't want to.
... I think you should be considered the author. It would make sense
for me to be the author only if affinity adjustments remained in the
function after your earlier patch.
Jan
> ---
> xen/common/sched/core.c | 18 ++++++++++++++----
> 1 file changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index dc2ed890e0..e11acd7b88 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -3416,6 +3416,7 @@ void wait(void)
>  void __init sched_setup_dom0_vcpus(struct domain *d)
>  {
>      unsigned int i;
> +    struct sched_unit *unit;
> 
>      for ( i = 1; i < d->max_vcpus; i++ )
>          vcpu_create(d, i);
> @@ -3423,11 +3424,20 @@ void __init sched_setup_dom0_vcpus(struct domain *d)
>      /*
>       * sched_vcpu_init(), called by vcpu_create(), will setup the hard and
>       * soft affinity of all the vCPUs, by calling sched_set_affinity() on each
> -     * one of them. We can now make sure that the domain's node affinity is
> -     * also updated accordingly, and we can do that here, once and for all
> -     * (which is more efficient than calling domain_update_node_affinity()
> -     * on all the vCPUs).
> +     * one of them. What's remaining for us to do here is:
> +     *  - make sure that the vCPUs are actually migrated to suitable CPUs
> +     *  - update the domain's node affinity (and we can do that here, once and
> +     *    for all, as it's more efficient than calling
> +     *    domain_update_node_affinity() on all the vCPUs).
>       */
> +    for_each_sched_unit ( d, unit )
> +    {
> +        spinlock_t *lock = unit_schedule_lock_irq(unit);
> +        sched_unit_migrate_start(unit);
> +        unit_schedule_unlock_irq(lock, unit);
> +        sched_unit_migrate_finish(unit);
> +    }
> +
>      domain_update_node_affinity(d);
>  }
>  #endif
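(A note on the hunk above, with comments that reflect my reading of the code
rather than anything stated in the patch: the migration is requested while
the unit's schedule lock is held and completed only after the lock is
dropped, matching the sequence used elsewhere in sched/core.c, e.g. on the
affinity-setting paths.)

    for_each_sched_unit ( d, unit )
    {
        /* Take the per-unit scheduler lock, with interrupts disabled. */
        spinlock_t *lock = unit_schedule_lock_irq(unit);

        /* Request the migration while the lock is held. */
        sched_unit_migrate_start(unit);
        unit_schedule_unlock_irq(lock, unit);

        /* Complete the migration only after dropping the lock. */
        sched_unit_migrate_finish(unit);
    }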