Re: [Xen-devel] [PATCH v3 07/15] x86/IRQ: target online CPUs when binding guest IRQ
On Fri, May 17, 2019 at 04:48:21AM -0600, Jan Beulich wrote:
> fixup_irqs() skips interrupts without action. Hence such interrupts can
> retain affinity to just offline CPUs. With "noirqbalance" in effect,
> pirq_guest_bind() so far would have left them alone, resulting in a non-
> working interrupt.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v3: New.
> ---
> I've not observed this problem in practice - the change is just the
> result of code inspection after having noticed action-less IRQs in 'i'
> debug key output pointing at all parked/offline CPUs.
>
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -1683,9 +1683,27 @@ int pirq_guest_bind(struct vcpu *v, stru
>
> desc->status |= IRQ_GUEST;
>
> - /* Attempt to bind the interrupt target to the correct CPU. */
> - if ( !opt_noirqbalance && (desc->handler->set_affinity != NULL) )
> - desc->handler->set_affinity(desc, cpumask_of(v->processor));
> + /*
> + * Attempt to bind the interrupt target to the correct (or at least
> + * some online) CPU.
> + */
> + if ( desc->handler->set_affinity )
> + {
> + const cpumask_t *affinity = NULL;
> +
> + if ( !opt_noirqbalance )
> + affinity = cpumask_of(v->processor);
> + else if ( !cpumask_intersects(desc->affinity, &cpu_online_map) )
> + {
> + cpumask_setall(desc->affinity);
> + affinity = &cpumask_all;
> + }
> + else if ( !cpumask_intersects(desc->arch.cpu_mask,
> + &cpu_online_map) )
I'm not sure I see the purpose of the desc->arch.cpu_mask check;
wouldn't it be better to just use else and set the affinity to
desc->affinity?

Or is it just an optimization to avoid doing the set_affinity call if
the interrupt is already bound to an online CPU?
Thanks, Roger.
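
A condensed, standalone model of the selection logic under discussion
may help here. This is only a sketch: the names opt_noirqbalance,
desc->affinity, desc->arch.cpu_mask and cpu_online_map mirror the
quoted hunk, but cpumask_t is reduced to a plain bitmask, and since the
quoted diff is cut off after the desc->arch.cpu_mask test, the body of
that branch is assumed (per the question above) to fall back to
desc->affinity.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long cpumask_t;   /* toy cpumask: one bit per CPU */
#define CPUMASK_ALL (~0UL)

static bool intersects(cpumask_t a, cpumask_t b) { return (a & b) != 0; }

/* Return the mask to hand to ->set_affinity(), or 0 to skip the call. */
static cpumask_t pick_affinity(bool noirqbalance, cpumask_t vcpu_cpu,
                               cpumask_t *desc_affinity,
                               cpumask_t arch_cpu_mask, cpumask_t online)
{
    if ( !noirqbalance )
        return vcpu_cpu;                  /* bind to the vCPU's pCPU */
    if ( !intersects(*desc_affinity, online) )
    {
        *desc_affinity = CPUMASK_ALL;     /* stale affinity: widen it */
        return CPUMASK_ALL;
    }
    if ( !intersects(arch_cpu_mask, online) )
        return *desc_affinity;            /* assumed branch body */
    return 0;                             /* already targets an online CPU */
}

int main(void)
{
    cpumask_t aff = 0x2;                  /* affinity: CPU1 only... */
    cpumask_t online = 0x1;               /* ...but only CPU0 is online */
    cpumask_t m = pick_affinity(true, 0x1, &aff, 0x2, online);

    printf("set_affinity mask %#lx, desc->affinity now %#lx\n", m, aff);
    return 0;
}

With "noirqbalance" in effect and an affinity mask covering only
offline CPUs, this prints ~0 for both values, i.e. the previously
non-working interrupt gets re-targeted. The final "else if" only
matters when desc->affinity still contains an online CPU but the
currently programmed target (desc->arch.cpu_mask) does not, which is
what the question above is probing at.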
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel