Re: [PATCH v2 7/7] x86/irq: forward pending interrupts to new destination in fixup_irqs()
On 12.06.2024 13:23, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 03:50:42PM +0200, Jan Beulich wrote:
>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>> @@ -2649,6 +2649,25 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>>>              !cpumask_test_cpu(cpu, &cpu_online_map) &&
>>>              cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
>>>          {
>>> +            /*
>>> +             * This to be offlined CPU was the target of an interrupt that's
>>> +             * been moved, and the new destination target hasn't yet
>>> +             * acknowledged any interrupt from it.
>>> +             *
>>> +             * We know the interrupt is configured to target the new CPU at
>>> +             * this point, so we can check IRR for any pending vectors and
>>> +             * forward them to the new destination.
>>> +             *
>>> +             * Note the difference between move_in_progress or
>>> +             * move_cleanup_count being set.  For the latter we know the new
>>> +             * destination has already acked at least one interrupt from this
>>> +             * source, and hence there's no need to forward any stale
>>> +             * interrupts.
>>> +             */
>>
>> I'm a little confused by this last paragraph: It talks about a difference,
>> yet ...
>>
>>> +            if ( apic_irr_read(desc->arch.old_vector) )
>>> +                send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
>>> +                              desc->arch.vector);
>>
>> ... in the code being commented there's no difference visible. Hmm, I guess
>> this is related to the enclosing if(). Maybe this could be worded a little
>> differently, e.g. starting with "Note that for the other case -
>> move_cleanup_count being non-zero - we know ..."?
>
> Hm, I see.  Yes, the difference is that for interrupts that have
> move_cleanup_count set we don't forward pending interrupts in IRR on
> this CPU.  I put this here because I think it's more naturally
> arranged with the rest of the comment.  I can pull the whole comment
> ahead of the if() if that's better.

I actually agree with you that the placement right now is "more natural".
I'm really just after making clearer what difference it is that is being
talked about.  Assuming of course ...

>>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
>>> +            check_irr = true;
>>> +
>>>          if ( desc->handler->set_affinity )
>>>              desc->handler->set_affinity(desc, affinity);
>>>          else if ( !(warned++) )
>>>              set_affinity = false;
>>>
>>> +        if ( check_irr && apic_irr_read(vector) )
>>> +            /*
>>> +             * Forward pending interrupt to the new destination, this CPU is
>>> +             * going offline and otherwise the interrupt would be lost.
>>> +             */
>>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
>>> +                          desc->arch.vector);
>>> +
>>>          if ( desc->handler->enable )
>>>              desc->handler->enable(desc);
>>>
>>
>> Down from here, after the loop, there's a 1ms window where latched but not
>> yet delivered interrupts can be received. How's that playing together with
>> the changes you're making? Aren't we then liable to get two interrupts, one
>> at the old and one at the new source, in unknown order?
>
> I was mistakenly thinking that clear_local_APIC() would block
> interrupt delivery, but that's not the case, so yes, interrupts should
> still be delivered in the window below.
>
> Let me test without this last patch.

... the patch wants / needs retaining in the first place.

Jan
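
For readers following the thread: the pattern under discussion boils down to
checking the to-be-offlined CPU's local APIC IRR for the vector the interrupt
used to occupy, and, if a bit is found pending, re-injecting the interrupt at
its new destination using the new vector. Below is a minimal, self-contained
C sketch of that check-and-forward decision. It is an illustration only:
fake_irr is a made-up 256-bit model of the IRR, and apic_irr_read() /
send_IPI_mask() are simplified stand-ins whose signatures differ from the
real Xen helpers (the real send_IPI_mask() takes a cpumask, not a bare CPU
id).

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NR_VECTORS 256

    /* Made-up model of the local APIC IRR: one bit per vector. */
    static uint32_t fake_irr[NR_VECTORS / 32];

    /* Stand-in for Xen's apic_irr_read(): is 'vector' pending in IRR? */
    static bool apic_irr_read(unsigned int vector)
    {
        return fake_irr[vector / 32] & (1u << (vector % 32));
    }

    /* Stand-in for send_IPI_mask(): just log the re-injected vector. */
    static void send_IPI_mask(unsigned int dest_cpu, unsigned int vector)
    {
        printf("forwarding vector %#x to CPU%u\n", vector, dest_cpu);
    }

    int main(void)
    {
        unsigned int old_vector = 0x30;  /* vector on the CPU going offline */
        unsigned int new_vector = 0x40;  /* vector at the new destination */
        unsigned int new_cpu = 2;        /* CPU the interrupt was moved to */

        /* Simulate an interrupt latched on the old CPU, not yet delivered. */
        fake_irr[old_vector / 32] |= 1u << (old_vector % 32);

        /*
         * The decision from the patch: if the old vector is still pending
         * on the to-be-offlined CPU, re-inject the interrupt at the new
         * destination with the new vector, so it isn't lost across the move.
         */
        if ( apic_irr_read(old_vector) )
            send_IPI_mask(new_cpu, new_vector);

        return 0;
    }

Note this sketch only covers the move_in_progress case debated above; when
move_cleanup_count is set instead, the new destination has already acked an
interrupt from this source, so no re-injection is needed.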