
Re: [PATCH v5 2/5] arm/irq: Migrate IRQs during CPU up/down operations


  • To: Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Thu, 5 Feb 2026 14:07:02 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 05 Feb 2026 14:08:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Mykyta,

> On 5 Feb 2026, at 14:23, Mykyta Poturai <Mykyta_Poturai@xxxxxxxx> wrote:
> 
> On 04.02.26 16:20, Bertrand Marquis wrote:
>> Hi Mykyta.
>> 
>>> On 13 Jan 2026, at 09:45, Mykyta Poturai <Mykyta_Poturai@xxxxxxxx> wrote:
>>> 
>>> Move IRQs from a dying CPU to the online ones when a CPU is being
>>> offlined. When onlining, rebalance all IRQs in a round-robin fashion.
>>> Guest-bound IRQs are already handled by the scheduler in the process of
>>> moving vCPUs to active pCPUs, so we only need to handle IRQs used by Xen
>>> itself.
>>> 
>>> Signed-off-by: Mykyta Poturai <mykyta_poturai@xxxxxxxx>
>>> ---
>>> v4->v5:
>>> * handle CPU onlining as well
>>> * more comments
>>> * fix crash when ESPI is disabled
>>> * don't assume CPU 0 is the boot CPU
>>> * use unsigned int for the irq number
>>> * remove the assumption that all irqs are bound to CPU 0 by default from
>>>   the commit message
>>> 
>>> v3->v4:
>>> * patch introduced
>>> ---
>>> xen/arch/arm/include/asm/irq.h |  2 ++
>>> xen/arch/arm/irq.c             | 54 ++++++++++++++++++++++++++++++++++
>>> xen/arch/arm/smpboot.c         |  6 ++++
>>> 3 files changed, 62 insertions(+)
>>> 
>>> diff --git a/xen/arch/arm/include/asm/irq.h b/xen/arch/arm/include/asm/irq.h
>>> index 09788dbfeb..a0250bac85 100644
>>> --- a/xen/arch/arm/include/asm/irq.h
>>> +++ b/xen/arch/arm/include/asm/irq.h
>>> @@ -126,6 +126,8 @@ bool irq_type_set_by_domain(const struct domain *d);
>>> void irq_end_none(struct irq_desc *irq);
>>> #define irq_end_none irq_end_none
>>> 
>>> +void rebalance_irqs(unsigned int from, bool up);
>>> +
>>> #endif /* _ASM_HW_IRQ_H */
>>> /*
>>>  * Local variables:
>>> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
>>> index 7204bc2b68..a32dc729f8 100644
>>> --- a/xen/arch/arm/irq.c
>>> +++ b/xen/arch/arm/irq.c
>>> @@ -158,6 +158,58 @@ static int init_local_irq_data(unsigned int cpu)
>>>     return 0;
>>> }
>>> 
>>> +static int cpu_next;
>>> +
>>> +static void balance_irq(int irq, unsigned int from, bool up)
>>> +{
>>> +    struct irq_desc *desc = irq_to_desc(irq);
>>> +    unsigned long flags;
>>> +
>>> +    ASSERT(!cpumask_empty(&cpu_online_map));
>>> +
>>> +    spin_lock_irqsave(&desc->lock, flags);
>>> +    if ( likely(!desc->action) )
>>> +        goto out;
>>> +
>>> +    if ( likely(test_bit(_IRQ_GUEST, &desc->status) ||
>>> +                test_bit(_IRQ_MOVE_PENDING, &desc->status)) )
>>> +        goto out;
>>> +
>>> +    /*
>>> +     * Setting affinity to a mask of multiple CPUs causes the GIC drivers to
>>> +     * select one CPU from that mask. If the dying CPU was included in the IRQ's
>>> +     * affinity mask, we cannot determine exactly which CPU the interrupt is
>>> +     * currently routed to, as GIC drivers lack a concrete get_affinity API. So
>>> +     * to be safe we must reroute it to a new, definitely online, CPU. In the
>>> +     * case of CPU going down, we move only the interrupt that could reside on
>>> +     * it. Otherwise, we rearrange all interrupts in a round-robin fashion.
>>> +     */
>>> +    if ( !up && !cpumask_test_cpu(from, desc->affinity) )
>>> +        goto out;
>> 
>> I am a bit lost on what you are trying to do in the case where a CPU is
>> coming up: it feels like you are trying to change the affinity of all
>> interrupts and cycle everything. Is that really what is expected?
>> If affinity was set by a VM on its interrupts, I would not expect
>> Xen to round-robin everything each time a CPU comes up.
>> 
> 
> The idea is to evenly spread interrupts between CPUs when new ones are
> brought online. This is needed to prevent Xen-bound IRQs from piling up
> on CPU 0 when other cores are offlined and then brought back online. It
> shouldn't mess with guest affinities, as the code skips everything that
> is assigned to guests and leaves it to be handled by the scheduler/vgic.
> Performance-wise it should also be okay: from what I've seen, there are
> not many interrupts used by Xen, and I expect CPU hotplug operations to
> be fairly infrequent.
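
To make sure I read the intent correctly, here is a simplified, standalone
sketch of the selection logic (plain C, not the real Xen cpumask helpers;
the real code additionally bails out when cpu_online_map is empty):

/*
 * Standalone illustration only: round-robin selection of a target CPU
 * from the set of online CPUs, i.e. the idea the patch implements with
 * cpumask_cycle() + irq_set_affinity().
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

static bool cpu_online[NR_CPUS] = { true, true, true, true };
static unsigned int cpu_next;

/* Pick the next online CPU after cpu_next, wrapping around. */
static unsigned int next_online_cpu(void)
{
    do
        cpu_next = (cpu_next + 1) % NR_CPUS;
    while ( !cpu_online[cpu_next] );

    return cpu_next;
}

int main(void)
{
    unsigned int irq;

    /* Pretend CPU2 went offline: Xen-owned IRQs get spread over the rest. */
    cpu_online[2] = false;

    for ( irq = 32; irq < 40; irq++ )
        printf("IRQ %u -> CPU%u\n", irq, next_online_cpu());

    return 0;
}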

My fear here is that by removing and then re-adding a CPU we will
completely change IRQ affinities. I am not sure that kind of random
assignment is compatible with embedded or safety use cases, and there is
no way to configure this here.

@Julien, Stefano and Michal: What do you think here?

Cheers
Bertrand

> 
>>> +
>>> +    cpu_next = cpumask_cycle(cpu_next, &cpu_online_map);
>>> +    irq_set_affinity(desc, cpumask_of(cpu_next));
>>> +
>>> +out:
>>> +    spin_unlock_irqrestore(&desc->lock, flags);
>>> +}
>>> +
>>> +void rebalance_irqs(unsigned int from, bool up)
>>> +{
>>> +    int irq;
>>> +
>>> +    if ( cpumask_empty(&cpu_online_map) )
>>> +        return;
>>> +
>>> +    for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
>>> +        balance_irq(irq, from, up);
>>> +
>>> +#ifdef CONFIG_GICV3_ESPI
>>> +    for ( irq = ESPI_BASE_INTID; irq < ESPI_MAX_INTID; irq++ )
>>> +        balance_irq(irq, from, up);
>>> +#endif
>>> +}
>>> +
>>> static int cpu_callback(struct notifier_block *nfb, unsigned long action,
>>>                         void *hcpu)
>>> {
>>> @@ -172,6 +224,8 @@ static int cpu_callback(struct notifier_block *nfb, unsigned long action,
>>>             printk(XENLOG_ERR "Unable to allocate local IRQ for CPU%u\n",
>>>                    cpu);
>>>         break;
>>> +    case CPU_ONLINE:
>>> +        rebalance_irqs(cpu, true);
>>>     }
>>> 
>>>     return notifier_from_errno(rc);
>>> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
>>> index 7f3cfa812e..e1b9f94458 100644
>>> --- a/xen/arch/arm/smpboot.c
>>> +++ b/xen/arch/arm/smpboot.c
>>> @@ -425,6 +425,12 @@ void __cpu_disable(void)
>>> 
>>>     smp_mb();
>>> 
>>> +    /*
>>> +     * Now that the interrupts are cleared and the CPU marked as offline,
>>> +     * move interrupts out of it
>>> +     */
>>> +    rebalance_irqs(cpu, false);
>>> +
>> 
>> I would expect this to only be useful when HOTPLUG is enabled. Maybe we
>> could have a static inline doing nothing when HOTPLUG is not on, and
>> only do something here when it is?
>> 
> 
> Yes I will add this in the next version.
> 
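Something along those lines is what I had in mind. A minimal sketch of the
wrapper, assuming a CONFIG_HOTPLUG_CPU-style Kconfig guard (the name below is
only a placeholder, not necessarily the option we actually have):

/* asm/irq.h -- placeholder guard name, to be replaced by the real option */
#ifdef CONFIG_HOTPLUG_CPU
void rebalance_irqs(unsigned int from, bool up);
#else
static inline void rebalance_irqs(unsigned int from, bool up) {}
#endif

That way the call site in __cpu_disable() needs no #ifdef and the call
compiles to nothing when hotplug support is not built in.
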
>> Cheers
>> Bertrand
>> 
>>>     /* Return to caller; eventually the IPI mechanism will unwind and the
>>>      * scheduler will drop to the idle loop, which will call stop_cpu(). */
>>> }
>>> -- 
>>> 2.51.2
>> 
> 
> -- 
> Mykyta



 

