[PATCH for-4.19 6/9] x86/irq: restrict CPU movement in set_desc_affinity()
If external interrupts are using logical mode it's possible to have an
overlap between the current ->arch.cpu_mask and the provided mask (or
TARGET_CPUS).  If that's the case, avoid assigning a new vector and just
move the interrupt to a member of ->arch.cpu_mask that overlaps with the
provided mask and is online.

While there, also add an extra assert to ensure the mask containing the
possible destinations is not empty before calling cpu_mask_to_apicid(),
as at that point having an empty mask is not expected.

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/arch/x86/irq.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 1b7127090377..ae8fa574d4e7 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -846,19 +846,38 @@ void cf_check irq_complete_move(struct irq_desc *desc)
 
 unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
 {
-    int ret;
-    unsigned long flags;
     cpumask_t dest_mask;
 
     if ( mask && !cpumask_intersects(mask, &cpu_online_map) )
         return BAD_APICID;
 
-    spin_lock_irqsave(&vector_lock, flags);
-    ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
-    spin_unlock_irqrestore(&vector_lock, flags);
+    /*
+     * The mask input set can contain CPUs that are not online.  To decide
+     * whether the interrupt needs to be migrated, restrict the input mask
+     * to the CPUs that are online.
+     */
+    if ( mask )
+        cpumask_and(&dest_mask, mask, &cpu_online_map);
+    else
+        cpumask_copy(&dest_mask, TARGET_CPUS);
 
-    if ( ret < 0 )
-        return BAD_APICID;
+    /*
+     * Only move the interrupt if there are no CPUs left in ->arch.cpu_mask
+     * that can handle it, otherwise just shuffle it around ->arch.cpu_mask
+     * to an available destination.
+     */
+    if ( !cpumask_intersects(desc->arch.cpu_mask, &dest_mask) )
+    {
+        int ret;
+        unsigned long flags;
+
+        spin_lock_irqsave(&vector_lock, flags);
+        ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
+        spin_unlock_irqrestore(&vector_lock, flags);
+
+        if ( ret < 0 )
+            return BAD_APICID;
+    }
 
     if ( mask )
     {
@@ -871,6 +890,7 @@ unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
         cpumask_copy(&dest_mask, desc->arch.cpu_mask);
     }
     cpumask_and(&dest_mask, &dest_mask, &cpu_online_map);
+    ASSERT(!cpumask_empty(&dest_mask));
 
     return cpu_mask_to_apicid(&dest_mask);
 }
-- 
2.44.0