[PATCH v2 1/7] x86/smp: do not use shorthand IPI destinations in CPU hot{,un}plug contexts
Due to the current rwlock logic, if the CPU calling get_cpu_maps() does so
from a cpu_hotplug_{begin,done}() region the function will still return
success, because a CPU taking the rwlock in read mode after having taken it
in write mode is allowed.  Such behavior however defeats the purpose of
get_cpu_maps(), as it should always return false when called while a CPU
hot{,un}plug operation is in progress.  Otherwise the logic in
send_IPI_mask() is wrong, as it could decide to use the shorthand even when
a CPU operation is in progress.

Introduce a new helper to detect whether the current caller is within a
cpu_hotplug_{begin,done}() region and use it in send_IPI_mask() to restrict
shorthand usage.

Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Changes since v1:
 - Modify send_IPI_mask() to detect CPU hotplug context.
---
 xen/arch/x86/smp.c       |  2 +-
 xen/common/cpu.c         |  5 +++++
 xen/include/xen/cpu.h    | 10 ++++++++++
 xen/include/xen/rwlock.h |  2 ++
 4 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 7443ad20335e..04c6a0572319 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -88,7 +88,7 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
      * the system have been accounted for.
      */
     if ( system_state > SYS_STATE_smp_boot &&
-         !unaccounted_cpus && !disabled_cpus &&
+         !unaccounted_cpus && !disabled_cpus && !cpu_in_hotplug_context() &&
          /* NB: get_cpu_maps lock requires enabled interrupts. */
          local_irq_is_enabled() && (cpus_locked = get_cpu_maps()) &&
          (park_offline_cpus ||
diff --git a/xen/common/cpu.c b/xen/common/cpu.c
index 8709db4d2957..6e35b114c080 100644
--- a/xen/common/cpu.c
+++ b/xen/common/cpu.c
@@ -68,6 +68,11 @@ void cpu_hotplug_done(void)
     write_unlock(&cpu_add_remove_lock);
 }

+bool cpu_in_hotplug_context(void)
+{
+    return rw_is_write_locked_by_me(&cpu_add_remove_lock);
+}
+
 static NOTIFIER_HEAD(cpu_chain);

 void __init register_cpu_notifier(struct notifier_block *nb)
diff --git a/xen/include/xen/cpu.h b/xen/include/xen/cpu.h
index e1d4eb59675c..6bf578675008 100644
--- a/xen/include/xen/cpu.h
+++ b/xen/include/xen/cpu.h
@@ -13,6 +13,16 @@ void put_cpu_maps(void);
 void cpu_hotplug_begin(void);
 void cpu_hotplug_done(void);

+/*
+ * Returns true when the caller CPU is within a cpu_hotplug_{begin,done}()
+ * region.
+ *
+ * This is required to safely identify hotplug contexts, as get_cpu_maps()
+ * would otherwise succeed because a caller holding the lock in write mode is
+ * allowed to acquire the same lock in read mode.
+ */
+bool cpu_in_hotplug_context(void);
+
 /* Receive notification of CPU hotplug events. */
 void register_cpu_notifier(struct notifier_block *nb);

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index a2e98cad343e..4e7802821859 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -316,6 +316,8 @@ static always_inline void write_lock_irq(rwlock_t *l)

 #define rw_is_locked(l)               _rw_is_locked(l)
 #define rw_is_write_locked(l)         _rw_is_write_locked(l)
+#define rw_is_write_locked_by_me(l) \
+    lock_evaluate_nospec(_is_write_locked_by_me(atomic_read(&(l)->cnts)))

 typedef struct percpu_rwlock percpu_rwlock_t;

-- 
2.44.0