[xen staging] xen/softirq: Rework arch_skip_send_event_check() into arch_set_softirq()
commit b473e5e212e445d3c193c1c83b52b129af571b19
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Tue Jul 1 21:04:17 2025 +0100
Commit:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Fri Jul 4 19:03:32 2025 +0100

    xen/softirq: Rework arch_skip_send_event_check() into arch_set_softirq()

    x86 is the only architecture wanting an optimisation here, but the
    test_and_set_bit() is a store into the monitored line (i.e. will wake up
    the target) and, prior to the removal of the broken IPI-elision
    algorithm, was racy, causing unnecessary IPIs to be sent.

    To do this in a race-free way, the store to the monitored line needs to
    also sample the status of the target in one atomic action.  Implement a
    new arch helper with different semantics: make the softirq pending and
    decide about IPIs together.  For now, implement the default helper.  It
    will be overridden by x86 in a subsequent change.

    No functional change.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/arch/x86/acpi/cpu_idle.c       |  5 -----
 xen/arch/x86/include/asm/softirq.h |  2 --
 xen/common/softirq.c               |  8 ++------
 xen/include/xen/softirq.h          | 16 ++++++++++++++++
 4 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 07056a91a2..4f69df5a74 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -440,11 +440,6 @@ static int __init cf_check cpu_idle_key_init(void)
 }
 __initcall(cpu_idle_key_init);
 
-bool arch_skip_send_event_check(unsigned int cpu)
-{
-    return false;
-}
-
 void mwait_idle_with_hints(unsigned int eax, unsigned int ecx)
 {
     unsigned int cpu = smp_processor_id();
diff --git a/xen/arch/x86/include/asm/softirq.h b/xen/arch/x86/include/asm/softirq.h
index 415ee866c7..e4b194f069 100644
--- a/xen/arch/x86/include/asm/softirq.h
+++ b/xen/arch/x86/include/asm/softirq.h
@@ -9,6 +9,4 @@
 #define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
 #define NR_ARCH_SOFTIRQS       5
 
-bool arch_skip_send_event_check(unsigned int cpu);
-
 #endif /* __ASM_SOFTIRQ_H__ */
diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index 60f344e842..dc3aabce33 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -94,9 +94,7 @@ void cpumask_raise_softirq(const cpumask_t *mask, unsigned int nr)
         raise_mask = &per_cpu(batch_mask, this_cpu);
 
     for_each_cpu(cpu, mask)
-        if ( !test_and_set_bit(nr, &softirq_pending(cpu)) &&
-             cpu != this_cpu &&
-             !arch_skip_send_event_check(cpu) )
+        if ( !arch_set_softirq(nr, cpu) && cpu != this_cpu )
             __cpumask_set_cpu(cpu, raise_mask);
 
     if ( raise_mask == &send_mask )
@@ -107,9 +105,7 @@ void cpu_raise_softirq(unsigned int cpu, unsigned int nr)
 {
     unsigned int this_cpu = smp_processor_id();
 
-    if ( test_and_set_bit(nr, &softirq_pending(cpu))
-         || (cpu == this_cpu)
-         || arch_skip_send_event_check(cpu) )
+    if ( arch_set_softirq(nr, cpu) || cpu == this_cpu )
         return;
 
     if ( !per_cpu(batching, this_cpu) || in_irq() )
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index e9f79ec0ce..48f17e49ef 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -23,6 +23,22 @@ enum {
 
 #define NR_SOFTIRQS (NR_COMMON_SOFTIRQS + NR_ARCH_SOFTIRQS)
 
+/*
+ * Ensure softirq @nr is pending on @cpu.  Return true if an IPI can be
+ * skipped, false if the IPI cannot be skipped.
+ */
+#ifndef arch_set_softirq
+static always_inline bool arch_set_softirq(unsigned int nr, unsigned int cpu)
+{
+    /*
+     * Try to set the softirq pending.  If we set the bit (i.e. the old bit
+     * was 0), we're responsible to send the IPI.  If the softirq was already
+     * pending (i.e. the old bit was 1), no IPI is needed.
+     */
+    return test_and_set_bit(nr, &softirq_pending(cpu));
+}
+#endif
+
 typedef void (*softirq_handler)(void);
 
 void do_softirq(void);
--
generated by git-patchbot for /home/xen/git/xen.git#staging
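To see why the helper's return value maps onto "can the IPI be skipped", and how an
architecture could fold the target's idle status into the same atomic action as the
commit message describes, consider the following standalone sketch.  It is
illustrative only, not Xen code: SOFTIRQ_IN_MWAIT, NR_CPUS and the simulated
softirq_pending[] table are hypothetical stand-ins, and the real x86 override only
arrives in the subsequent change mentioned above.  A single atomic fetch-or both
sets the pending bit and samples the old word, so the sender learns in one step
whether the softirq was already pending or the target is idling in a monitored
wait (in which case the store itself wakes it).

    /*
     * Standalone sketch of the race-free idea: keep the target's
     * "idling in a monitored wait" status in the same word as the
     * softirq pending bits, so one atomic RMW both makes the softirq
     * pending and samples that status.  Hypothetical names throughout.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NR_CPUS          4
    #define SOFTIRQ_IN_MWAIT (1u << 31)  /* hypothetical: target is monitoring */

    static _Atomic unsigned int softirq_pending[NR_CPUS];

    /* Returns true if the IPI can be skipped, mirroring arch_set_softirq(). */
    static bool set_softirq_sample_target(unsigned int nr, unsigned int cpu)
    {
        /* One atomic action: set the pending bit AND observe the old word. */
        unsigned int old = atomic_fetch_or(&softirq_pending[cpu], 1u << nr);

        /*
         * Skip the IPI if the softirq was already pending (whoever set it
         * first is responsible for the wakeup), or if the target is in a
         * monitored wait: our store to the monitored word wakes it directly.
         */
        return (old & (1u << nr)) || (old & SOFTIRQ_IN_MWAIT);
    }

    int main(void)
    {
        atomic_store(&softirq_pending[2], SOFTIRQ_IN_MWAIT);

        printf("cpu1, first raise : skip IPI? %d\n",
               set_softirq_sample_target(0, 1));  /* 0: we must send the IPI */
        printf("cpu1, second raise: skip IPI? %d\n",
               set_softirq_sample_target(0, 1));  /* 1: already pending */
        printf("cpu2 (monitoring) : skip IPI? %d\n",
               set_softirq_sample_target(0, 2));  /* 1: the store woke it */
        return 0;
    }

Contrast this with the removed arch_skip_send_event_check(): sampling the target's
status separately from the test_and_set_bit() left a window in which the status
could change between the two reads, which is the race the commit message refers to.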