Re: [PATCH -next v5 08/22] arm64: entry: Use different helpers to check resched for PREEMPT_DYNAMIC
On Fri, Dec 06, 2024 at 06:17:30PM +0800, Jinjie Ruan wrote:
> In generic entry, when PREEMPT_DYNAMIC is enabled or disabled, two
> different helpers are used to check whether resched is required
> and some common code is reused.
>
> In preparation for moving arm64 over to the generic entry code,
> use new helper to check resched when PREEMPT_DYNAMIC enabled and
> reuse common code for the disabled case.
>
> No functional changes.

Please fold this together with the last two patches; it's undoing changes
you made in patch 6, and it'd be far clearer to see that all at once.

Mark.

>
> Signed-off-by: Jinjie Ruan <ruanjinjie@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/preempt.h |  3 +++
>  arch/arm64/kernel/entry-common.c | 21 +++++++++++----------
>  2 files changed, 14 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
> index d0f93385bd85..0f0ba250efe8 100644
> --- a/arch/arm64/include/asm/preempt.h
> +++ b/arch/arm64/include/asm/preempt.h
> @@ -93,11 +93,14 @@ void dynamic_preempt_schedule(void);
>  #define __preempt_schedule()		dynamic_preempt_schedule()
>  void dynamic_preempt_schedule_notrace(void);
>  #define __preempt_schedule_notrace()	dynamic_preempt_schedule_notrace()
> +void dynamic_irqentry_exit_cond_resched(void);
> +#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
>
>  #else /* CONFIG_PREEMPT_DYNAMIC */
>
>  #define __preempt_schedule()		preempt_schedule()
>  #define __preempt_schedule_notrace()	preempt_schedule_notrace()
> +#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
>
>  #endif /* CONFIG_PREEMPT_DYNAMIC */
>  #endif /* CONFIG_PREEMPTION */
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 029f8bd72f8a..015a65d19b52 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -75,10 +75,6 @@ static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
>  	return state;
>  }
>
> -#ifdef CONFIG_PREEMPT_DYNAMIC
> -DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
> -#endif
> -
>  static inline bool arm64_need_resched(void)
>  {
>  	/*
> @@ -106,17 +102,22 @@ static inline bool arm64_need_resched(void)
>
>  void raw_irqentry_exit_cond_resched(void)
>  {
> -#ifdef CONFIG_PREEMPT_DYNAMIC
> -	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
> -		return;
> -#endif
> -
>  	if (!preempt_count()) {
>  		if (need_resched() && arm64_need_resched())
>  			preempt_schedule_irq();
>  	}
>  }
>
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
> +void dynamic_irqentry_exit_cond_resched(void)
> +{
> +	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
> +		return;
> +	raw_irqentry_exit_cond_resched();
> +}
> +#endif
> +
>  /*
>   * Handle IRQ/context state management when exiting to kernel mode.
>   * After this function returns it is not safe to call regular kernel code,
> @@ -140,7 +141,7 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
>  	}
>
>  	if (IS_ENABLED(CONFIG_PREEMPTION))
> -		raw_irqentry_exit_cond_resched();
> +		irqentry_exit_cond_resched();
>
>  	trace_hardirqs_on();
>  } else {
> --
> 2.34.1
>