
[PATCH -next v5 04/22] arm64: entry: Rework arm64_preempt_schedule_irq()



The generic entry code calls preempt_schedule_irq() after checking whether
need_resched() is satisfied, but arm64 has some additional checks of its
own, such as GIC priority masking.

In preparation for moving arm64 over to the generic entry code, rework
arm64_preempt_schedule_irq() so that the decision of whether to reschedule
is made in a separate check function, arm64_need_resched().

No functional changes.

Signed-off-by: Jinjie Ruan <ruanjinjie@xxxxxxxxxx>
---
 arch/arm64/kernel/entry-common.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 7a588515ee07..da68c089b74b 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -83,10 +83,10 @@ DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
 #define need_irq_preemption()  (IS_ENABLED(CONFIG_PREEMPTION))
 #endif
 
-static void __sched arm64_preempt_schedule_irq(void)
+static inline bool arm64_need_resched(void)
 {
        if (!need_irq_preemption())
-               return;
+               return false;
 
        /*
         * Note: thread_info::preempt_count includes both thread_info::count
@@ -94,7 +94,7 @@ static void __sched arm64_preempt_schedule_irq(void)
         * preempt_count().
         */
        if (READ_ONCE(current_thread_info()->preempt_count) != 0)
-               return;
+               return false;
 
        /*
         * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
@@ -103,7 +103,7 @@ static void __sched arm64_preempt_schedule_irq(void)
         * DAIF we must have handled an NMI, so skip preemption.
         */
        if (system_uses_irq_prio_masking() && read_sysreg(daif))
-               return;
+               return false;
 
        /*
         * Preempting a task from an IRQ means we leave copies of PSTATE
@@ -113,8 +113,10 @@ static void __sched arm64_preempt_schedule_irq(void)
         * Only allow a task to be preempted once cpufeatures have been
         * enabled.
         */
-       if (system_capabilities_finalized())
-               preempt_schedule_irq();
+       if (!system_capabilities_finalized())
+               return false;
+
+       return true;
 }
 
 /*
@@ -139,7 +141,8 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
                        return;
                }
 
-               arm64_preempt_schedule_irq();
+               if (arm64_need_resched())
+                       preempt_schedule_irq();
 
                trace_hardirqs_on();
        } else {
-- 
2.34.1




 

