[PATCH v4 22/30] context_tracking: Exit CT_STATE_IDLE upon irq/nmi entry



ct_nmi_{enter, exit}() only touch the RCU watching counter and don't modify
the actual CT state part of context_tracking.state. This means that upon
receiving an IRQ when idle, the CT_STATE_IDLE->CT_STATE_KERNEL transition
only happens in ct_idle_exit().

One can note that ct_nmi_enter() can only ever be entered with the CT state
being either CT_STATE_KERNEL or CT_STATE_IDLE, as an IRQ/NMI happening in
the CT_STATE_USER or CT_STATE_GUEST states will be routed down to
ct_user_exit().
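
Purely for illustration (not part of the patch), the current behaviour can
be modelled with a small userspace sketch; the constant values below are
made up and the RCU watching counter is reduced to a single flag. A state
read from within the IRQ still reports idle:

  /* Illustration only: simplified model, made-up constant values. */
  #include <stdatomic.h>
  #include <stdio.h>

  #define STATE_KERNEL  0  /* assumed encoding, for illustration only */
  #define STATE_IDLE    1
  #define STATE_MASK    3
  #define RCU_WATCHING  4  /* a single flag here; a counter in the kernel */

  static atomic_long state = STATE_IDLE;  /* CPU idle, RCU not watching */

  int main(void)
  {
      /* ~> IRQ: today's ct_nmi_enter() only makes RCU watch again... */
      atomic_fetch_add(&state, RCU_WATCHING);

      /* ...so a state read from the IRQ handler still reports "idle". */
      printf("in IRQ: state=%ld watching=%d\n",
             atomic_load(&state) & STATE_MASK,
             !!(atomic_load(&state) & RCU_WATCHING));

      /* ct_nmi_exit() symmetrically only stops the watching again. */
      atomic_fetch_sub(&state, RCU_WATCHING);
      return 0;
  }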

Add/remove CT_STATE_IDLE from the context tracking state as needed in
ct_nmi_{enter, exit}().

Note that this leaves the following window where the CPU is executing code
in kernelspace, but the context tracking state is CT_STATE_IDLE:

  ~> IRQ
  ct_nmi_enter()
    state = state + CT_STATE_KERNEL - CT_STATE_IDLE

  [...]

  ct_nmi_exit()
    state = state - CT_STATE_KERNEL + CT_STATE_IDLE

  [...] /!\ CT_STATE_IDLE here while we're really in kernelspace! /!\

  ct_cpuidle_exit()
    state = state + CT_STATE_KERNEL - CT_STATE_IDLE
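
Again purely for illustration (not part of the patch; same made-up
constants, with watching modelled as a flag rather than the kernel's
counter), the sequence above can be replayed to show both the new combined
transitions and the remaining window where a state read reports idle while
the CPU is in kernelspace:

  /* Illustration only: simplified model, made-up constant values. */
  #include <stdatomic.h>
  #include <stdio.h>

  #define STATE_KERNEL  0
  #define STATE_IDLE    1
  #define STATE_MASK    3
  #define RCU_WATCHING  4  /* a single flag here; a counter in the kernel */

  static atomic_long state = STATE_KERNEL + RCU_WATCHING;

  static void show(const char *where)
  {
      long s = atomic_load(&state);

      printf("%-24s state=%ld watching=%d\n",
             where, s & STATE_MASK, !!(s & RCU_WATCHING));
  }

  int main(void)
  {
      /* idle entry: kernel -> idle, RCU stops watching */
      atomic_fetch_add(&state, -RCU_WATCHING - STATE_KERNEL + STATE_IDLE);
      show("idle");

      /* ~> IRQ, patched ct_nmi_enter(): idle -> kernel, start watching */
      atomic_fetch_add(&state, RCU_WATCHING + STATE_KERNEL - STATE_IDLE);
      show("IRQ handler");

      /* patched ct_nmi_exit(): kernel -> idle, stop watching */
      atomic_fetch_add(&state, -RCU_WATCHING - STATE_KERNEL + STATE_IDLE);
      show("window (kernelspace!)");

      /* ct_cpuidle_exit(): idle -> kernel, start watching */
      atomic_fetch_add(&state, RCU_WATCHING + STATE_KERNEL - STATE_IDLE);
      show("back in kernel");

      return 0;
  }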

Signed-off-by: Valentin Schneider <vschneid@xxxxxxxxxx>
---
 kernel/context_tracking.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index a61498a8425e2..15f10ddec8cbe 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -236,7 +236,9 @@ void noinstr ct_nmi_exit(void)
        instrumentation_end();
 
        // RCU is watching here ...
-       ct_kernel_exit_state(CT_RCU_WATCHING);
+       ct_kernel_exit_state(CT_RCU_WATCHING -
+                            CT_STATE_KERNEL +
+                            CT_STATE_IDLE);
        // ... but is no longer watching here.
 
        if (!in_nmi())
@@ -259,6 +261,7 @@ void noinstr ct_nmi_enter(void)
 {
        long incby = 2;
        struct context_tracking *ct = this_cpu_ptr(&context_tracking);
+       int curr_state;
 
        /* Complain about underflow. */
        WARN_ON_ONCE(ct_nmi_nesting() < 0);
@@ -271,13 +274,26 @@ void noinstr ct_nmi_enter(void)
         * to be in the outermost NMI handler that interrupted an RCU-idle
         * period (observation due to Andy Lutomirski).
         */
-       if (!rcu_is_watching_curr_cpu()) {
+       curr_state = raw_atomic_read(this_cpu_ptr(&context_tracking.state));
+       if (!(curr_state & CT_RCU_WATCHING)) {
 
                if (!in_nmi())
                        rcu_task_enter();
 
+               /*
+                * RCU isn't watching, so we're one of
+                * CT_STATE_IDLE
+                * CT_STATE_USER
+                * CT_STATE_GUEST
+                * guest/user entry is handled by ct_user_enter(), so this has
+                * to be idle entry.
+                */
+               WARN_ON_ONCE((curr_state & CT_STATE_MASK) != CT_STATE_IDLE);
+
                // RCU is not watching here ...
-               ct_kernel_enter_state(CT_RCU_WATCHING);
+               ct_kernel_enter_state(CT_RCU_WATCHING +
+                                     CT_STATE_KERNEL -
+                                     CT_STATE_IDLE);
                // ... but is watching here.
 
                instrumentation_begin();
-- 
2.43.0