
Re: [Xen-devel] [PATCH 3 of 4 RFC] x86/nmi: Prevent reentrant execution of the C nmi handler



>>> On 04.12.12 at 19:16, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> The (old) function do_nmi() is not reentrant-safe.  Rename it to
> _do_nmi() and present a new do_nmi() which guards against reentrant
> execution.
> 
> If a reentrant NMI has been detected, then it is highly likely that the
> outer NMI exception frame has been corrupted, meaning we cannot return
> to the original context.  In this case, we panic() in an obvious manner
> rather than falling into an infinite loop.
> 
> panic(), however, is not safe to reenter from an NMI context, as an NMI
> (or MCE) can interrupt it inside its critical section, at which point a
> new call to panic() will deadlock.  As a result, we bail early if a
> panic() is already in progress, as Xen is about to die anyway.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> 
> --
> I am fairly sure this is safe with the current kexec_crash functionality,
> which involves holding all non-crashing pcpus in an NMI loop.  In the
> case of reentrant NMIs and panic_in_progress, we will repeatedly bail
> early in an infinite loop of NMIs, which has the same intended effect of
> simply causing all non-crashing CPUs to stay out of the way while the
> main crash occurs.
> 
> diff -r 48a60a407e15 -r f6ad86b61d5a xen/arch/x86/traps.c
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -88,6 +88,7 @@ static char __read_mostly opt_nmi[10] = 
>  string_param("nmi", opt_nmi);
>  
>  DEFINE_PER_CPU(u64, efer);
> +static DEFINE_PER_CPU(bool_t, nmi_in_progress) = 0;
>  
>  DEFINE_PER_CPU_READ_MOSTLY(u32, ler_msr);
>  
> @@ -3182,7 +3183,8 @@ static int dummy_nmi_callback(struct cpu
>   
>  static nmi_callback_t nmi_callback = dummy_nmi_callback;
>  
> -void do_nmi(struct cpu_user_regs *regs)
> +/* This function should never be called directly.  Use do_nmi() instead. */
> +static void _do_nmi(struct cpu_user_regs *regs)
>  {
>      unsigned int cpu = smp_processor_id();
>      unsigned char reason;
> @@ -3208,6 +3210,44 @@ void do_nmi(struct cpu_user_regs *regs)
>      }
>  }
>  
> +/* This function is NOT SAFE to call from C code in general.
> + * Use with extreme care! */
> +void do_nmi(struct cpu_user_regs *regs)
> +{
> +    bool_t * in_progress = &this_cpu(nmi_in_progress);
> +
> +    if ( is_panic_in_progress() )
> +    {
> +        /* A panic is already in progress.  It may have reenabled NMIs,
> +         * or we are simply unlucky enough to receive one right now.  Either
> +         * way, bail early, as Xen is about to die.
> +         *
> +         * TODO: Ideally we should exit without executing an iret, to
> +         * leave NMIs disabled, but that option is not currently
> +         * available to us.

You could easily provide the groundwork for this here by having
the function return a bool_t (even if not immediately consumed by
the caller in this same patch).
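
Purely as an illustration (not part of the submitted patch), a minimal
sketch of what that could look like, assuming a later patch teaches the
asm entry stub to consume the return value:

    /* Hypothetical sketch: report whether the handler ran to completion,
     * so a follow-up change to the NMI entry stub could avoid executing
     * iret (and thus keep NMIs masked) when we bailed out early. */
    bool_t do_nmi(struct cpu_user_regs *regs)
    {
        bool_t *in_progress = &this_cpu(nmi_in_progress);

        if ( is_panic_in_progress() )
            return 0;  /* bailed early; ideally leave NMIs masked */

        if ( test_and_set_bool(*in_progress) )
        {
            console_force_unlock();
            show_execution_state(regs);
            panic("Reentrant NMI detected\n");
        }

        _do_nmi(regs);
        *in_progress = 0;
        return 1;      /* handled normally; safe to iret */
    }

The caller would then be free to skip the iret path whenever 0 is
returned, addressing the TODO above without further changes to this
function.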

Jan

> +         */
> +        return;
> +    }
> +
> +    if ( test_and_set_bool(*in_progress) )
> +    {
> +        /* Crash in an obvious manner, as opposed to falling into an
> +         * infinite loop because our exception frame corrupted the
> +         * exception frame of the previous NMI.
> +         *
> +         * TODO: This check does not cover all possible cases of corrupt
> +         * exception frames, but it is substantially better than
> +         * nothing.
> +         */
> +        console_force_unlock();
> +        show_execution_state(regs);
> +        panic("Reentrant NMI detected\n");
> +    }
> +
> +    _do_nmi(regs);
> +    *in_progress = 0;
> +}
> +
>  void set_nmi_callback(nmi_callback_t callback)
>  {
>      nmi_callback = callback;
> 