Re: [PATCH 2/2] xen/arm: Mitigate straight-line speculation for SMC call
On Tue, 16 Jun 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@xxxxxxxxxx>
>
> SMC call will update some of registers (typically only x0) depending on
^a SMC call
> the arguments provided.
>
> Some CPUs can speculate past a SMC instruction and potentially perform
> speculative access to emrmoy using the pre-call values before executing
^ memory
> the SMC.
>
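Just to spell out the hazard as I understand it (a hypothetical
sequence, not taken from the patch): if x0 holds a guest-provided
value before the call, a core speculating straight past the SMC could
consume the stale value before the firmware overwrites it:

    smc  #0             /* firmware is expected to clobber x0 */
    ldr  x1, [x2, x0]   /* speculatively executed with the pre-call,
                           guest-controlled x0 -> speculative access */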
> There is no known gadget available after the SMC call today. However
> some of the registers may contain values from the guest and are expected
> to be updated by the SMC call.
>
> In order to harden the code, it would be better to prevent straight-line
> speculation from an SMC. Architecturally executing the speculation
^ a? any?
> barrier after every SMC can be rather expensive (particularly on core
> not SB). Therefore we want to mitigate it diferrently:
^ not SB capable? ^ differently
>
> * For arm_smccc_1_0_smc, we can avoid a speculation barrier right
> after the SMC instruction and instead rely on the one after eret.
^ ret
> * For arm_smccc_1_1_smc, we can place a B instruction after the SMC
> instruction to skip the barrier.
>
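If I am reading the patch right, the two resulting sequences are
roughly as follows (a sketch, with ASM_SB expanded by hand):

    /* arm_smccc_1_1_smc */
        smc  #0
        b    1f        /* taken architecturally, skipping the barrier */
        dsb  nsh       /* only ever reached by straight-line speculation */
        isb
    1:

    /* end of __arm_smccc_1_0_smc on arm64 */
        ret
        sb             /* never architecturally reached; stops
                          speculation running past the ret */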
> Note that arm_smccc_1_0_smc version on arm32 is just an alias to
> arm_smccc_1_1_smc.
>
> Note that no speculation barrier has been added after the SMC
> instruction in arm64/entry.S. This is fine because the call is not
> expected to modify any registers. So straight-line speculation doesn't
> matter.
>
> Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
I couldn't spot any issues with the patch and I compile-tested it.
> ---
>
> Note this hasn't been vetted by Arm but they are using the same
> sort of mitigation for blr. So I am quite confident this could do the
> trick.
Noted
> ---
> xen/arch/arm/arm64/smc.S | 1 +
> xen/include/asm-arm/smccc.h | 13 +++++++++++++
> xen/include/asm-arm/system.h | 8 ++++++++
> 3 files changed, 22 insertions(+)
>
> diff --git a/xen/arch/arm/arm64/smc.S b/xen/arch/arm/arm64/smc.S
> index b0752be57e8f..e0a3106dd7ec 100644
> --- a/xen/arch/arm/arm64/smc.S
> +++ b/xen/arch/arm/arm64/smc.S
> @@ -30,3 +30,4 @@ ENTRY(__arm_smccc_1_0_smc)
> stp x2, x3, [x4, #SMCCC_RES_a2]
> 1:
> ret
> + sb
> diff --git a/xen/include/asm-arm/smccc.h b/xen/include/asm-arm/smccc.h
> index 9d94beb3df2d..8650224923b1 100644
> --- a/xen/include/asm-arm/smccc.h
> +++ b/xen/include/asm-arm/smccc.h
> @@ -200,11 +200,24 @@ struct arm_smccc_res {
> * We have an output list that is not necessarily used, and GCC feels
> * entitled to optimise the whole sequence away. "volatile" is what
> * makes it stick.
> + *
> + * Some of the SMC callers are passing directly values that are
> + * controlled by the guest. To mitigate against straight-line
> + * speculation, a speculation barrier is required. As it may be
> + * expensive to architecturally execute the speculation barrier, we are
> + * using a B instruction to architecturally skip it.
> + *
> + * Note that the speculation barrier is technically not necessary as the
> + * B instruction should already block straight-line speculation. But
> + * better be safe than sorry ;).
Eh eh indeed :-)
I think this would be one thing to consider removing depending on ARM's
feedback.
> */
> #define arm_smccc_1_1_smc(...) \
> do { \
> __declare_args(__count_args(__VA_ARGS__), __VA_ARGS__); \
> asm volatile("smc #0\n" \
> + "b 1f\n" \
> + ASM_SB \
> + "1:\n" \
> __constraints(__count_args(__VA_ARGS__))); \
> if ( ___res ) \
> *___res = (typeof(*___res)){r0, r1, r2, r3}; \
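For my own testing I checked that a typical call site still builds as
expected. For reference, usage looks like this (MY_FID and guest_arg
are made up for illustration; the res pointer may be NULL when the
result is not needed):

    struct arm_smccc_res res;

    arm_smccc_1_1_smc(MY_FID, guest_arg, &res);
    /* res.a0 now holds the value written by the firmware */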
> diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
> index 65d5c8e423d7..e33ff4e0fc39 100644
> --- a/xen/include/asm-arm/system.h
> +++ b/xen/include/asm-arm/system.h
> @@ -33,6 +33,14 @@
> #define smp_mb__before_atomic() smp_mb()
> #define smp_mb__after_atomic() smp_mb()
>
> +/*
> + * Speculative barrier
> + * XXX: Add support for the 'sb' instruction
> + */
> +#define ASM_SB "dsb nsh \n isb \n"
Why not ';'? Anyway, it doesn't matter.
> +#define sb() asm volatile(ASM_SB)
> +
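On the XXX: once we want to use the SB instruction, I guess ASM_SB
could be patched at runtime. A hypothetical sketch using the existing
alternatives framework (the ARM64_HAS_SB feature bit is made up, and
the assembler would need to accept the 'sb' mnemonic or we would have
to use its raw encoding):

    #define ASM_SB ALTERNATIVE("dsb nsh\nisb\n", "sb\n", ARM64_HAS_SB)

But that can clearly be done as a follow-up.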
> /*
> * This is used to ensure the compiler did actually allocate the register we
> * asked it for some inline assembly sequences. Apparently we can't trust
> --
> 2.17.1
>