Re: [PATCH 2/4] x86/altcall: Optimise away endbr64 instruction where possible
On 26.11.2021 22:22, Andrew Cooper wrote:
> With altcall, we convert indirect branches into direct ones. With that
> complete, none of the potential targets need an endbr64 instruction.
Assuming that no other hooks remain which re-use the same function. I
think this constraint wants at least mentioning explicitly.
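For illustration only (all names here are invented, and __initdata_cf_clobber
is the annotation introduced further down in this patch): if the same function
is installed in two tables and only one of them is altcall-patched and tagged,
clobbering its endbr64 would break the remaining indirect call path.

    struct foo_ops { void (*fn)(void); };

    static void common_fn(void) { }

    /* altcall-patched and tagged: common_fn's endbr64 looks removable ... */
    static struct foo_ops a_ops __initdata_cf_clobber = { .fn = common_fn };

    /* ... but this table isn't patched, so common_fn is still reached via an
     * indirect branch and must keep its endbr64. */
    static struct foo_ops b_ops = { .fn = common_fn };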
> Furthermore, removing the endbr64 instructions is a security defence-in-depth
> improvement, because it limits the options available to an attacker who has
> managed to hijack a function pointer.
>
> Introduce a new .init.data.cf_clobber section. Have _apply_alternatives()
> walk over the entire section, looking for any pointers into .text, and clobber
> an endbr64 instruction if found. This is some minor structure (ab)use but it
> works alarmingly well.
Iirc you've said more than once that non-function-pointer data in
those structures is fine; I'm not convinced. What if a sequence of
sub-pointer-size fields has a value looking like a pointer into
.text? This may not be very likely, but it would result in corruption
that may be hard to associate with anything. Of course, with the
is_endbr64() check and with a build-time check that no stray ENDBR64
patterns remain in .text, that issue would disappear.
But we aren't quite there yet.
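To make the concern concrete (a contrived sketch; the struct and the values
are invented):

    /* No function pointer anywhere, yet the two 32-bit fields, read as a
     * single unsigned long by the clobbering loop, may by chance form an
     * address which lands inside .text. */
    struct bar_cfg {
        uint32_t lo;
        uint32_t hi;
    };

    static struct bar_cfg bar __initdata_cf_clobber = {
        .lo = 0x40301000,    /* arbitrary data ...                     */
        .hi = 0xffff82d0,    /* ... which, combined, resembles a Xen
                              * virtual address on x86-64              */
    };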
> --- a/xen/arch/x86/alternative.c
> +++ b/xen/arch/x86/alternative.c
> @@ -173,6 +173,9 @@ text_poke(void *addr, const void *opcode, size_t len)
>      return memcpy(addr, opcode, len);
>  }
>
> +extern unsigned long __initdata_cf_clobber_start[];
> +extern unsigned long __initdata_cf_clobber_end[];
const please. I also would find it quite a bit better if these
were suitably typed such that ...
> @@ -329,6 +332,41 @@ static void init_or_livepatch _apply_alternatives(struct alt_instr *start,
>          add_nops(buf + a->repl_len, total_len - a->repl_len);
>          text_poke(orig, buf, total_len);
>      }
> +
> +    /*
> +     * Clobber endbr64 instructions now that altcall has finished optimising
> +     * all indirect branches to direct ones.
> +     */
> +    if ( force && cpu_has_xen_ibt )
> +    {
> +        unsigned long *val;
> +        unsigned int clobbered = 0;
> +
> +        /*
> +         * This is some minor structure (ab)use. We walk the entire contents
> +         * of .init.data.cf_clobber as if it were an array of pointers.
> +         *
> +         * If the pointer points into .text, and has an endbr64 instruction,
> +         * nop out the endbr64. This causes the pointer to no longer be a
> +         * legal indirect branch target under CET-IBT. This is a
> +         * defence-in-depth measure, to reduce the options available to an
> +         * adversary who has managed to hijack a function pointer.
> +         */
> +        for ( val = __initdata_cf_clobber_start;
> +              val < __initdata_cf_clobber_end;
> +              val++ )
> +        {
> +            void *ptr = (void *)*val;
... no cast was needed here.
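I.e. something along the lines of (just a suggestion of one possible shape):

    extern void *const __initdata_cf_clobber_start[];
    extern void *const __initdata_cf_clobber_end[];

at which point the loop body could use the value directly. The quoted hunk
stops here; presumably it goes on with the is_endbr64() check and the
NOP-ing out, roughly:

    if ( !is_kernel_text(ptr) || !is_endbr64(ptr) )
        continue;

    /* Overwrite the 4-byte endbr64 with NOPs, so this address is no longer
     * a legal indirect branch target under CET-IBT. */
    add_nops(ptr, ENDBR64_LEN);
    clobbered++;

where is_endbr64() and ENDBR64_LEN come from elsewhere in the series and may
well be named or shaped differently.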
> --- a/xen/arch/x86/xen.lds.S
> +++ b/xen/arch/x86/xen.lds.S
> @@ -214,6 +214,11 @@ SECTIONS
> *(.initcall1.init)
> __initcall_end = .;
>
> + . = ALIGN(POINTER_ALIGN);
> + __initdata_cf_clobber_start = .;
> + *(.init.data.cf_clobber)
Nit: hard tab slipped in here.
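(Presumably the hunk goes on to close the range with the matching end symbol,
i.e. something like

    __initdata_cf_clobber_end = .;

matching the extern declared in alternative.c above.)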
> --- a/xen/include/xen/init.h
> +++ b/xen/include/xen/init.h
> @@ -18,6 +18,8 @@
> #define __init_call(lvl) __used_section(".initcall" lvl ".init")
> #define __exit_call __used_section(".exitcall.exit")
>
> +#define __initdata_cf_clobber __section(".init.data.cf_clobber")
Just to repeat what I've said elsewhere: I think we want a const
version of this as well.
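Something along these lines would presumably do (name and section are only a
suggestion, chosen for symmetry with the existing macro):

    #define __initconst_cf_clobber __section(".init.rodata.cf_clobber")

plus a matching entry in xen.lds.S, so the clobbering loop also covers
const-qualified structures.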
Jan