Re: [PATCH 59/65] x86/traps: Rework write_stub_trampoline() to not hardcode the jmp
On 26.11.2021 13:34, Andrew Cooper wrote:
> For CET-IBT, we will need to optionally insert an endbr64 instruction at the
> start of the stub. Don't hardcode the jmp displacement assuming that it
> starts at byte 24 of the stub.
>
> Also add extra comments describing what is going on. The mix of %rax and %rsp
> is far from trivial to follow.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> ---
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> CC: Wei Liu <wl@xxxxxxx>
> ---
> xen/arch/x86/x86_64/traps.c | 36 ++++++++++++++++++++++--------------
> 1 file changed, 22 insertions(+), 14 deletions(-)
>
> diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
> index d661d7ffcaaf..6f3c65bedc7a 100644
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -293,30 +293,38 @@ static unsigned int write_stub_trampoline(
> unsigned char *stub, unsigned long stub_va,
> unsigned long stack_bottom, unsigned long target_va)
> {
> + unsigned char *p = stub;
> +
> + /* Store guest %rax into %ss slot */
> /* movabsq %rax, stack_bottom - 8 */
> - stub[0] = 0x48;
> - stub[1] = 0xa3;
> - *(uint64_t *)&stub[2] = stack_bottom - 8;
> + *p++ = 0x48;
> + *p++ = 0xa3;
> + *(uint64_t *)p = stack_bottom - 8;
> + p += 8;
>
> + /* Store guest %rsp in %rax */
> /* movq %rsp, %rax */
> - stub[10] = 0x48;
> - stub[11] = 0x89;
> - stub[12] = 0xe0;
> + *p++ = 0x48;
> + *p++ = 0x89;
> + *p++ = 0xe0;
>
> + /* Switch to Xen stack */
> /* movabsq $stack_bottom - 8, %rsp */
> - stub[13] = 0x48;
> - stub[14] = 0xbc;
> - *(uint64_t *)&stub[15] = stack_bottom - 8;
> + *p++ = 0x48;
> + *p++ = 0xbc;
> + *(uint64_t *)p = stack_bottom - 8;
> + p += 8;
>
> + /* Store guest %rsp into %rsp slot */
> /* pushq %rax */
> - stub[23] = 0x50;
> + *p++ = 0x50;
>
> /* jmp target_va */
> - stub[24] = 0xe9;
> - *(int32_t *)&stub[25] = target_va - (stub_va + 29);
> + *p++ = 0xe9;
> + *(int32_t *)p = target_va - (stub_va + (p - stub) + 4);
> + p += 4;
>
> - /* Round up to a multiple of 16 bytes. */
> - return 32;
> + return p - stub;
> }
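
For context on the CET-IBT motivation in the commit message: with the emission pointer, an optional endbr64 becomes a plain 4-byte prepend, and the jmp displacement stays correct because it is computed from p rather than from a fixed byte offset. A minimal sketch, where place_endbr64() and the ibt_enabled predicate are illustrative names, not Xen's actual ones:

    /* Sketch only: emit ENDBR64 (f3 0f 1e fa) and advance the pointer. */
    static unsigned char *place_endbr64(unsigned char *p)
    {
        *p++ = 0xf3;
        *p++ = 0x0f;
        *p++ = 0x1e;
        *p++ = 0xfa;

        return p;
    }

In write_stub_trampoline() this would slot in as "if ( ibt_enabled ) p = place_endbr64(p);" ahead of the first MOVABSQ, with everything downstream unchanged.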
I'm concerned about you silently discarding the aligning to 16 bytes here.
Imo this really needs justifying, or perhaps even delaying until a
later change. I'd like to point out, though, that we may not have a space
issue here at all, as I think we can replace one of the MOVABSQs (using
absolute numbers to hopefully make this easier to follow):
    movabsq %rax, stack_bottom - 8
    movq    %rsp, %rax
    movq    -18(%rip), %rsp
    pushq   %rax
    jmp     target_va
This totals 26 bytes (10 + 3 + 7 + 1 + 5), leaving enough room for the
4-byte ENDBR64 without crossing the 32-byte boundary; a byte-level sketch
follows below. But I fear you may eat me for using insn bytes as data ...
Jan
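
To spot-check the arithmetic in the suggestion above: movq -18(%rip), %rsp encodes as 48 8b 25 plus a 4-byte displacement (7 bytes), and the displacement points back at the imm64 embedded at byte 2 of the initial MOVABSQ, hence 10 + 3 + 7 + 1 + 5 = 26 bytes, and 26 + 4 for an endbr64 is 30, still within 32. Here is a sketch of an emitter for this variant in the patch's style; write_stub_trampoline_alt() is an illustrative name, not code from the series:

    #include <stdint.h>

    static unsigned int write_stub_trampoline_alt(
        unsigned char *stub, unsigned long stub_va,
        unsigned long stack_bottom, unsigned long target_va)
    {
        unsigned char *p = stub, *imm;

        /* Store guest %rax into %ss slot */
        /* movabsq %rax, stack_bottom - 8 */
        *p++ = 0x48;
        *p++ = 0xa3;
        imm = p;                    /* the imm64 here doubles as data below */
        *(uint64_t *)p = stack_bottom - 8;
        p += 8;

        /* Store guest %rsp in %rax */
        /* movq %rsp, %rax */
        *p++ = 0x48;
        *p++ = 0x89;
        *p++ = 0xe0;

        /* Switch to Xen stack, reloading the imm64 embedded above */
        /* movq -18(%rip), %rsp */
        *p++ = 0x48;
        *p++ = 0x8b;
        *p++ = 0x25;
        *(int32_t *)p = imm - (p + 4);  /* computes -18: back to the imm64 */
        p += 4;

        /* Store guest %rsp into %rsp slot */
        /* pushq %rax */
        *p++ = 0x50;

        /* jmp target_va */
        *p++ = 0xe9;
        *(int32_t *)p = target_va - (stub_va + (p - stub) + 4);
        p += 4;

        return p - stub;            /* 26 bytes */
    }

Note that the displacement remains -18 even with an endbr64 prepended, since the embedded imm64 and the load shift together; computing it from imm rather than hardcoding -18 merely makes that explicit.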