Re: [PATCH v2 06/14] x86/shstk: Create shadow stacks
On 28/05/2020 13:50, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -769,6 +769,30 @@ void load_system_tables(void)
>>     tss->rsp1 = 0x8600111111111111ul;
>>     tss->rsp2 = 0x8600111111111111ul;
>>
>> +    /* Set up the shadow stack IST. */
>> +    if (cpu_has_xen_shstk) {
>> +        volatile uint64_t *ist_ssp = this_cpu(tss_page).ist_ssp;
>> +
>> +        /*
>> +         * Used entries must point at the supervisor stack token.
>> +         * Unused entries are poisoned.
>> +         *
>> +         * This IST Table may be live, and the NMI/#MC entries must
>> +         * remain valid on every instruction boundary, hence the
>> +         * volatile qualifier.
>> +         */
> Move this comment ahead of what it comments on, as we usually have it?
>
>> +        ist_ssp[0] = 0x8600111111111111ul;
>> +        ist_ssp[IST_MCE] = stack_top + (IST_MCE * IST_SHSTK_SIZE) - 8;
>> +        ist_ssp[IST_NMI] = stack_top + (IST_NMI * IST_SHSTK_SIZE) - 8;
>> +        ist_ssp[IST_DB]  = stack_top + (IST_DB  * IST_SHSTK_SIZE) - 8;
>> +        ist_ssp[IST_DF]  = stack_top + (IST_DF  * IST_SHSTK_SIZE) - 8;
> Strictly speaking you want to introduce
>
> #define IST_SHSTK_SLOT 0
>
> next to PRIMARY_SHSTK_SLOT and use
>
>     ist_ssp[IST_MCE] = stack_top + (IST_SHSTK_SLOT * PAGE_SIZE) +
>                        (IST_MCE * IST_SHSTK_SIZE) - 8;
>
> etc. here. It's getting longish, so I'm not going to insist. But if you
> go this route, then please do the same below / elsewhere.
Actually no. I've got a much better idea, based on how Linux does the
same, but it's definitely 4.15 material at this point.
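In the meantime, for anyone following along, this is roughly what the hunk
above lays out (sketch only: IST_SHSTK_SIZE assumed to be 1k per the
"4x 1k in stack page 0" comment below, and IST slot numbers assumed to be
1-4; the real values live in our headers):

    #include <assert.h>
    #include <stdint.h>

    #define PAGE_SIZE      4096UL
    #define IST_SHSTK_SIZE 1024UL  /* assumed: "4x 1k in stack page 0" */

    /*
     * stack_top is the lowest address of the per-CPU stack block; page 0
     * holds the four IST shadow stacks, each with a supervisor token in
     * its top 8 bytes.
     */
    static uint64_t ist_shstk_token(uint64_t stack_top, unsigned int ist)
    {
        assert(ist >= 1 && ist <= 4);  /* IST slots assumed to be 1-4 */

        return stack_top + ist * IST_SHSTK_SIZE - 8;
    }

Each token therefore lands at offset 1016, 2040, 3064 or 4088 within
page 0, all below PAGE_SIZE.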
>
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -5994,12 +5994,33 @@ void memguard_unguard_range(void *p, unsigned long l)
>>
>> #endif
>>
>> +static void write_sss_token(unsigned long *ptr)
>> +{
>> +    /*
>> +     * A supervisor shadow stack token is its own linear address, with the
>> +     * busy bit (0) clear.
>> +     */
>> +    *ptr = (unsigned long)ptr;
>> +}
>> +
>> void memguard_guard_stack(void *p)
>> {
>> -    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>> +    /* IST Shadow stacks. 4x 1k in stack page 0. */
>> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
>> +    {
>> +        write_sss_token(p + (IST_MCE * IST_SHSTK_SIZE) - 8);
>> +        write_sss_token(p + (IST_NMI * IST_SHSTK_SIZE) - 8);
>> +        write_sss_token(p + (IST_DB  * IST_SHSTK_SIZE) - 8);
>> +        write_sss_token(p + (IST_DF  * IST_SHSTK_SIZE) - 8);
> Up to now, two successive memguard_guard_stack() calls were working fine.
> This will no longer be the case; just an observation.
I don't think that matters.
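As background for anyone unfamiliar with the tokens: when the CPU switches
onto a supervisor shadow stack, it atomically verifies that the token is
the stack's own linear address with the busy bit (bit 0) clear, and then
sets the busy bit. A rough C model of that check (illustrative only; the
real check is done in hardware):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Model of the hardware token check: succeed only if *token equals
     * the token's own address with the busy bit clear, atomically
     * setting the busy bit on success.
     */
    static bool try_acquire_sss_token(uint64_t *token)
    {
        uint64_t expected = (uint64_t)token;  /* address, busy bit clear */

        return __atomic_compare_exchange_n(token, &expected, expected | 1,
                                           false, __ATOMIC_SEQ_CST,
                                           __ATOMIC_SEQ_CST);
    }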
>
>> +    }
>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1,
>> +                     PAGE_HYPERVISOR_SHSTK);
> As already hinted at in reply to the previous patch, I think this wants
> to remain _PAGE_NONE when we don't use CET-SS.
The commit message discussed why that is not an option (currently), and
why I don't consider it a good idea to make it possible.
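To expand on the mapping type for other readers: architecturally, a shadow
stack mapping is simply a read-only mapping with the Dirty bit set, so on
hardware without CET-SS it degrades to a plain read-only mapping. A sketch
of the flag composition; the bit values are the architectural PTE bits,
but the names are illustrative rather than Xen's actual definitions:

    #define _PAGE_PRESENT 0x001UL  /* bit 0: present */
    #define _PAGE_RW      0x002UL  /* bit 1: deliberately clear below */
    #define _PAGE_DIRTY   0x040UL  /* bit 6: dirty */

    /* Shadow stack mapping: present + dirty, R/W clear. */
    #define PAGE_SHSTK_SKETCH (_PAGE_PRESENT | _PAGE_DIRTY)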
>> +    /* Primary Shadow Stack. 1x 4k in stack page 5. */
>>      p += PRIMARY_SHSTK_SLOT * PAGE_SIZE;
>> -    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
>> +        write_sss_token(p + PAGE_SIZE - 8);
>> +
>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1,
>> +                     PAGE_HYPERVISOR_SHSTK);
>> }
>>
>> void memguard_unguard_stack(void *p)
> Would this function perhaps better zap the tokens?
Why? We don't zap any other stack contents, and we let the regular page
scrubbing clean them.
~Andrew