
Re: [PATCH 3/4] x86/uaccess: rework user access speculative harden guards



On Tue, Nov 19, 2024 at 03:31:34PM +0000, Andrew Cooper wrote:
> On 19/11/2024 10:34 am, Roger Pau Monne wrote:
> > The current guards to select whether user accesses should be speculative
> > hardened violate Misra rule 20.7, as the UA_KEEP() macro doesn't (and can't)
> > parenthesize the 'args' argument.
> >
> > Change the logic so the guard is implemented inside the assembly block using
> > the .if assembly directive.
> >
> > No functional change intended.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > ---
> > The guard check could be moved inside of the guest_access_mask_ptr macro,
> > but given its limited usage it's clearer to keep the check in the callers IMO.
> 
> Overall this is far more legible, and I'm tempted to take it on that
> justification alone.  But this is Jan's pile of magic.
> 
> There is a weird effect from this change:
> 
> add/remove: 2/0 grow/shrink: 2/2 up/down: 740/-739 (1)
> Function                                     old     new   delta
> build_symbol_table                             -     686    +686
> build_symbol_table.cold                        -      48     +48
> pv_map_ldt_shadow_page                       641     644      +3
> pv_emulate_gate_op                          2919    2922      +3
> livepatch_op.cold                            557     509     -48
> livepatch_op                                5952    5261    -691

So build_symbol_table() is no longer inlined in load_payload_data()
and thus livepatch_op().

> which is clearly changing inlining decisions.  I suspect it's related to...
> 
> > diff --git a/xen/arch/x86/usercopy.c b/xen/arch/x86/usercopy.c
> > index 7ab2009efe4c..d66beecc5507 100644
> > --- a/xen/arch/x86/usercopy.c
> > +++ b/xen/arch/x86/usercopy.c
> > @@ -11,23 +11,23 @@
> >  #include <asm/uaccess.h>
> >  
> >  #ifndef GUARD
> > -# define GUARD UA_KEEP
> > +# define GUARD 1
> >  #endif
> >  
> >  unsigned int copy_to_guest_ll(void __user *to, const void *from, unsigned int n)
> >  {
> > -    GUARD(unsigned dummy);
> > +    unsigned __maybe_unused dummy;
> 
> ... this.  This doesn't need to be __maybe_unused, because ...

This is a leftover from when I attempted to use preprocessor #if GUARD
below, and the compiler would then complain about dummy being
unused.

I can fix that if there's agreement on the overall approach of the change.

> >  
> >      stac();
> >      asm volatile (
> > -        GUARD(
> > +        ".if " STR(GUARD) "\n"
> >          "    guest_access_mask_ptr %[to], %q[scratch1], %q[scratch2]\n"
> > -        )
> > +        ".endif\n"
> >          "1:  rep movsb\n"
> >          "2:\n"
> >          _ASM_EXTABLE(1b, 2b)
> > -        : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from)
> > -          GUARD(, [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy))
> > +        : [cnt] "+c" (n), [to] "+D" (to), [from] "+S" (from),
> > +          [scratch1] "=&r" (dummy), [scratch2] "=&r" (dummy)
> 
> ... these parameters are referenced unconditionally.
> 
> However, it does mean the compiler is spilling the scratch registers
> even when guard is 0.  I expect this is what is affecting the inlining
> decisions.

That's my understanding, yes: the scratch registers will still be spilled.

Thanks, Roger.



 

