[Xen-devel] Ping: [PATCH v2] x86/stackframe/32: repair 32-bit Xen PV
Andy,

On 29.10.2019 10:28, Jan Beulich wrote:
> Once again RPL checks have been introduced which don't account for a
> 32-bit kernel living in ring 1 when running in a PV Xen domain. The
> case in FIXUP_FRAME has been preventing boot; adjust BUG_IF_WRONG_CR3
> as well just in case.
>
> Fixes: 3c88c692c287 ("x86/stackframe/32: Provide consistent pt_regs")
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

would you mind clarifying whether I should follow Thomas' request,
overriding what you had asked for and I did carry out for v2? I don't
think this regression should be left unfixed for much longer (as much
as the other part of it, addressed by a later 2-patch series).

Thanks, Jan

> ---
> v2: Avoid #ifdef and alter comment along the lines of Andy's suggestion.
>
> --- a/arch/x86/entry/entry_32.S
> +++ b/arch/x86/entry/entry_32.S
> @@ -48,6 +48,13 @@
>
>  #include "calling.h"
>
> +/*
> + * When running on Xen PV, the actual %cs register's RPL in the kernel is 1,
> + * not 0. If we need to distinguish between a %cs from kernel mode and a %cs
> + * from user mode, we can do test $2 instead of test $3.
> + */
> +#define USER_SEGMENT_RPL_MASK 2
> +
>  .section .entry.text, "ax"
>
>  /*
> @@ -172,7 +179,7 @@
>  	ALTERNATIVE     "jmp .Lend_\@", "", X86_FEATURE_PTI
>  	.if \no_user_check == 0
>  	/* coming from usermode? */
> -	testl	$SEGMENT_RPL_MASK, PT_CS(%esp)
> +	testl	$USER_SEGMENT_RPL_MASK, PT_CS(%esp)
>  	jz	.Lend_\@
>  	.endif
>  	/* On user-cr3? */
> @@ -217,7 +224,7 @@
>  	testl	$X86_EFLAGS_VM, 4*4(%esp)
>  	jnz	.Lfrom_usermode_no_fixup_\@
>  #endif
> -	testl	$SEGMENT_RPL_MASK, 3*4(%esp)
> +	testl	$USER_SEGMENT_RPL_MASK, 3*4(%esp)
>  	jnz	.Lfrom_usermode_no_fixup_\@
>
>  	orl	$CS_FROM_KERNEL, 3*4(%esp)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel