Re: [Xen-devel] [PATCH v2 5/7] x86: Enable Supervisor Mode Access Prevention (SMAP) for Xen
>>> On 23.04.14 at 16:37, <feng.wu@xxxxxxxxx> wrote:
> @@ -1182,11 +1182,12 @@ static int handle_gdt_ldt_mapping_fault(
> enum pf_type {
> real_fault,
> smep_fault,
> + smap_fault,
> spurious_fault
> };
>
> static enum pf_type __page_fault_type(
> - unsigned long addr, unsigned int error_code)
> + unsigned long addr, struct cpu_user_regs *regs)
const, please (the function only reads from regs).
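I.e. something like this for the prototype (a sketch of just the
declaration; only the qualifier changes, since nothing is written
through regs):

    static enum pf_type __page_fault_type(
        unsigned long addr, const struct cpu_user_regs *regs);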
> @@ -1262,19 +1264,37 @@ static enum pf_type __page_fault_type(
> page_user &= l1e_get_flags(l1e);
>
> leaf:
> - /*
> - * Supervisor Mode Execution Protection (SMEP):
> - * Disallow supervisor execution from user-accessible mappings
> - */
> - if ( (read_cr4() & X86_CR4_SMEP) && page_user &&
> - ((error_code & (PFEC_insn_fetch|PFEC_user_mode)) == PFEC_insn_fetch) )
> - return smep_fault;
> + if ( page_user )
> + {
> + unsigned long cr4 = read_cr4();
> + /*
> + * Supervisor Mode Execution Prevention (SMEP):
> + * Disallow supervisor execution from user-accessible mappings
> + */
> + if ( (cr4 & X86_CR4_SMEP) &&
> + ((error_code & (PFEC_insn_fetch|PFEC_user_mode)) == PFEC_insn_fetch) )
> + return smep_fault;
> +
> + /*
> + * Supervisor Mode Access Prevention (SMAP):
> + * Disallow supervisor access to user-accessible mappings.
> + * A fault is considered an SMAP violation if the following
> + * conditions are true:
> + * - X86_CR4_SMAP is set in CR4
> + * - A user page is being accessed
> + * - CPL=3 or X86_EFLAGS_AC is clear
> + * - Page fault in kernel mode
> + */
> + if ( (cr4 & X86_CR4_SMAP) && !(error_code & PFEC_user_mode) &&
> + !(((regs->cs & 0x03) < 3) && (regs->eflags & X86_EFLAGS_AC)) )
I thought I said this on the previous iteration already: Consistent
numbers please (either both hex or - preferable here - both decimal).
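E.g. (a sketch with both constants in decimal, everything else kept
as in your patch):

    if ( (cr4 & X86_CR4_SMAP) && !(error_code & PFEC_user_mode) &&
         !(((regs->cs & 3) < 3) && (regs->eflags & X86_EFLAGS_AC)) )
        return smap_fault;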
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel