
Re: [Xen-devel] [PATCH 7/9] x86: skip check for spurious faults for non-present faults

H. Peter Anvin wrote:
On 04/15/2014 07:15 AM, David Vrabel wrote:
If a fault on a kernel address is due to a non-present page, then it
cannot be the result of stale TLB entry from a protection change (RO
to RW or NX to X).  Thus the pagetable walk in spurious_fault() can be skipped.

Erk... this code is screaming WTF to me.  The x86 architecture is such
that the CPU is responsible for avoiding these faults.

Not in this case...

<dig>  <dig>  <dig>


     x86: ignore spurious faults

     When changing a kernel page from RO->RW, it's OK to leave stale TLB
     entries around, since doing a global flush is expensive and they
     pose no security problem.  They can, however, generate a spurious
     fault, which we should catch and simply return from (which will
     have the side-effect of reloading the TLB to the current PTE).

     This can occur when running under Xen, because it frequently changes
     kernel pages from RW->RO->RW to implement Xen's pagetable semantics.
     It could also occur when using CONFIG_DEBUG_PAGEALLOC, since it
     avoids doing a global TLB flush after changing page permissions.

     Signed-off-by: Jeremy Fitzhardinge <jeremy@xxxxxxxxxxxxx>
     Cc: Harvey Harrison <harvey.harrison@xxxxxxxxx>
     Signed-off-by: Ingo Molnar <mingo@xxxxxxx>
     Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
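[The check the commit message describes can be sketched roughly as below. This is a simplified illustration, not the kernel's actual spurious_fault(); the flag and error-code bit names here are assumptions for the sketch, and the real code walks the full PGD/PUD/PMD/PTE hierarchy rather than taking a single flags word.]

```c
/* Sketch: a fault is "spurious" only if the current (up-to-date)
 * page-table entry already grants the access that faulted -- meaning
 * the fault came from a stale TLB entry left after a permissive
 * RO->RW or NX->X change that skipped the global flush. */
#include <stdbool.h>
#include <stdint.h>

#define PTE_PRESENT 0x1u  /* page is mapped */
#define PTE_WRITE   0x2u  /* writes allowed */
#define PTE_EXEC    0x4u  /* instruction fetch allowed */

#define ERR_WRITE   0x2u  /* fault was caused by a write */
#define ERR_INSN    0x10u /* fault was an instruction fetch */

static bool spurious_fault(uint32_t error_code, uint32_t pte)
{
        /* Patch 7/9's point: a non-present fault can never be
         * spurious, since clearing the present bit always needs a
         * TLB flush before the entry can be reused. */
        if (!(pte & PTE_PRESENT))
                return false;
        if ((error_code & ERR_WRITE) && !(pte & PTE_WRITE))
                return false;
        if ((error_code & ERR_INSN) && !(pte & PTE_EXEC))
                return false;
        /* PTE permits the access: fault was stale-TLB noise, and the
         * walk itself has reloaded the TLB -- just return. */
        return true;
}
```

If the function returns true the handler simply returns to the faulting instruction, which now succeeds against the refreshed TLB entry.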

Again WTF?

Are we chasing hardware errata here?  Or did someone go off and *assume*
that the x86 hardware architecture works a certain way?  Or is there
something way more subtle going on?

See the Intel Developer's Manual Vol 3 Section, 3rd bullet... This is expected behaviour, probably done to make copy-on-write faults faster.

 -- Keir

I guess next step is mailing list archaeology...

Does anyone still have contacts with Jeremy, and if so, could they poke
him perhaps?


Xen-devel mailing list
