[Xen-devel] Illegal PV kernel mfn/pfn translations on PROT_NONE ioremaps
Hi,
On paravirt x86 (both 32- and 64-bit), since cset 13998:
http://xenbits.xensource.com/xen-unstable.hg?rev/13998
we translate all ptes from being mfn-based to pfn-based when the
hardware _PAGE_PRESENT bit is cleared. We do this for PROT_NONE pages,
which appear to the HV to be non-present, but which are special-cased in
the kernel to appear present (a different bit in the pte remains set for
these pages and is caught by the pte_present() tests).
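For reference, a minimal sketch of that special-casing, in the style of
the Linux x86 pgtable headers (the bit values here are illustrative
only; the bit actually reused for PROT_NONE differs between kernel
versions):

    /* Illustrative values -- the real definitions live in the arch
     * pgtable headers. */
    #define _PAGE_PRESENT   0x001
    #define _PAGE_PROTNONE  0x080   /* only meaningful when not present */

    /* A PROT_NONE pte has _PAGE_PRESENT clear, so the HV (and our pte
     * translation) treats it as non-present, but _PAGE_PROTNONE stays
     * set, so the kernel's pte_present() still reports it as present. */
    #define pte_present(x) \
        (pte_val(x) & (_PAGE_PRESENT | _PAGE_PROTNONE))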
Unfortunately, it looks like recent X servers are attempting to do
mprotect(PROT_NONE) and back on regions of ioremap()ed memory. When that
happens, the mfn-to-pfn translation on x86_64 returns end_pfn, because
such an mfn can lie outside the machine-to-phys table:
maddr.h:
    static inline unsigned long mfn_to_pfn(unsigned long mfn)
    {
        ...
        if (unlikely((mfn >> machine_to_phys_order) != 0))
            return end_pfn;
and when we later do mprotect(PROT_READ) on the same ptes, this end_pfn
value is illegal:
maddr.h:
    static inline unsigned long pfn_to_mfn(unsigned long pfn)
    {
        BUG_ON(end_pfn && pfn >= end_pfn);
so we BUG().
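For concreteness, the failing sequence looks roughly like the following
from userspace; mapping a BAR through /dev/mem is only an assumed
example of how the X server ends up with ioremap()ed memory in its
address space:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical illustration of the trigger: map some MMIO the way
     * an X driver might, then toggle its protection.  'bar_phys' and
     * 'len' stand in for a real device BAR. */
    static void poke_mmio(off_t bar_phys, size_t len)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        void *bar = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, bar_phys);

        mprotect(bar, len, PROT_NONE);  /* pte loses _PAGE_PRESENT;
                                           mfn_to_pfn() yields end_pfn */
        mprotect(bar, len, PROT_READ);  /* pfn_to_mfn(end_pfn) -> BUG() */

        munmap(bar, len);
        close(fd);
    }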
It takes both an updated X server and an updated kernel to expose the
bug, but with those in place the result is an instant, completely
repeatable kernel panic when X starts, on both 32- and 64-bit, on some
hardware.
Any suggestions? The obvious fix is to special-case these mfn_to_pfn
translations so that they can be recognised as "untranslated" by a later
pfn_to_mfn, but ideally such special pfns should be easy to identify, so
that we don't completely lose the value of the BUG_ON() above.
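For illustration only, one shape that could take (UNTRANSLATED_PFN_BIT
and the placement of the checks are invented here, not existing code):
tag the pfn on the way out so that a later pfn_to_mfn can hand the
original mfn back, while the BUG_ON() still catches genuinely bad pfns.
A real fix would of course need a tag that survives being stored in the
pte and can never collide with a valid pfn.

    #define UNTRANSLATED_PFN_BIT    (1UL << (BITS_PER_LONG - 1))

    static inline unsigned long mfn_to_pfn(unsigned long mfn)
    {
        if (unlikely((mfn >> machine_to_phys_order) != 0))
            /* Not covered by the M2P table (e.g. I/O memory): keep the
             * mfn, tagged so we can spot it again later. */
            return mfn | UNTRANSLATED_PFN_BIT;
        /* ... normal machine_to_phys lookup ... */
    }

    static inline unsigned long pfn_to_mfn(unsigned long pfn)
    {
        if (unlikely(pfn & UNTRANSLATED_PFN_BIT))
            return pfn & ~UNTRANSLATED_PFN_BIT;  /* original mfn back */
        BUG_ON(end_pfn && pfn >= end_pfn);
        /* ... normal phys_to_machine lookup ... */
    }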
We'd also ideally like the HV to be able to spot such pte contents, as
they won't (indeed, cannot) be translated on migrate. That's not a
problem for dom0, of course, but might be for domUs with PCI
passthrough.
Cheers,
Stephen
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel