Re: [PATCH v4 02/10] x86/mm: Avoid hard-coding PAT in get_page_from_l1e()
On Fri, Dec 16, 2022 at 02:49:33AM +0000, Andrew Cooper wrote:
> On 15/12/2022 11:57 pm, Demi Marie Obenour wrote:
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index 78b1972e4170cacccc9c37c6e64e76e66a7da87f..802073a01c5cf4dc3cf1d58d28ea4d4e9e8149c7 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -959,15 +959,22 @@ get_page_from_l1e(
> >              flip = _PAGE_RW;
> >          }
> >
> > -        switch ( l1f & PAGE_CACHE_ATTRS )
> > +        /* Force cacheable memtypes to UC */
> > +        switch ( pte_flags_to_cacheability(l1f) )
> >          {
> > -        case 0: /* WB */
> > -            flip |= _PAGE_PWT | _PAGE_PCD;
> > +        case X86_MT_UC:
> > +        case X86_MT_UCM:
> > +        case X86_MT_WC:
> > +            /* not cached */
> >              break;
> > -        case _PAGE_PWT: /* WT */
> > -        case _PAGE_PWT | _PAGE_PAT: /* WP */
> > -            flip |= _PAGE_PCD | (l1f & _PAGE_PAT);
> > +        case X86_MT_WB:
> > +        case X86_MT_WT:
> > +        case X86_MT_WP:
> > +            /* cacheable, force to UC */
> > +            flip |= (l1f & PAGE_CACHE_ATTRS) ^ _PAGE_UC;
> >              break;
> > +        default:
> > +            BUG();
>
> This is guest reachable.
Proof that the default case is unreachable is below.  Obviously, feel
free to correct me if the proof is wrong.
pte_flags_to_cacheability() is defined (in this patch) as:

/* Convert from PAT/PCD/PWT embedded in PTE flags to actual cacheability
   value */
static inline unsigned int pte_flags_to_cacheability(unsigned int flags)
{
    unsigned int pat_shift = ((flags & _PAGE_PAT) >> 2) |
                             (flags & (_PAGE_PCD|_PAGE_PWT));

    return 0xFF & (XEN_MSR_PAT >> pat_shift);
}

_PAGE_PAT is 0x80, so (flags & _PAGE_PAT) is either 0x00 or 0x80, and
((flags & _PAGE_PAT) >> 2) is either 0x00 or 0x20.  _PAGE_PCD is 0x10
and _PAGE_PWT is 0x08, so (flags & (_PAGE_PCD|_PAGE_PWT)) is one of
0x00, 0x08, 0x10, or 0x18.  Therefore, pat_shift is one of 0x00, 0x08,
0x10, 0x18, 0x20, 0x28, 0x30, or 0x38.  This means that
(XEN_MSR_PAT >> pat_shift) is well-defined and shifts XEN_MSR_PAT by a
whole number of bytes, so (0xFF & (XEN_MSR_PAT >> pat_shift)) (the
return value of pte_flags_to_cacheability()) is a single byte (entry)
of XEN_MSR_PAT.  Each byte of XEN_MSR_PAT is one of the six
architectural x86 memory types (UC, UC-, WC, WT, WB, and WP), and the
switch has a case for each of those types.  Therefore, the default
case is not reachable.  Q.E.D.
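
For anyone who wants to check this mechanically, here is a minimal
standalone sketch (ordinary userspace C, not Xen code) that enumerates
all eight PAT/PCD/PWT combinations and asserts that each one selects
one of the six architectural memory types.  The _PAGE_* and X86_MT_*
constants mirror Xen's headers, and XEN_MSR_PAT is assumed here to be
the historical 0x050100070406 value (entries 0-7: WB, WT, UC-, UC, WC,
WP, UC, UC):

#include <assert.h>
#include <stdio.h>

/* PTE flag bits, mirroring Xen's definitions. */
#define _PAGE_PWT 0x008u
#define _PAGE_PCD 0x010u
#define _PAGE_PAT 0x080u

/* Architectural x86 memory types. */
#define X86_MT_UC  0x00
#define X86_MT_WC  0x01
#define X86_MT_WT  0x04
#define X86_MT_WP  0x05
#define X86_MT_WB  0x06
#define X86_MT_UCM 0x07

/* Assumed Xen PAT value (entries 0-7: WB, WT, UC-, UC, WC, WP, UC, UC). */
#define XEN_MSR_PAT 0x050100070406ULL

static unsigned int pte_flags_to_cacheability(unsigned int flags)
{
    unsigned int pat_shift = ((flags & _PAGE_PAT) >> 2) |
                             (flags & (_PAGE_PCD|_PAGE_PWT));

    return 0xFF & (XEN_MSR_PAT >> pat_shift);
}

int main(void)
{
    unsigned int i;

    /* Walk all 8 combinations of the PAT, PCD, and PWT bits. */
    for ( i = 0; i < 8; i++ )
    {
        unsigned int flags = ((i & 4) ? _PAGE_PAT : 0) |
                             ((i & 2) ? _PAGE_PCD : 0) |
                             ((i & 1) ? _PAGE_PWT : 0);
        unsigned int mt = pte_flags_to_cacheability(flags);

        switch ( mt )
        {
        case X86_MT_UC:
        case X86_MT_UCM:
        case X86_MT_WC:
        case X86_MT_WB:
        case X86_MT_WT:
        case X86_MT_WP:
            printf("flags %#04x -> memtype %#04x\n", flags, mt);
            break;
        default:
            /* Unreachable if the proof above is right. */
            assert(0);
        }
    }

    return 0;
}

Running it prints the memory type selected by each combination, and
the assert in the default case never fires.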
> But the more I think about it, the more I'm not sure this logic is
> appropriate to begin with. I think it needs deleting for the same
> reasons as the directmap cacheability logic needed deleting in XSA-402.
I would prefer this to be in a separate patch series, not least because
I do not consider myself qualified to write a good commit message for it.
This patch series is purely about removing assumptions about Xen’s PAT,
except for the last patch, which I explicitly marked DO NOT MERGE
because, at a minimum, it breaks PV guest migration from old Xen.
--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab