[PATCH v2 2/3] x86/altp2m: p2m_altp2m_get_or_propagate() should honor present page order
Prior to XSA-304 the only caller merely happened not to make any further
use of the order value that it passes into the function. Even then this
was a latent issue: In the "get" case the function really should hand
back the order the underlying mapping actually uses (or rather the
smaller of the two), such that (going forward) there wouldn't be any
action on unrelated mappings (in particular ones which have already
diverged from the host P2M). Similarly in the "propagate" case only the
smaller of the two orders should actually be used for creating the new
entry, again to avoid altering mappings which have already diverged from
the host P2M.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1832,7 +1832,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
          * altp2m.
          */
         if ( p2m_altp2m_get_or_propagate(p2m, gfn, &mfn, &p2mt,
-                                         &p2ma, page_order) )
+                                         &p2ma, &page_order) )
         {
             /* Entry was copied from host -- retry fault */
             rc = 1;
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2198,10 +2198,11 @@ bool_t p2m_switch_vcpu_altp2m_by_id(stru
  */
 bool p2m_altp2m_get_or_propagate(struct p2m_domain *ap2m, unsigned long gfn_l,
                                  mfn_t *mfn, p2m_type_t *p2mt,
-                                 p2m_access_t *p2ma, unsigned int page_order)
+                                 p2m_access_t *p2ma, unsigned int *page_order)
 {
     p2m_type_t ap2mt;
     p2m_access_t ap2ma;
+    unsigned int cur_order;
     unsigned long mask;
     gfn_t gfn;
     mfn_t amfn;
@@ -2214,7 +2215,10 @@ bool p2m_altp2m_get_or_propagate(struct
      */
     p2m_lock(ap2m);

-    amfn = get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, NULL);
+    amfn = get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, &cur_order);
+
+    if ( cur_order > *page_order )
+        cur_order = *page_order;

     if ( !mfn_eq(amfn, INVALID_MFN) )
     {
@@ -2222,6 +2226,7 @@ bool p2m_altp2m_get_or_propagate(struct
         *mfn = amfn;
         *p2mt = ap2mt;
         *p2ma = ap2ma;
+        *page_order = cur_order;
         return false;
     }

@@ -2229,6 +2234,7 @@ bool p2m_altp2m_get_or_propagate(struct
     if ( mfn_eq(*mfn, INVALID_MFN) )
     {
         p2m_unlock(ap2m);
+        *page_order = cur_order;
         return false;
     }

@@ -2237,11 +2243,11 @@ bool p2m_altp2m_get_or_propagate(struct
      * to the start of the superpage. NB that we repurpose `amfn`
      * here.
      */
-    mask = ~((1UL << page_order) - 1);
+    mask = ~((1UL << cur_order) - 1);
     amfn = _mfn(mfn_x(*mfn) & mask);
     gfn = _gfn(gfn_l & mask);

-    rc = p2m_set_entry(ap2m, gfn, amfn, page_order, *p2mt, *p2ma);
+    rc = p2m_set_entry(ap2m, gfn, amfn, cur_order, *p2mt, *p2ma);
     p2m_unlock(ap2m);

     if ( rc )
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -852,7 +852,7 @@ void p2m_flush_altp2m(struct domain *d);

 /* Alternate p2m paging */
 bool p2m_altp2m_get_or_propagate(struct p2m_domain *ap2m, unsigned long gfn_l,
                                  mfn_t *mfn, p2m_type_t *p2mt,
-                                 p2m_access_t *p2ma, unsigned int page_order);
+                                 p2m_access_t *p2ma, unsigned int *page_order);

 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);