[PATCH RFC v2 3/3] x86/altp2m: p2m_altp2m_propagate_change() should honor present page order
For higher order mappings the comparison against p2m->min_remapped_gfn
needs to take the upper bound of the covered GFN range into account,
not just the base GFN. Otherwise, i.e. when dropping a mapping the
base GFN of which doesn't fall in the remapped range even though part
of the covered range does, the necessary reset of the affected
altp2m(s) would wrongly be skipped.

Note that there's no need to call get_gfn_type_access() ahead of the
check against the remapped range boundaries: None of its outputs are
needed earlier. Local variables are moved into the narrower scope to
demonstrate this.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
I may be entirely wrong and hence that part of the change may also be
wrong, but I'm having trouble seeing why the original
"!mfn_eq(m, INVALID_MFN)" wasn't "mfn_eq(m, INVALID_MFN)". Isn't the
goal there to pre-fill entries that were previously invalid, instead
of undoing prior intentional divergence from the host P2M? (I have
intentionally not reflected this aspect in the description yet; I
can't really write a description of this without understanding what's
going on in case the original code was correct.)

When cur_order is below the passed-in page_order, the p2m_set_entry()
of course doesn't cover the full range. This could be put in a loop,
but locking looks to be a little problematic: If a set of lower order
pages gets re-combined to a large one while already having progressed
into the range, we'd need to carefully back off. Alternatively the
full incoming GFN range could be held locked for the entire loop;
this would likely mean relying on gfn_lock() actually resolving to
p2m_lock(). But perhaps that's not a big problem, considering that
core functions like p2m_get_gfn_type_access() or __put_gfn() assume
so, too (because of not taking the order into account at all)?

For illustration, a toy demonstration of the corrected range check
and a rough sketch of such a loop are appended after the patch.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2532,9 +2532,6 @@ int p2m_altp2m_propagate_change(struct d
                                 p2m_type_t p2mt, p2m_access_t p2ma)
 {
     struct p2m_domain *p2m;
-    p2m_access_t a;
-    p2m_type_t t;
-    mfn_t m;
     unsigned int i;
     unsigned int reset_count = 0;
     unsigned int last_reset_idx = ~0;
@@ -2551,12 +2548,30 @@ int p2m_altp2m_propagate_change(struct d
             continue;
 
         p2m = d->arch.altp2m_p2m[i];
-        m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
+        if ( !mfn_eq(mfn, INVALID_MFN) )
+        {
+            p2m_access_t a;
+            p2m_type_t t;
+            unsigned int cur_order;
+
+            if ( mfn_eq(get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0,
+                                            &cur_order),
+                        INVALID_MFN) )
+            {
+                int rc = p2m_set_entry(p2m, gfn, mfn, min(cur_order, page_order),
+                                       p2mt, p2ma);
+
+                /* Best effort: Don't bail on error. */
+                if ( !ret )
+                    ret = rc;
+            }
+
+            __put_gfn(p2m, gfn_x(gfn));
+        }
 
         /* Check for a dropped page that may impact this altp2m */
-        if ( mfn_eq(mfn, INVALID_MFN) &&
-             gfn_x(gfn) >= p2m->min_remapped_gfn &&
-             gfn_x(gfn) <= p2m->max_remapped_gfn )
+        else if ( gfn_x(gfn) + (1UL << page_order) > p2m->min_remapped_gfn &&
+                  gfn_x(gfn) <= p2m->max_remapped_gfn )
         {
             if ( !reset_count++ )
             {
@@ -2566,8 +2581,6 @@ int p2m_altp2m_propagate_change(struct d
             else
             {
                 /* At least 2 altp2m's impacted, so reset everything */
-                __put_gfn(p2m, gfn_x(gfn));
-
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
                     if ( i == last_reset_idx ||
@@ -2581,16 +2594,6 @@ int p2m_altp2m_propagate_change(struct d
                     break;
                 }
             }
-        }
-        else if ( !mfn_eq(m, INVALID_MFN) )
-        {
-            int rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
-
-            /* Best effort: Don't bail on error. */
-            if ( !ret )
-                ret = rc;
-        }
-
-        __put_gfn(p2m, gfn_x(gfn));
     }
 
     altp2m_list_unlock(d);
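To make the range comparison change concrete: a mapping of 2^order
frames based at some GFN overlaps the remapped range iff its
exclusive end lies above min_remapped_gfn and its base sits at or
below max_remapped_gfn. A toy stand-alone demonstration (plain
unsigned longs standing in for gfn_t; not Xen code):

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Overlap test as per the patch: exclusive end above the range
     * start, base at or below the range end.
     */
    static bool overlaps_remapped_range(unsigned long gfn, unsigned int order,
                                        unsigned long min_gfn,
                                        unsigned long max_gfn)
    {
        return gfn + (1UL << order) > min_gfn && gfn <= max_gfn;
    }

    int main(void)
    {
        /*
         * A 2M mapping (order 9) at GFN 0 covers GFNs 0x000-0x1ff and
         * hence reaches into a remapped range [0x100, 0x1ff] ...
         */
        printf("new check: %d\n", overlaps_remapped_range(0, 9, 0x100, 0x1ff));
        /* ... whereas the old base-GFN-only comparison misses it: */
        printf("old check: %d\n", 0UL >= 0x100UL && 0UL <= 0x1ffUL);
        return 0;
    }

This prints 1 for the new check but 0 for the old one, i.e. the drop
of such a mapping previously went entirely unnoticed by the altp2m.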
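And here's the rough, untested sketch of the loop mentioned above,
as a possible replacement for the body of the new
"if ( !mfn_eq(mfn, INVALID_MFN) )" arm. It assumes the full incoming
range is kept locked for the duration (i.e. the re-combination /
back-off concern is simply ignored), and uses _gfn() / mfn_add() as
elsewhere in the tree:

    unsigned long start = gfn_x(gfn);
    unsigned long end = start + (1UL << page_order);

    while ( start < end )
    {
        p2m_access_t a;
        p2m_type_t t;
        unsigned int cur_order;

        if ( mfn_eq(get_gfn_type_access(p2m, start, &t, &a, 0, &cur_order),
                    INVALID_MFN) )
        {
            int rc = p2m_set_entry(p2m, _gfn(start),
                                   mfn_add(mfn, start - gfn_x(gfn)),
                                   min(cur_order, page_order), p2mt, p2ma);

            /* Best effort: Don't bail on error. */
            if ( !ret )
                ret = rc;
        }

        __put_gfn(p2m, start);

        /*
         * Advance by whatever order the altp2m holds here, capped to
         * the incoming order.
         */
        start += 1UL << min(cur_order, page_order);
    }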