[PATCH v2] x86/PoD: move increment of entry count
When not holding the PoD lock across the entire region covering P2M
update and stats update, the entry count should indicate too large a
value in preference to a too small one, to avoid functions bailing early
when they find the count is zero. Hence increments should happen ahead
of P2M updates, while decrements should happen only after. Deal with the
one place where this hasn't been the case yet.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v2: Add comments.
---
While it might be possible to hold the PoD lock over the entire
operation, I didn't want to chance introducing a lock order violation on
a perhaps rarely taken code path.

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1342,19 +1342,22 @@ mark_populate_on_demand(struct domain *d
         }
     }
 
+    /*
+     * Without holding the PoD lock across the entire operation, bump the
+     * entry count up front assuming success of p2m_set_entry(), undoing the
+     * bump as necessary upon failure. Bumping only upon success would risk
+     * code elsewhere observing entry count being zero despite there actually
+     * still being PoD entries.
+     */
+    pod_lock(p2m);
+    p2m->pod.entry_count += (1UL << order) - pod_count;
+    pod_unlock(p2m);
+
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_populate_on_demand,
                        p2m->default_access);
     if ( rc == 0 )
-    {
-        pod_lock(p2m);
-        p2m->pod.entry_count += 1UL << order;
-        p2m->pod.entry_count -= pod_count;
-        BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
-
         ioreq_request_mapcache_invalidate(d);
-    }
     else if ( order )
     {
         /*
@@ -1366,6 +1369,14 @@ mark_populate_on_demand(struct domain *d
                d, gfn_l, order, rc);
         domain_crash(d);
     }
+    else if ( !pod_count )
+    {
+        /* Undo earlier increment; see comment above. */
+        pod_lock(p2m);
+        BUG_ON(!p2m->pod.entry_count);
+        --p2m->pod.entry_count;
+        pod_unlock(p2m);
+    }
 
 out:
     gfn_unlock(p2m, gfn, order);
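
To illustrate the ordering rule the description relies on outside of Xen, here
is a minimal, self-contained C sketch (the names entry_count, count_lock,
fallible_update(), add_entries() and remove_entries() are illustrative
stand-ins, not Xen interfaces): increments of the shared count are made before
the fallible update and undone only on failure, while decrements are made only
after success, so a concurrent reader can at worst observe a count that is too
large, never a spurious zero.

/* Standalone illustration; not Xen code.  Build: gcc -pthread sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static long entry_count;                /* stand-in for p2m->pod.entry_count */

/* Stand-in for the fallible P2M update (p2m_set_entry() in the patch). */
static bool fallible_update(bool succeed)
{
    return succeed;
}

/*
 * Add "pages" entries.  The count is bumped *before* the update, so readers
 * only ever see a value that is too large, never too small.
 */
static int add_entries(long pages, bool succeed)
{
    pthread_mutex_lock(&count_lock);
    entry_count += pages;
    pthread_mutex_unlock(&count_lock);

    if ( !fallible_update(succeed) )
    {
        /* Undo the speculative bump on failure. */
        pthread_mutex_lock(&count_lock);
        entry_count -= pages;
        pthread_mutex_unlock(&count_lock);
        return -1;
    }

    return 0;
}

/* Remove "pages" entries: decrement only *after* the update succeeded. */
static int remove_entries(long pages, bool succeed)
{
    if ( !fallible_update(succeed) )
        return -1;

    pthread_mutex_lock(&count_lock);
    entry_count -= pages;
    pthread_mutex_unlock(&count_lock);

    return 0;
}

int main(void)
{
    add_entries(4, true);
    add_entries(2, false);       /* fails; the bump is undone */
    remove_entries(4, true);
    printf("entry_count = %ld\n", entry_count);   /* prints 0 */
    return 0;
}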