[PATCH v4] x86/PoD: tie together P2M update and increment of entry count
When not holding the PoD lock across the entire region covering both the
P2M update and the stats update, the entry count - if it is to be
incorrect at all - should indicate too large a value in preference to
too small a one, to avoid functions bailing early when they find the
count is zero. However, instead of moving the increment ahead (and
adjusting it back upon failure), extend the PoD-locked region.
Fixes: 99af3cd40b6e ("x86/mm: Rework locking in the PoD layer")
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v4: Shrink locked region a little again, where possible.
v3: Extend locked region instead. Add Fixes: tag.
v2: Add comments.
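For readers less familiar with the PoD code, below is a minimal userspace
sketch (not Xen code) of the invariant the change establishes: the P2M
update and the entry count increment must sit in one locked region, or
another party taking the same lock can observe a zero count while PoD
entries actually exist. It uses a pthread mutex in place of pod_lock(),
and all names (fake_p2m, set_entry, mark_pod, reader) are hypothetical
stand-ins, not Xen API.

/*
 * Minimal sketch of the locking invariant; hypothetical names throughout.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_p2m {
    pthread_mutex_t pod_lock;   /* stands in for pod_lock()/pod_unlock() */
    long entry_count;           /* stands in for p2m->pod.entry_count */
    bool entry_present;         /* stands in for the actual P2M entry */
};

/* Stand-in for p2m_set_entry(): creates the PoD entry. */
static int set_entry(struct fake_p2m *p2m)
{
    p2m->entry_present = true;
    return 0;
}

/*
 * Post-patch shape: entry creation and count update share one locked
 * region.  (The pre-patch shape took the lock only around the count
 * update, leaving a window where entry_present was already true but
 * entry_count still read zero.)
 */
static void mark_pod(struct fake_p2m *p2m)
{
    pthread_mutex_lock(&p2m->pod_lock);

    if ( set_entry(p2m) == 0 )
        p2m->entry_count += 1;

    pthread_mutex_unlock(&p2m->pod_lock);
}

/* A reader that bails early on a zero count, as the description above
 * mentions. */
static void reader(struct fake_p2m *p2m)
{
    pthread_mutex_lock(&p2m->pod_lock);

    if ( p2m->entry_count == 0 && p2m->entry_present )
        puts("BUG: PoD entries exist but the count reads zero");

    pthread_mutex_unlock(&p2m->pod_lock);
}

int main(void)
{
    struct fake_p2m p2m = { PTHREAD_MUTEX_INITIALIZER, 0, false };

    mark_pod(&p2m);
    reader(&p2m);   /* can no longer observe the inconsistent state */
    printf("entry_count = %ld\n", p2m.entry_count);

    return 0;
}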
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1348,12 +1348,19 @@ mark_populate_on_demand(struct domain *d
         }
     }
 
+    /*
+     * P2M update and stats increment need to collectively be under PoD lock,
+     * to prevent code elsewhere observing PoD entry count being zero despite
+     * there actually still being PoD entries (created by the p2m_set_entry()
+     * invocation below).
+     */
+    pod_lock(p2m);
+
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
                        p2m_populate_on_demand, p2m->default_access);
     if ( rc == 0 )
     {
-        pod_lock(p2m);
         p2m->pod.entry_count += 1UL << order;
         p2m->pod.entry_count -= pod_count;
         BUG_ON(p2m->pod.entry_count < 0);
@@ -1363,6 +1370,8 @@ mark_populate_on_demand(struct domain *d
     }
     else if ( order )
     {
+        pod_unlock(p2m);
+
         /*
          * If this failed, we can't tell how much of the range was changed.
          * Best to crash the domain.
@@ -1372,6 +1381,8 @@ mark_populate_on_demand(struct domain *d
                        d, gfn_l, order, rc);
         domain_crash(d);
     }
+    else
+        pod_unlock(p2m);
 
  out:
     gfn_unlock(p2m, gfn, order);