[Xen-changelog] [xen stable-4.10] x86/shadow: don't enable shadow mode with too small a shadow allocation
commit c119267f25c5513e35b8c103ab6923c1d1075c68
Author: Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Feb 1 11:45:01 2019 +0100
Commit: Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Feb 1 11:45:01 2019 +0100
x86/shadow: don't enable shadow mode with too small a shadow allocation
We've had more than one report of host crashes after failed migration,
and in at least one case the evidence pointed to a shadow allocation
pool that had been shrunk too far. Instead of just checking whether the
pool is empty, check whether it is smaller than what
shadow_set_allocation() would minimally bump it to if it were invoked in
the first place.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Acked-by: Tim Deegan <tim@xxxxxxx>
master commit: 2634b997afabfdc5a972e07e536dfbc6febb4385
master date: 2018-11-30 12:10:39 +0100
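The check the patch introduces can be modeled in isolation. Below is a hypothetical, simplified C sketch (the struct fields and the per-vcpu constant are stand-ins; the real Xen `shadow_min_acceptable_pages()` walks the vcpu list and the fields live under `d->arch.paging.shadow`):

```c
#include <assert.h>

/* Hypothetical, flattened stand-in for Xen's domain bookkeeping. */
struct domain {
    unsigned int vcpu_count;   /* number of vCPUs */
    unsigned int tot_pages;    /* guest RAM, in 4k pages */
    unsigned int total_pages;  /* current shadow pool size */
    unsigned int p2m_pages;    /* shadow pages already used for the p2m */
};

/* ~128 pages per vcpu, plus one extra vcpu's worth so we never
 * return zero -- simplified from shadow_min_acceptable_pages(). */
static unsigned int shadow_min_acceptable_pages(const struct domain *d)
{
    return 128 * (d->vcpu_count + 1);
}

/* Mirrors the new sh_min_allocation(): the minimum acceptable, plus
 * one page per MB of guest RAM (tot_pages / 256) for the p2m table. */
static unsigned int sh_min_allocation(const struct domain *d)
{
    return shadow_min_acceptable_pages(d) + d->tot_pages / 256;
}

/* The fixed condition in shadow_enable(): re-initialize the pool not
 * only when it is empty (the old "old_pages == 0" test), but whenever
 * it sits below the computed lower bound. */
static int pool_too_small(const struct domain *d)
{
    return d->total_pages < sh_min_allocation(d) + d->p2m_pages;
}
```

With a 1-vcpu, 1 GiB domain (262144 pages), the lower bound works out to 128*2 + 262144/256 = 1280 pages; a pool shrunk below that now triggers reallocation instead of letting shadow mode start underprovisioned.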
---
xen/arch/x86/mm/shadow/common.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index e3bc1f1c47..48f03b3beb 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1148,7 +1148,7 @@ const u8 sh_type_to_size[] = {
* allow for more than ninety allocated pages per vcpu. We round that
* up to 128 pages, or half a megabyte per vcpu, and add 1 more vcpu's
* worth to make sure we never return zero. */
-static unsigned int shadow_min_acceptable_pages(struct domain *d)
+static unsigned int shadow_min_acceptable_pages(const struct domain *d)
{
u32 vcpu_count = 1;
struct vcpu *v;
@@ -1545,6 +1545,15 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
paging_unlock(d);
}
+static unsigned int sh_min_allocation(const struct domain *d)
+{
+ /*
+ * Don't allocate less than the minimum acceptable, plus one page per
+ * megabyte of RAM (for the p2m table).
+ */
+ return shadow_min_acceptable_pages(d) + (d->tot_pages / 256);
+}
+
int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
{
struct page_info *sp;
@@ -1560,9 +1569,7 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
else
pages -= d->arch.paging.shadow.p2m_pages;
- /* Don't allocate less than the minimum acceptable, plus one page per
- * megabyte of RAM (for the p2m table) */
- lower_bound = shadow_min_acceptable_pages(d) + (d->tot_pages / 256);
+ lower_bound = sh_min_allocation(d);
if ( pages < lower_bound )
pages = lower_bound;
}
@@ -3123,7 +3130,7 @@ int shadow_enable(struct domain *d, u32 mode)
/* Init the shadow memory allocation if the user hasn't done so */
old_pages = d->arch.paging.shadow.total_pages;
- if ( old_pages == 0 )
+ if ( old_pages < sh_min_allocation(d) + d->arch.paging.shadow.p2m_pages )
{
paging_lock(d);
rv = shadow_set_allocation(d, 1024, NULL); /* Use at least 4MB */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.10
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog