[PATCH] x86/xen: Tolerate nested XEN_LAZY_MMU entering/leaving



With support for nested lazy MMU sections, arch_enter_lazy_mmu_mode()
can be called twice without an intervening call of
arch_leave_lazy_mmu_mode(), as the lazy_mmu_*() helpers don't disable
preemption when checking for nested lazy MMU sections.

This is a problem when running as a Xen PV guest, as
xen_enter_lazy_mmu() and xen_leave_lazy_mmu() don't tolerate this
case.

Fix this in xen_enter_lazy_mmu() and xen_leave_lazy_mmu() directly, in
order not to penalize all other lazy MMU mode users.

Fixes: 291b3abed657 ("x86/xen: use lazy_mmu_state when context-switching")
Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
---
 arch/x86/xen/mmu_pv.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index c80d0058efd1..3eee5f84f8a7 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2145,7 +2145,10 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 static void xen_enter_lazy_mmu(void)
 {
-       enter_lazy(XEN_LAZY_MMU);
+       preempt_disable();
+       if (xen_get_lazy_mode() != XEN_LAZY_MMU)
+               enter_lazy(XEN_LAZY_MMU);
+       preempt_enable();
 }
 
 static void xen_flush_lazy_mmu(void)
@@ -2182,7 +2185,8 @@ static void xen_leave_lazy_mmu(void)
 {
        preempt_disable();
        xen_mc_flush();
-       leave_lazy(XEN_LAZY_MMU);
+       if (xen_get_lazy_mode() != XEN_LAZY_NONE)
+               leave_lazy(XEN_LAZY_MMU);
        preempt_enable();
 }
 
-- 
2.54.0