Re: [PATCH] x86/xen: Tolerate nested XEN_LAZY_MMU entering/leaving
On 08/05/2026 16:39, Juergen Gross wrote:
> With the support of nested lazy mmu sections it can happen that
> arch_enter_lazy_mmu_mode() is being called twice without a call of
> arch_leave_lazy_mmu_mode() in between, as the lazy_mmu_*() helpers
> are not disabling preemption when checking for nested lazy mmu
> sections.
I think this is a correct description of the issue, i.e. we can
potentially have arch_enter_lazy_mmu_mode() called twice *sequentially*,
without an intervening leave. For that reason I don't think that
disabling preemption inside arch_enter_lazy_mmu_mode() is enough: the
problem is preemption occurring anywhere inside lazy_mmu_mode_enable(),
not just inside arch_enter_lazy_mmu_mode() itself.
Preemption shouldn't matter if commit 291b3abed657 is reverted. AFAICT
this is the only easy fix.
- Kevin
> This is a problem when running as a Xen PV guest, as
> xen_enter_lazy_mmu() and xen_leave_lazy_mmu() don't tolerate this
> case.
>
> Fix that in xen_enter_lazy_mmu() and xen_leave_lazy_mmu() in order
> not to hurt all other lazy mmu mode users.
>
> Fixes: 291b3abed657 ("x86/xen: use lazy_mmu_state when context-switching")
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> ---
> arch/x86/xen/mmu_pv.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index c80d0058efd1..3eee5f84f8a7 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2145,7 +2145,10 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
>
> static void xen_enter_lazy_mmu(void)
> {
> - enter_lazy(XEN_LAZY_MMU);
> + preempt_disable();
> + if (xen_get_lazy_mode() != XEN_LAZY_MMU)
> + enter_lazy(XEN_LAZY_MMU);
> + preempt_enable();
> }
>
> static void xen_flush_lazy_mmu(void)
> @@ -2182,7 +2185,8 @@ static void xen_leave_lazy_mmu(void)
> {
> preempt_disable();
> xen_mc_flush();
> - leave_lazy(XEN_LAZY_MMU);
> + if (xen_get_lazy_mode() != XEN_LAZY_NONE)
> + leave_lazy(XEN_LAZY_MMU);
> preempt_enable();
> }
>