Re: [Xen-devel] [PATCH 3 of 6] x86/mm/shadow: fix potential p2m/paging deadlock when emulating page table writes
Nack, at least for now; we spent a fair amount of effort taking all this
emulation code out from under the shadow lock; serializing it under the
p2m lock would be unfortunate. The other patches are less worrying,
since they wrap a shadow_lock() in a p2m_lock() but I hope they can all
be avoided.
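
For the benefit of the thread: the failure mode the patch is defending
against is a plain lock-order inversion. A toy standalone illustration
with pthreads -- nothing Xen-specific, the mutex names are only
stand-ins for the real p2m and paging locks:

    /* Toy illustration only: "p2m" and "paging" are plain pthread
     * mutexes standing in for the real Xen locks.
     * Build: gcc -pthread inversion.c && ./a.out (usually hangs). */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t p2m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t paging = PTHREAD_MUTEX_INITIALIZER;

    /* Emulation path without a fix: take the paging lock first, then
     * do a p2m lookup from under it. */
    static void *emulate(void *arg)
    {
        pthread_mutex_lock(&paging);
        usleep(10000);              /* widen the race window */
        pthread_mutex_lock(&p2m);   /* order: paging -> p2m */
        puts("emulate: got both locks");
        pthread_mutex_unlock(&p2m);
        pthread_mutex_unlock(&paging);
        return NULL;
    }

    /* A p2m update that ends up wanting the paging lock: the
     * opposite order, p2m -> paging. */
    static void *p2m_update(void *arg)
    {
        pthread_mutex_lock(&p2m);
        usleep(10000);
        pthread_mutex_lock(&paging);
        puts("p2m_update: got both locks");
        pthread_mutex_unlock(&paging);
        pthread_mutex_unlock(&p2m);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, emulate, NULL);
        pthread_create(&b, NULL, p2m_update, NULL);
        pthread_join(a, NULL);  /* each thread holds one lock and
                                 * waits for the other: deadlock */
        pthread_join(b, NULL);
        return 0;
    }

Taking the p2m lock before the paging lock everywhere, as this patch
does, removes one of the two orders; so does never taking the p2m lock
under the paging lock at all, which is what unlocked lookups would buy
us.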
The existing interlock between the shadow code and the p2m code prevents
any p2m modifications from happening when the shadow lock is held, so I
hope I can solve this by switching to unlocked lookups instead. I'm
building a test kernel now to tell me exactly which lookups are to
blame.
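
Roughly what I mean by that -- a sketch only, against the current p2m
API, and whether the unlocked query is actually safe at each call site
is exactly what the test kernel should tell me:

    #include <asm/p2m.h>

    /* Sketch, not a tested patch: where the emulation path currently
     * does a locked lookup, use the unlocked query variant instead,
     * relying on the shadow/p2m interlock to keep the entry stable
     * while the paging lock is held.  sh_gfn_to_mfn_unlocked() is a
     * made-up name for illustration. */
    static mfn_t sh_gfn_to_mfn_unlocked(struct domain *d,
                                        unsigned long gfn,
                                        p2m_type_t *t)
    {
        /* No p2m lock taken, and no matching put_gfn() needed. */
        return get_gfn_query_unlocked(d, gfn, t);
    }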
If I don't get this done today I'll look into it tomorrow.
Cheers,
Tim.
At 12:22 -0400 on 13 Apr (1334319741), Andres Lagar-Cavilla wrote:
> xen/arch/x86/mm/shadow/multi.c | 25 +++++++++++++++++++++++++
> 1 files changed, 25 insertions(+), 0 deletions(-)
>
>
> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>
> diff -r f8fd4a4239a8 -r 964c6cbad926 xen/arch/x86/mm/shadow/multi.c
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -5033,9 +5033,21 @@ sh_x86_emulate_write(struct vcpu *v, uns
> if ( (vaddr & (bytes - 1)) && !is_hvm_vcpu(v) )
> return X86EMUL_UNHANDLEABLE;
>
> + /* To prevent a shadow mode deadlock, we need to hold the p2m lock from
> + * here onwards. emulate_unmap_dest may need to verify the pte being
> + * written, and thus call get_gfn on the gfn contained in the payload.
> + * p2m_lock is recursive, so all is well in that regard. Further,
> + * holding the p2m lock ensures the page table gfn being written to
> + * won't go away (although that could be achieved with a page
> + * reference, as done elsewhere).
> + */
> + p2m_lock(p2m_get_hostp2m(v->domain));
> addr = emulate_map_dest(v, vaddr, bytes, sh_ctxt);
> if ( emulate_map_dest_failed(addr) )
> + {
> + p2m_unlock(p2m_get_hostp2m(v->domain));
> return (long)addr;
> + }
>
> paging_lock(v->domain);
> memcpy(addr, src, bytes);
> @@ -5059,6 +5071,7 @@ sh_x86_emulate_write(struct vcpu *v, uns
> emulate_unmap_dest(v, addr, bytes, sh_ctxt);
> shadow_audit_tables(v);
> paging_unlock(v->domain);
> + p2m_unlock(p2m_get_hostp2m(v->domain));
> return X86EMUL_OKAY;
> }
>
> @@ -5075,9 +5088,14 @@ sh_x86_emulate_cmpxchg(struct vcpu *v, u
> if ( (vaddr & (bytes - 1)) && !is_hvm_vcpu(v) )
> return X86EMUL_UNHANDLEABLE;
>
> + /* see comment in sh_x86_emulate_write. */
> + p2m_lock(p2m_get_hostp2m(v->domain));
> addr = emulate_map_dest(v, vaddr, bytes, sh_ctxt);
> if ( emulate_map_dest_failed(addr) )
> + {
> + p2m_unlock(p2m_get_hostp2m(v->domain));
> return (long)addr;
> + }
>
> paging_lock(v->domain);
> switch ( bytes )
> @@ -5101,6 +5119,7 @@ sh_x86_emulate_cmpxchg(struct vcpu *v, u
> emulate_unmap_dest(v, addr, bytes, sh_ctxt);
> shadow_audit_tables(v);
> paging_unlock(v->domain);
> + p2m_unlock(p2m_get_hostp2m(v->domain));
> return rv;
> }
>
> @@ -5119,9 +5138,14 @@ sh_x86_emulate_cmpxchg8b(struct vcpu *v,
> if ( (vaddr & 7) && !is_hvm_vcpu(v) )
> return X86EMUL_UNHANDLEABLE;
>
> + /* see comment in sh_x86_emulate_write. */
> + p2m_lock(p2m_get_hostp2m(v->domain));
> addr = emulate_map_dest(v, vaddr, 8, sh_ctxt);
> if ( emulate_map_dest_failed(addr) )
> + {
> + p2m_unlock(p2m_get_hostp2m(v->domain));
> return (long)addr;
> + }
>
> old = (((u64) old_hi) << 32) | (u64) old_lo;
> new = (((u64) new_hi) << 32) | (u64) new_lo;
> @@ -5135,6 +5159,7 @@ sh_x86_emulate_cmpxchg8b(struct vcpu *v,
> emulate_unmap_dest(v, addr, 8, sh_ctxt);
> shadow_audit_tables(v);
> paging_unlock(v->domain);
> + p2m_unlock(p2m_get_hostp2m(v->domain));
> return rv;
> }
> #endif
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel