[Xen-devel] [PATCH] x86/mm: Take the p2m lock even in shadow mode.
The reworking of p2m lookups to use get_gfn()/put_gfn() left the shadow
code not taking the p2m lock, even in cases where the p2m would be
updated (i.e. PoD).

In many cases, shadow code doesn't need the exclusion that
get_gfn()/put_gfn() provides, as it has its own interlocks against p2m
updates, but this is taking things too far, and can lead to crashes in
the PoD code.

Now that most shadow-code p2m lookups are done with explicitly unlocked
accessors, or with the get_page_from_gfn() accessor, which is often
lock-free, we can just turn this locking on.

The remaining locked lookups are in sh_page_fault() (in a path that's
almost always already serializing on the paging lock), and in
emulate_map_dest() (which can probably be updated to use
get_page_from_gfn()).  They're not addressed here but may be in a
follow-up patch.

Signed-off-by: Tim Deegan <tim@xxxxxxx>
---
 xen/arch/x86/mm/p2m.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index de1dd82..2db73c9 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -218,8 +218,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
         return _mfn(gfn);
     }
 
-    /* For now only perform locking on hap domains */
-    if ( locked && (hap_enabled(p2m->domain)) )
+    if ( locked )
         /* Grab the lock here, don't release until put_gfn */
         gfn_lock(p2m, gfn, 0);
 
@@ -248,8 +247,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
 
 void __put_gfn(struct p2m_domain *p2m, unsigned long gfn)
 {
-    if ( !p2m || !paging_mode_translate(p2m->domain)
-         || !hap_enabled(p2m->domain) )
+    if ( !p2m || !paging_mode_translate(p2m->domain) )
         /* Nothing to do in this case */
         return;
 
-- 
1.7.10.4
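[Editor's sketch] For readers less familiar with the locking discipline being
enabled here, the code below is a small, self-contained toy model (plain C
with pthreads, not Xen code; all names, types and signatures are simplified
stand-ins) of the pattern the patch now applies to shadow domains as well as
HAP ones: a locked get_gfn() takes the per-gfn p2m lock and holds it until
the caller's matching put_gfn() releases it, so any concurrent p2m update
(e.g. by PoD) is excluded for the whole time the caller uses the mfn.

    /*
     * Toy model of the get_gfn()/put_gfn() locking discipline -- NOT Xen
     * code.  Names, types and signatures are simplified stand-ins used
     * purely to illustrate "lock on lookup, unlock on put".
     * Build with: gcc -pthread toy_p2m.c
     */
    #include <inttypes.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t gfn_t;   /* guest frame number (stand-in) */
    typedef uint64_t mfn_t;   /* machine frame number (stand-in) */

    struct toy_p2m {
        pthread_mutex_t lock; /* stands in for the per-gfn p2m lock */
        mfn_t table[16];      /* trivial gfn -> mfn map */
    };

    /* Locked lookup: the lock is taken here and only dropped by toy_put_gfn(). */
    static mfn_t toy_get_gfn(struct toy_p2m *p2m, gfn_t gfn)
    {
        pthread_mutex_lock(&p2m->lock);
        return p2m->table[gfn % 16];
    }

    /* Matching release: the caller must pair every get with a put. */
    static void toy_put_gfn(struct toy_p2m *p2m, gfn_t gfn)
    {
        (void)gfn;
        pthread_mutex_unlock(&p2m->lock);
    }

    int main(void)
    {
        struct toy_p2m p2m = { .lock = PTHREAD_MUTEX_INITIALIZER };
        p2m.table[5] = 0x1234;

        mfn_t mfn = toy_get_gfn(&p2m, 5);   /* "p2m lock" held from here... */
        printf("gfn 5 -> mfn %#" PRIx64 "\n", mfn);
        toy_put_gfn(&p2m, 5);               /* ...until here */
        return 0;
    }

The commit message's point is that, with most shadow-code lookups now using
explicitly unlocked accessors or get_page_from_gfn(), the remaining locked
get_gfn()/put_gfn() pairs can safely follow this discipline without the
special-case "only lock on HAP domains" test that the patch removes.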