Re: [Xen-devel] [PATCH 1 of 5] Make p2m lookups fully synchronized wrt modifications
At 14:56 -0500 on 01 Feb (1328108165), Andres Lagar-Cavilla wrote:
> xen/arch/x86/mm/hap/hap.c | 6 +++++
> xen/arch/x86/mm/mm-locks.h | 40 ++++++++++++++++++++++++--------------
> xen/arch/x86/mm/p2m.c | 21 ++++++++++++++++++-
> xen/include/asm-x86/p2m.h | 47 ++++++++++++++++++++++++++++++-----------------
> 4 files changed, 79 insertions(+), 35 deletions(-)
>
>
> We achieve this by locking/unlocking the global p2m_lock in get/put_gfn.
>
> The lock is always taken recursively, as there are many paths that
> do get_gfn and later make another attempt at grabbing the p2m_lock.
>
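(For anyone skimming the series, the shape being described is roughly the
following -- a hedged sketch, not the patch code itself; the helper
p2m_lookup_entry() is a hypothetical stand-in for the real table walk:)

    /* Sketch: get_gfn() takes the recursive p2m lock and holds it until
     * the matching put_gfn(), so the caller sees a stable entry. */
    mfn_t get_gfn(struct domain *d, unsigned long gfn, p2m_type_t *t)
    {
        struct p2m_domain *p2m = p2m_get_hostp2m(d);

        p2m_lock(p2m);          /* recursive, so a nested get_gfn is safe */
        return p2m_lookup_entry(p2m, gfn, t);   /* hypothetical helper */
    }

    void put_gfn(struct domain *d, unsigned long gfn)
    {
        p2m_unlock(p2m_get_hostp2m(d));   /* drop one recursion level */
    }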
> The lock is not taken for shadow lookups, as the testing needed to rule
> out the possibility of locking inversions and deadlock has not yet been
> carried out for shadow paging. This is tolerable as long as shadows do
> not gain support for paging or sharing.
Are you aware of any inversions or are you just suggesting there might
be some left?
> HAP (EPT) lookups and all modifications do take the lock.
>
> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>
> diff -r 9a55109e4d7e -r 223512f9fb5b xen/arch/x86/mm/hap/hap.c
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -786,7 +786,12 @@ hap_paging_get_mode(struct vcpu *v)
> static void hap_update_paging_modes(struct vcpu *v)
> {
> struct domain *d = v->domain;
> + unsigned long cr3_gfn = v->arch.hvm_vcpu.guest_cr[3];
> + p2m_type_t t;
>
> + /* We hold onto the cr3 as it may be modified later, and
> + * we need to respect lock ordering */
> + (void)get_gfn(d, cr3_gfn, &t);
Don't you need to check for errors?
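(e.g. something along these lines, if bailing out is the right response
here -- the mfn_valid()/p2m_is_ram() test is purely for illustration:)

    mfn_t mfn = get_gfn(d, cr3_gfn, &t);
    if ( !mfn_valid(mfn) || !p2m_is_ram(t) )
    {
        put_gfn(d, cr3_gfn);
        return;     /* or whatever recovery is appropriate here */
    }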
> paging_lock(d);
>
> v->arch.paging.mode = hap_paging_get_mode(v);
> @@ -803,6 +808,7 @@ static void hap_update_paging_modes(stru
> hap_update_cr3(v, 0);
>
> paging_unlock(d);
> + put_gfn(d, cr3_gfn);
> }
>
> #if CONFIG_PAGING_LEVELS == 3
> diff -r 9a55109e4d7e -r 223512f9fb5b xen/arch/x86/mm/mm-locks.h
> --- a/xen/arch/x86/mm/mm-locks.h
> +++ b/xen/arch/x86/mm/mm-locks.h
> @@ -144,6 +144,31 @@ static inline void mm_enforce_order_unlo
> * *
> ************************************************************************/
>
> +declare_mm_lock(nestedp2m)
> +#define nestedp2m_lock(d) mm_lock(nestedp2m, &(d)->arch.nested_p2m_lock)
> +#define nestedp2m_unlock(d) mm_unlock(&(d)->arch.nested_p2m_lock)
> +
> +/* P2M lock (per-p2m-table)
> + *
> + * This protects all queries and updates to the p2m table.
> + *
> + * A note about ordering:
> + * The order established here is enforced on all mutations of a p2m.
> + * For lookups, the order established here is enforced only for hap
> + * domains (1. shadow domains present a few nasty inversions;
> + * 2. shadow domains do not support paging and sharing,
> + * the main sources of dynamic p2m mutations)
> + *
> + * The lock is recursive as it is common for a code path to look up a gfn
> + * and later mutate it.
> + */
> +
> +declare_mm_lock(p2m)
> +#define p2m_lock(p) mm_lock_recursive(p2m, &(p)->lock)
> +#define p2m_lock_recursive(p) mm_lock_recursive(p2m, &(p)->lock)
> +#define p2m_unlock(p) mm_unlock(&(p)->lock)
> +#define p2m_locked_by_me(p) mm_locked_by_me(&(p)->lock)
> +
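The recursion matters for paths that hold the lock via get_gfn() and then
re-take it inside a mutation; schematically (demote_gfn() is invented for
illustration, and this assumes mutations such as p2m_change_type() take
the lock internally, as the comment above says):

    /* Hypothetical caller: why the p2m lock must be recursive. */
    static void demote_gfn(struct domain *d, unsigned long gfn)
    {
        p2m_type_t t;
        mfn_t mfn = get_gfn(d, gfn, &t);    /* p2m lock taken, depth 1 */

        if ( mfn_valid(mfn) && t == p2m_ram_rw )
            /* the mutation takes the p2m lock again (depth 2) */
            p2m_change_type(d, gfn, p2m_ram_rw, p2m_ram_ro);

        put_gfn(d, gfn);                    /* back to depth 0 */
    }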
> /* Sharing per page lock
> *
> * This is an external lock, not represented by an mm_lock_t. The memory
Since you've just reversed the locking order between this page-sharing
lock and the p2m lock, you need to update the comment that describes
it. Also, presumably, the code that it describes?
Cheers,
Tim.