Re: [Xen-devel] [PATCH] x86/mm: Add debug code to detect illegal page_lock and put_page_type ordering
>>> On 24.01.18 at 13:48, <george.dunlap@xxxxxxxxxx> wrote:
> The fix for XSA-242 depends on the same CPU never calling
> _put_page_type() while holding a page_lock() for that page. Since
> there is no locking discipline between pages, the current code also
> assumes that we will never lock two pages on the same CPU.
>
> Add a check to debug builds to verify that both of these are true.
>
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
Generally fine, but way too much #ifdef-ary for my taste (see below).
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1813,10 +1813,25 @@ static int free_l4_table(struct page_info *page)
> return rc;
> }
>
> +#ifndef NDEBUG
> +/*
> + * Check to make sure that we never nest page_lock() calls on a single
> + * cpu (which may deadlock if two cpus attempt to lock the same pages
> + * in a different order), and that we never call _put_page_type() on a
> + * page while we hold its page_lock() (which would deadlock after
> + * XSA-242).
> + */
> +static DEFINE_PER_CPU(struct page_info *, current_page_lock);
current_locked_page or current_page_locked? The way you have
it, the variable name rather suggests some kind of lock.
> +#endif
> +
> int page_lock(struct page_info *page)
> {
> unsigned long x, nx;
>
> +#ifndef NDEBUG
> + ASSERT(this_cpu(current_page_lock) == NULL);
> +#endif
ASSERT(!get_current_locked_page());
> @@ -1827,6 +1842,10 @@ int page_lock(struct page_info *page)
> return 0;
> } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );
>
> +#ifndef NDEBUG
> + this_cpu(current_page_lock) = page;
> +#endif
set_current_locked_page(page);
... with suitable wrappers (empty in the non-debug case) defined.
Jan
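For readers following the thread, the wrappers Jan proposes might look roughly like the sketch below. This is a minimal, single-CPU illustration in plain C: Xen's real per-CPU machinery (DEFINE_PER_CPU / this_cpu) is replaced with an ordinary static variable, the cmpxchg lock loop is elided, and the `get_current_locked_page()` / `set_current_locked_page()` names are taken from Jan's suggestion rather than from actual Xen code.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for Xen's struct page_info. */
struct page_info { unsigned long type_info; };

#ifndef NDEBUG
/* Models DEFINE_PER_CPU(struct page_info *, ...) for a single CPU. */
static struct page_info *current_locked_page;

static struct page_info *get_current_locked_page(void)
{
    return current_locked_page;
}

static void set_current_locked_page(struct page_info *page)
{
    current_locked_page = page;
}
#else
/* Empty in non-debug builds, so callers need no #ifdef at each site. */
static inline struct page_info *get_current_locked_page(void)
{
    return NULL;
}

static inline void set_current_locked_page(struct page_info *page)
{
    (void)page;
}
#endif

int page_lock(struct page_info *page)
{
    /* Nesting page_lock() on one CPU risks ABBA deadlock across CPUs. */
    assert(get_current_locked_page() == NULL);
    /* ... cmpxchg loop taking the lock elided ... */
    set_current_locked_page(page);
    return 1;
}

void page_unlock(struct page_info *page)
{
    assert(get_current_locked_page() == page);
    set_current_locked_page(NULL);
    /* ... releasing store elided ... */
}
```

With the wrappers compiled away under NDEBUG, the call sites in page_lock()/page_unlock() stay clean, which is the point of Jan's objection to the inline #ifdef blocks.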
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel