Re: [Xen-devel] [PATCH v4 2/4] x86/mem_sharing: copy a page_lock version to be internal to memshr
>>> On 03.05.19 at 00:13, <tamas@xxxxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -112,13 +112,48 @@ static inline void page_sharing_dispose(struct page_info *page)
>
> #endif /* MEM_SHARING_AUDIT */
>
> -static inline int mem_sharing_page_lock(struct page_info *pg)
> +/*
> + * Private implementations of page_lock/unlock to bypass PV-only
> + * sanity checks not applicable to mem-sharing.
> + */
> +static inline bool _page_lock(struct page_info *page)
>  {
> -    int rc;
> +    unsigned long x, nx;
> +
> +    do {
> +        while ( (x = page->u.inuse.type_info) & PGT_locked )
> +            cpu_relax();
> +        nx = x + (1 | PGT_locked);
> +        if ( !(x & PGT_validated) ||
> +             !(x & PGT_count_mask) ||
> +             !(nx & PGT_count_mask) )
> +            return false;
Just for my own understanding: Did you verify that the PGT_validated
check is indeed needed here, or did you copy it "just in case"? In the
latter case a comment may be worthwhile.
Furthermore, are there any mem-sharing specific checks reasonable
to do here in place of the PV ones you want to avoid? Like pages
making it here only ever being of PGT_shared_page type?
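For reference, the lock/unlock pattern the new helper copies boils down to
the self-contained sketch below. This is illustrative only: the names
(demo_page, DEMO_LOCKED, DEMO_COUNT_MASK) are hypothetical stand-ins for
Xen's struct page_info / type_info / PGT_* bits, a 64-bit unsigned long is
assumed, and the PGT_validated check asked about above is deliberately
omitted for simplicity.

#include <stdatomic.h>
#include <stdbool.h>

#define DEMO_LOCKED     (1UL << 63)        /* stand-in for PGT_locked */
#define DEMO_COUNT_MASK ((1UL << 16) - 1)  /* stand-in for PGT_count_mask */

struct demo_page {
    _Atomic unsigned long type_info;
};

static bool demo_page_lock(struct demo_page *pg)
{
    unsigned long x, nx;

    do {
        /* Wait until no one else holds the lock bit. */
        while ( (x = atomic_load(&pg->type_info)) & DEMO_LOCKED )
            ;                           /* Xen uses cpu_relax() here */
        nx = x + (1 | DEMO_LOCKED);     /* lock bit plus one type ref */
        if ( !(x & DEMO_COUNT_MASK) || !(nx & DEMO_COUNT_MASK) )
            return false;               /* no ref to piggyback on, or overflow */
    } while ( !atomic_compare_exchange_strong(&pg->type_info, &x, nx) );

    return true;
}

static void demo_page_unlock(struct demo_page *pg)
{
    unsigned long x = atomic_load(&pg->type_info), nx;

    do {
        nx = x - (1 | DEMO_LOCKED);     /* drop the lock bit and the ref */
    } while ( !atomic_compare_exchange_strong(&pg->type_info, &x, nx) );
}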
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -356,24 +356,12 @@ struct platform_bad_page {
>  const struct platform_bad_page *get_platform_badpages(unsigned int *array_size);
>
>  /* Per page locks:
> - * page_lock() is used for two purposes: pte serialization, and memory sharing.
> + * page_lock() is used for pte serialization.
>   *
>   * All users of page lock for pte serialization live in mm.c, use it
>   * to lock a page table page during pte updates, do not take other locks within
>   * the critical section delimited by page_lock/unlock, and perform no
>   * nesting.
> - *
> - * All users of page lock for memory sharing live in mm/mem_sharing.c. Page_lock
> - * is used in memory sharing to protect addition (share) and removal (unshare)
> - * of (gfn,domain) tupples to a list of gfn's that the shared page is currently
> - * backing. Nesting may happen when sharing (and locking) two pages -- deadlock
> - * is avoided by locking pages in increasing order.
> - * All memory sharing code paths take the p2m lock of the affected gfn before
> - * taking the lock for the underlying page. We enforce ordering between page_lock
> - * and p2m_lock using an mm-locks.h construct.
> - *
> - * These two users (pte serialization and memory sharing) do not collide, since
> - * sharing is only supported for hvm guests, which do not perform pv pte updates.
>   */
>  int page_lock(struct page_info *page);
>  void page_unlock(struct page_info *page);
I think it would be helpful to retain (in a slightly adjusted form) the last
sentence of the comment above, to clarify that it is now the two sets of
PGT_locked uses which do not end up colliding. On this occasion, "which do
not perform pv pte updates" would also be better re-worded to e.g.
"which do not have PV PTEs updated" (as PVH Dom0 is very much
expected to issue PV page table operations for PV DomU-s).
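One possible adjusted wording, merely as a sketch of what such a retained
sentence might look like (not wording proposed in the thread):

 * The two sets of PGT_locked users (pte serialization here, and the private
 * copy in mem_sharing.c) do not collide, since sharing is only supported
 * for HVM guests, which do not have PV PTEs updated.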
Jan