Re: [Xen-devel] [PATCH 00/34] put_user_pages(): miscellaneous call sites
On Fri, Aug 02, 2019 at 02:41:46PM +0200, Jan Kara wrote:
> On Fri 02-08-19 11:12:44, Michal Hocko wrote:
> > On Thu 01-08-19 19:19:31, john.hubbard@xxxxxxxxx wrote:
> > [...]
> > > 2) Convert all of the call sites for get_user_pages*(), to
> > > invoke put_user_page*(), instead of put_page(). This involves dozens of
> > > call sites, and will take some time.
> >
> > How do we make sure this is the case and it will remain the case in the
> > future? There must be some automagic to enforce/check that. It is simply
> > not manageable to do it every now and then, because then 3) will simply
> > never be safe.
> >
> > Have you considered Coccinelle or some other scripted way to do the
> > transition? I have no idea how to deal with future changes that would
> > break the balance though.
>
> Yeah, that's why I've been suggesting at LSF/MM that we may need to create
> a gup wrapper - say vaddr_pin_pages() - and track which sites dropping
> references got converted by using this wrapper instead of gup. The
> counterpart would then be more logically named as unpin_page() or whatever
> instead of put_user_page(). Sure, this is not completely foolproof (you can
> create a new call site using vaddr_pin_pages() and then just drop refs using
> put_page()), but I suppose it would be a high enough barrier for missed
> conversions... Thoughts?

I think the API we really need is get_user_bvec() / put_user_bvec(), and
I know Christoph has been putting some work into that. That avoids doing
refcount operations on hundreds of pages if the page in question is a
huge page. Once people are switched over to that, they won't be tempted
to manually call put_page() on the individual constituent pages of a bvec.
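[Editor's note: for readers following along, here is a minimal sketch of the kind of wrapper Jan describes above. The vaddr_pin_pages() name comes from his suggestion; the signature, the vaddr_unpin_pages() counterpart, and the delegation to get_user_pages()/put_user_pages() are illustrative assumptions, not the API that was eventually merged.]

    /*
     * Illustrative sketch only. vaddr_pin_pages() is the wrapper name
     * suggested in the thread; the signature here simply mirrors
     * get_user_pages() as it existed at the time (an assumption).
     */
    #include <linux/mm.h>

    static inline long vaddr_pin_pages(unsigned long start,
                                       unsigned long nr_pages,
                                       unsigned int gup_flags,
                                       struct page **pages)
    {
            /*
             * Delegate to gup. Callers of this wrapper promise to drop
             * their references via vaddr_unpin_pages(), never put_page(),
             * which is what makes missed conversions greppable.
             */
            return get_user_pages(start, nr_pages, gup_flags, pages, NULL);
    }

    static inline void vaddr_unpin_pages(struct page **pages,
                                         unsigned long nr_pages)
    {
            /*
             * Release the references taken by vaddr_pin_pages() through the
             * dedicated put_user_pages() helper, so the mm layer can later
             * tell pinned pages apart from ordinary ones.
             */
            put_user_pages(pages, nr_pages);
    }

The point of the pairing is purely mechanical: once every converted call site goes through vaddr_pin_pages(), any remaining put_page() against gup-returned pages stands out, which is the "high enough barrier" Jan refers to. A bvec-based interface such as the get_user_bvec() / put_user_bvec() Matthew mentions would go further by handing callers a single object to release, but no sketch of it is attempted here since its shape was still under discussion.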