[Xen-devel] [PATCH RFC v1 02/12] mm/usercopy.c: Prepare check_page_span() for PG_reserved changes
- To: linux-kernel@xxxxxxxxxxxxxxx
- From: David Hildenbrand <david@xxxxxxxxxx>
- Date: Tue, 22 Oct 2019 19:12:29 +0200
- Cc: Kate Stewart <kstewart@xxxxxxxxxxxxxxxxxxx>, Sasha Levin <sashal@xxxxxxxxxx>, linux-hyperv@xxxxxxxxxxxxxxx, Michal Hocko <mhocko@xxxxxxxx>, Radim Krčmář <rkrcmar@xxxxxxxxxx>, kvm@xxxxxxxxxxxxxxx, David Hildenbrand <david@xxxxxxxxxx>, KarimAllah Ahmed <karahmed@xxxxxxxxx>, Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>, Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>, Alexander Duyck <alexander.duyck@xxxxxxxxx>, Michal Hocko <mhocko@xxxxxxxxxx>, Paul Mackerras <paulus@xxxxxxxxxx>, linux-mm@xxxxxxxxx, Pavel Tatashin <pavel.tatashin@xxxxxxxxxxxxx>, Paul Mackerras <paulus@xxxxxxxxx>, Michael Ellerman <mpe@xxxxxxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, Wanpeng Li <wanpengli@xxxxxxxxxxx>, Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>, "K. Y. Srinivasan" <kys@xxxxxxxxxxxxx>, Fabio Estevam <festevam@xxxxxxxxx>, Ben Chan <benchan@xxxxxxxxxxxx>, Kees Cook <keescook@xxxxxxxxxxxx>, devel@xxxxxxxxxxxxxxxxxxxx, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Stephen Hemminger <sthemmin@xxxxxxxxxxxxx>, "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx>, Joerg Roedel <joro@xxxxxxxxxx>, x86@xxxxxxxxxx, YueHaibing <yuehaibing@xxxxxxxxxx>, Mike Rapoport <rppt@xxxxxxxxxxxxx>, Madhumitha Prabakaran <madhumithabiw@xxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxxxxx>, Vlastimil Babka <vbabka@xxxxxxx>, Nishka Dasgupta <nishkadg.linux@xxxxxxxxx>, Anthony Yznaga <anthony.yznaga@xxxxxxxxxx>, Oscar Salvador <osalvador@xxxxxxx>, Dan Carpenter <dan.carpenter@xxxxxxxxxx>, "Isaac J. Manjarres" <isaacm@xxxxxxxxxxxxxx>, Matt Sickler <Matt.Sickler@xxxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Anshuman Khandual <anshuman.khandual@xxxxxxx>, Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>, Simon Sandström <simon@xxxxxxxxxx>, Dan Williams <dan.j.williams@xxxxxxxxx>, kvm-ppc@xxxxxxxxxxxxxxx, Qian Cai <cai@xxxxxx>, Alex Williamson <alex.williamson@xxxxxxxxxx>, Mike Rapoport <rppt@xxxxxxxxxxxxxxxxxx>, Borislav Petkov <bp@xxxxxxxxx>, Nicholas Piggin <npiggin@xxxxxxxxx>, Andy Lutomirski <luto@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Todd Poynor <toddpoynor@xxxxxxxxxx>, Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>, Allison Randal <allison@xxxxxxxxxxx>, Jim Mattson <jmattson@xxxxxxxxxx>, Christophe Leroy <christophe.leroy@xxxxxx>, Vandana BN <bnvandana@xxxxxxxxx>, Jeremy Sowden <jeremy@xxxxxxxxxx>, Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>, Cornelia Huck <cohuck@xxxxxxxxxx>, Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>, Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>, Sean Christopherson <sean.j.christopherson@xxxxxxxxx>, Rob Springer <rspringer@xxxxxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, Johannes Weiner <hannes@xxxxxxxxxxx>, Paolo Bonzini <pbonzini@xxxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, linuxppc-dev@xxxxxxxxxxxxxxxx
- Delivery-date: Tue, 22 Oct 2019 17:14:03 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
Right now, ZONE_DEVICE memory is always set PG_reserved. We want to
change that.

Let's make sure the logic in this function stays the same once that
happens. After we no longer set these pages reserved, we can rework this
function to perform separate checks for ZONE_DEVICE (split from the
PG_reserved checks).
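
(For illustration only, not part of the patch: the hunks below open-code
the check, but its intent is the predicate sketched here. The helper name
page_span_reserved() is made up purely for this sketch.)

#include <linux/mm.h>		/* is_zone_device_page() */
#include <linux/page-flags.h>	/* PageReserved() */

/* Hypothetical helper, only to illustrate the combined check. */
static inline bool page_span_reserved(struct page *page)
{
	/*
	 * Keep treating ZONE_DEVICE pages like PG_reserved pages for the
	 * page-span check, so the result does not change once ZONE_DEVICE
	 * memory stops being marked PG_reserved.
	 */
	return PageReserved(page) || is_zone_device_page(page);
}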
Cc: Kees Cook <keescook@xxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Kate Stewart <kstewart@xxxxxxxxxxxxxxxxxxx>
Cc: Allison Randal <allison@xxxxxxxxxxx>
Cc: "Isaac J. Manjarres" <isaacm@xxxxxxxxxxxxxx>
Cc: Qian Cai <cai@xxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
mm/usercopy.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/usercopy.c b/mm/usercopy.c
index 660717a1ea5c..a3ac4be35cde 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -203,14 +203,15 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 	 * device memory), or CMA. Otherwise, reject since the object spans
 	 * several independently allocated pages.
 	 */
-	is_reserved = PageReserved(page);
+	is_reserved = PageReserved(page) || is_zone_device_page(page);
 	is_cma = is_migrate_cma_page(page);
 	if (!is_reserved && !is_cma)
 		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
 
 	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
 		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
+		if (is_reserved && !(PageReserved(page) ||
+				     is_zone_device_page(page)))
 			usercopy_abort("spans Reserved and non-Reserved pages",
 				       NULL, to_user, 0, n);
 		if (is_cma && !is_migrate_cma_page(page))
--
2.21.0
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel