
Re: [Xen-devel] xen: Can't insert balloon page into VM userspace (WAS Re: [linux-linus bisection] complete test-arm64-arm64-xl-xsm)


  • To: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Matthew Wilcox <willy@xxxxxxxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>
  • From: David Hildenbrand <david@xxxxxxxxxx>
  • Date: Tue, 12 Mar 2019 20:46:20 +0100
  • Cc: Juergen Gross <jgross@xxxxxxxx>, k.khlebnikov@xxxxxxxxxxx, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Kees Cook <keescook@xxxxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, "VMware, Inc." <pv-drivers@xxxxxxxxxx>, osstest service owner <osstest-admin@xxxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>, linux-mm@xxxxxxxxx, Julien Freche <jfreche@xxxxxxxxxx>, Nadav Amit <namit@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 12 Mar 2019 19:46:36 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 12.03.19 19:23, David Hildenbrand wrote:
> On 12.03.19 19:02, Boris Ostrovsky wrote:
>> On 3/12/19 1:24 PM, Andrew Cooper wrote:
>>> On 12/03/2019 17:18, David Hildenbrand wrote:
>>>> On 12.03.19 18:14, Matthew Wilcox wrote:
>>>>> On Tue, Mar 12, 2019 at 05:05:39PM +0000, Julien Grall wrote:
>>>>>> On 3/12/19 3:59 PM, Julien Grall wrote:
>>>>>>> It looks like all the arm tests for the linus [1] and next [2] trees
>>>>>>> are now failing. x86 seems to be mostly OK.
>>>>>>>
>>>>>>> The bisector fingered the following commit:
>>>>>>>
>>>>>>> commit 0ee930e6cafa048c1925893d0ca89918b2814f2c
>>>>>>> Author: Matthew Wilcox <willy@xxxxxxxxxxxxx>
>>>>>>> Date:   Tue Mar 5 15:46:06 2019 -0800
>>>>>>>
>>>>>>>      mm/memory.c: prevent mapping typed pages to userspace
>>>>>>>      Pages which use page_type must never be mapped to userspace as
>>>>>>>      it would destroy their page type.  Add an explicit check for this
>>>>>>>      instead of assuming that kernel drivers always get this right.
>>>>> Oh good, it found a real problem.
>>>>>
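For context, the check that commit introduces on the vm_insert_page() path in
mm/memory.c is roughly of the following shape (a paraphrased sketch, not the
exact upstream hunk; the helper name below is made up for illustration):

#include <linux/mm.h>
#include <linux/page-flags.h>

/* Illustrative only -- upstream performs this check inline in insert_page(). */
static int page_ok_to_insert(struct page *page)
{
        /*
         * Pages that carry a page type (PG_offline, PG_buddy, ...) must
         * never be mapped to userspace, so refuse them just like
         * anonymous pages.
         */
        if (PageAnon(page) || page_has_type(page))
                return -EINVAL;
        return 0;
}
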
>>>>>> It turns out the problem is because the balloon driver will call
>>>>>> __SetPageOffline() on allocated pages. Therefore the page has a type
>>>>>> and vm_insert_page() will deny the insertion.
>>>>>>
>>>>>> My knowledge is quite limited in this area, so I am not sure how we can
>>>>>> solve the problem.
>>>>>>
>>>>>> I would appreciate it if someone could provide input on how to fix the
>>>>>> mapping.
>>>>> I don't know the balloon driver, so I don't know why it was doing this,
>>>>> but what it was doing was Wrong and has been since 2014 with:
>>>>>
>>>>> commit d6d86c0a7f8ddc5b38cf089222cb1d9540762dc2
>>>>> Author: Konstantin Khlebnikov <k.khlebnikov@xxxxxxxxxxx>
>>>>> Date:   Thu Oct 9 15:29:27 2014 -0700
>>>>>
>>>>>     mm/balloon_compaction: redesign ballooned pages management
>>>>>
>>>>> If ballooned pages are supposed to be mapped into userspace, you can't
>>>>> mark them as ballooned pages using the mapcount field.
>>>>>
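(To spell out what "using the mapcount field" means here: page types share
storage with _mapcount, so -- in a simplified, illustrative view, not the real
struct page layout -- you can think of it as:

#include <linux/types.h>        /* atomic_t */

struct page_like {                      /* illustrative stand-in for struct page */
        union {
                atomic_t _mapcount;     /* counts userspace mappings */
                unsigned int page_type; /* PAGE_TYPE_BASE | PG_offline | ... */
        };
};

Mapping such a page into userspace increments _mapcount through the rmap code
and thereby clobbers the type bits, which is exactly what the new check in
mm/memory.c refuses to allow.)
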
>>>> Asking myself why anybody would want to map balloon-inflated pages into
>>>> user space (this just sounds plain wrong, but my understanding of what
>>>> the Xen balloon driver does might be limited), but I assume the easy fix
>>>> would be to revert
>>> I suspect the bug here is that the balloon driver is (ab)used for a
>>> second purpose
>>
>> Yes. And its name is alloc_xenballooned_pages().
>>
> 
> Haven't had a look at the code yet, but would another temporary fix be
> to clear/set PG_offline when allocating/freeing a ballooned page?
> (assuming here that only such pages will be mapped to user space)
> 

I guess something like this could do the trick if I understood it correctly:

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 39b229f9e256..d37dd5bb7a8f 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -604,6 +604,7 @@ int alloc_xenballooned_pages(int nr_pages, struct page **pages)
        while (pgno < nr_pages) {
                page = balloon_retrieve(true);
                if (page) {
+                       __ClearPageOffline(page);
                        pages[pgno++] = page;
 #ifdef CONFIG_XEN_HAVE_PVMMU
                        /*
@@ -645,8 +646,10 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
        mutex_lock(&balloon_mutex);

        for (i = 0; i < nr_pages; i++) {
-               if (pages[i])
+               if (pages[i]) {
+                       __SetPageOffline(pages[i]);
                        balloon_append(pages[i]);
+               }
        }

        balloon_stats.target_unpopulated -= nr_pages;


At least this way, the pages handed out by alloc_xenballooned_pages() (and
thus eventually mapped to user space) would no longer be marked PG_offline,
while the pages remaining in the balloon would stay marked and could be
excluded by makedumpfile.
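
To illustrate the caller side (a hypothetical example, not code from the
tree): once PG_offline is cleared at allocation time, a page returned by
alloc_xenballooned_pages() no longer carries a page type, so vm_insert_page()
accepts it again:

#include <linux/mm.h>
#include <xen/balloon.h>

/* Hypothetical helper, for illustration only. */
static int map_one_ballooned_page(struct vm_area_struct *vma,
                                  unsigned long addr)
{
        struct page *page;
        int rc;

        rc = alloc_xenballooned_pages(1, &page);
        if (rc)
                return rc;

        /* Would fail with -EINVAL while the page still had PG_offline set. */
        rc = vm_insert_page(vma, addr, page);
        if (rc)
                free_xenballooned_pages(1, &page);
        return rc;
}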

-- 

Thanks,

David / dhildenb

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

