Re: [Xen-devel] [PATCH] xen/p2m: fix extra memory regions accounting



On 03/09/15 at 17:20, Juergen Gross wrote:
> On 09/03/2015 05:01 PM, David Vrabel wrote:
>> On 03/09/15 15:55, Juergen Gross wrote:
>>> On 09/03/2015 04:52 PM, David Vrabel wrote:
>>>> On 03/09/15 15:45, David Vrabel wrote:
>>>>> On 03/09/15 15:38, Roger Pau Monné wrote:
>>>>>> On 03/09/15 at 14:25, Juergen Gross wrote:
>>>>>>> On 09/03/2015 02:05 PM, Roger Pau Monne wrote:
>>>>>>>> On systems with memory maps with ranges that don't end at page
>>>>>>>> boundaries, like:
>>>>>>>>
>>>>>>>> [...]
>>>>>>>> (XEN)  0000000000100000 - 00000000dfdf9c00 (usable)
>>>>>>>> (XEN)  00000000dfdf9c00 - 00000000dfe4bc00 (ACPI NVS)
>>>>>>>> [...]
>>>>>>>>
>>>>>>>> xen_add_extra_mem will create a protected range that ends up at
>>>>>>>> 0xdfdf9c00, but the function used to check if a memory address is
>>>>>>>> inside a protected range works with pfns, which means that an
>>>>>>>> attempt to map 0xdfdf9c00 will be refused because the check is
>>>>>>>> performed against 0xdfdf9000 instead of 0xdfdf9c00.
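
To see the truncation concretely: converting the byte address to a pfn
drops the low 12 bits, so 0xdfdf9c00 and 0xdfdf9000 collapse to the same
page. A minimal user-space sketch (assuming 4 KiB pages; PFN_DOWN here
mirrors the kernel macro of the same name):

#include <stdio.h>

#define PAGE_SHIFT 12

/* PFN_DOWN drops the low 12 bits, so any byte address inside a page
 * collapses to that page's boundary. */
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

int main(void)
{
    unsigned long addr = 0xdfdf9c00UL;

    /* 0xdfdf9c00 >> 12 == 0xdfdf9; shifted back: 0xdfdf9000 */
    printf("pfn %#lx -> page start %#lx\n",
           PFN_DOWN(addr), PFN_DOWN(addr) << PAGE_SHIFT);
    return 0;
}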
>>>>>>>>
>>>>>>>> In order to fix this, make sure that the ranges added to the
>>>>>>>> xen_extra_mem array are aligned to page boundaries.
>>>>>>>>
>>>>>>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>>>>>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
>>>>>>>> Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
>>>>>>>> Cc: David Vrabel <david.vrabel@xxxxxxxxxx>
>>>>>>>> Cc: Juergen Gross <jgross@xxxxxxxx>
>>>>>>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
>>>>>>>> ---
>>>>>>>> AFAICT this patch needs to be backported to 3.19, 4.0, 4.1 and 4.2.
>>>>>>>> ---
>>>>>>>>     arch/x86/xen/setup.c | 6 ++++++
>>>>>>>>     1 file changed, 6 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
>>>>>>>> index 55f388e..dcf5865 100644
>>>>>>>> --- a/arch/x86/xen/setup.c
>>>>>>>> +++ b/arch/x86/xen/setup.c
>>>>>>>> @@ -68,6 +68,9 @@ static void __init xen_add_extra_mem(phys_addr_t start, phys_addr_t size)
>>>>>>>>     {
>>>>>>>>         int i;
>>>>>>>>
>>>>>>>> +    start = PAGE_ALIGN(start);
>>>>>>>> +    size &= PAGE_MASK;
>>>>>>>
>>>>>>> This is not correct. If start wasn't page aligned and size was,
>>>>>>> you'll add one additional page to xen_extra_mem.
>>>>>>
>>>>>> I'm not sure I follow; let me give an example:
>>>>>>
>>>>>> start = 0x8c00
>>>>>> size = 0x1000
>>>>>>
>>>>>> After the fixup added above this would become:
>>>>>>
>>>>>> start = 0x9000
>>>>>> size = 0x1000
>>>>>>
>>>>>> So if anything, I'm adding one page less (because 0x8000 was partly
>>>>>> added, and with the fixup it is not added).
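
To make the arithmetic concrete, here is a minimal user-space sketch of
what the proposed fixup computes for this example (assuming 4 KiB pages;
this is not the patch itself, just the same two operations in
isolation):

#include <stdio.h>

#define PAGE_SIZE     0x1000UL
#define PAGE_MASK     (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
    unsigned long start = 0x8c00, size = 0x1000;
    unsigned long new_start = PAGE_ALIGN(start); /* 0x9000 */
    unsigned long new_size  = size & PAGE_MASK;  /* 0x1000 */

    /* Original range: [0x8c00, 0x9c00) */
    printf("original: [%#lx, %#lx)\n", start, start + size);
    /* Adjusted range: [0x9000, 0xa000) -- the end moved past 0x9c00,
     * so [0x9c00, 0xa000) is now covered even though it was never
     * part of the original region. */
    printf("adjusted: [%#lx, %#lx)\n", new_start, new_start + new_size);
    return 0;
}

The start page (0x8000) is indeed dropped, as Roger says, but the
adjusted end moves past the original limit of 0x9c00, which is the
extra partial page Juergen is pointing at.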
>>>>>
>>>>> We expand the reserved (i.e., non-RAM) areas down so they're fully
>>>>> covered with whole pages when we depopulate and 1:1 map them, so we
>>>>> should add extra memory regions that cover these same areas.
>>>>
>>>> Ignore this.  This was nonsense.
>>>>
>>>> We expand the reserved (i.e., non-RAM) areas so they're fully covered
>>>> with whole pages when we depopulate and 1:1 map them, so we should add
>>>> the extra memory such that it does not overlap with the expanded
>>>> regions, i.e., round up the start and round down the end (like Roger's
>>>> patch does).
>>>
>>> Nearly. Roger's patch rounds up start and rounds down the size. It might
>>> add non-RAM partial pages to xen_extra_mem.
>>
>> Yes.  You're right.
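
A sketch of the clamping David describes (round the start up and the
end down), written as a standalone helper; the function name and macro
definitions are illustrative, not taken from the patch:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE     ((uint64_t)0x1000)
#define PAGE_MASK     (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

/* Shrink [start, start + *size) to the whole pages it fully contains:
 * round the start up and the end down, then recompute the size.
 * *size becomes 0 when the range doesn't cover a full page. */
static uint64_t clamp_to_whole_pages(uint64_t start, uint64_t *size)
{
    uint64_t end = (start + *size) & PAGE_MASK;

    start = PAGE_ALIGN(start);
    *size = end > start ? end - start : 0;
    return start;
}

int main(void)
{
    uint64_t size = 0x1000;
    uint64_t start = clamp_to_whole_pages(0x8c00, &size);

    printf("clamped: [%#llx, %#llx)\n",
           (unsigned long long)start,
           (unsigned long long)(start + size)); /* empty: size == 0 */
    return 0;
}

For start = 0x8c00 and size = 0x1000 the clamped range is empty: no
whole page fits between 0x9000 and 0x9c00, so nothing partial ever
reaches xen_extra_mem.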
> 
> Hmm, thinking more about it, I'd prefer to change xen_extra_mem to use
> pfns instead of physical addresses. This would make things much clearer.
> 
> Roger, do you want to do the patch or should I?

I can certainly take care of it if you are busy; otherwise I'll leave it
to you, since you have more expertise in this area :).

Roger.
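
For reference, a minimal sketch of the pfn-based accounting Juergen
proposes. The structure and helper names are illustrative guesses, not
code from this thread; PFN_UP and PFN_DOWN mirror the kernel macros of
the same names:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   ((uint64_t)1 << PAGE_SHIFT)
#define PFN_UP(x)   (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* Hypothetical pfn-based entry: storing pfns instead of byte
 * addresses makes a partial page impossible to represent. */
struct extra_mem_region {
    uint64_t start_pfn;
    uint64_t n_pfns;
};

/* Do the byte->pfn rounding once, at the boundary: first whole pfn
 * inside the range, last whole pfn boundary before its end. */
static void region_from_bytes(uint64_t start, uint64_t size,
                              struct extra_mem_region *r)
{
    uint64_t first = PFN_UP(start);
    uint64_t last  = PFN_DOWN(start + size);

    r->start_pfn = first;
    r->n_pfns    = last > first ? last - first : 0;
}

int main(void)
{
    struct extra_mem_region r;

    region_from_bytes(0x8c00, 0x1000, &r);
    printf("start_pfn=%#llx n_pfns=%llu\n",
           (unsigned long long)r.start_pfn,
           (unsigned long long)r.n_pfns);   /* 0x9, 0 */
    return 0;
}

Since every entry holds whole pfns by construction, the containment
check that motivated the original patch can compare pfns directly, with
no byte-level rounding left to get wrong.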

