
Re: [Xen-devel] maybe revert commit c275a57f5ec3 "xen/balloon: Set balloon's initial state to number of existing RAM pages"



On 08/07/17 02:59, Konrad Rzeszutek Wilk wrote:
> On Tue, Mar 28, 2017 at 05:30:24PM +0200, Juergen Gross wrote:
>> On 28/03/17 16:27, Boris Ostrovsky wrote:
>>> On 03/28/2017 04:08 AM, Jan Beulich wrote:
>>>>>>> On 28.03.17 at 03:57, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>>> I think there is indeed a disconnect between target memory (provided by 
>>>>> the toolstack) and current memory (i.e. the actual pages available to the 
>>>>> guest).
>>>>>
>>>>> For example
>>>>>
>>>>> [    0.000000] BIOS-e820: [mem 0x000000000009e000-0x000000000009ffff] 
>>>>> reserved
>>>>> [    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] 
>>>>> reserved
>>>>>
>>>>> are missed in the target calculation. hvmloader marks them as RESERVED 
>>>>> (in build_e820_table()), but the target value is not aware of this.
>>>>>
>>>>> And then the same problem repeats when the kernel removes the 
>>>>> 0x000a0000-0x000fffff chunk.
>>>> But this is all in-guest behavior, i.e. nothing an entity outside the
>>>> guest (tool stack or hypervisor) should need to be aware of. That
>>>> said, there is still room for improvement in the tools, I think:
>>>> regions which architecturally aren't RAM (namely the
>>>> 0xa0000-0xfffff range) should probably not be accounted
>>>> for as RAM as far as ballooning is concerned. In the hypervisor,
>>>> on the other hand, all memory assigned to the guest (including
>>>> memory backing such ROMs) needs to be accounted for.
>>>
>>> On the Linux side we should not include in the balloon calculations the
>>> pages reserved by trim_bios_range(), i.e. (BIOS_END - BIOS_BEGIN) + 1.
>>>
>>> That leaves hvmloader's special pages (and possibly memory under
>>> 0xA0000, which may get reserved). Can we pass this information to
>>> guests via xenstore?
>>
>> I'd rather have the balloon driver keep an internal difference between
>> the number of online pages and the E820-map count. This should always work.
> 
> Did we ever come up with a patch for this?

Yes, I've sent V2 recently:

https://lists.xen.org/archives/html/xen-devel/2017-07/msg00530.html
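
For reference, a minimal sketch of the idea (not the actual patch):
record at boot how many RAM pages the E820 map reports versus how many
pages Linux actually has, and bias ballooning targets by that delta.
xen_e820_ram_pages() below is a hypothetical helper standing in for
whatever counts the E820 RAM pages; get_num_physpages() is the existing
kernel accessor.

#include <linux/init.h>
#include <linux/mm.h>	/* get_num_physpages() */

/*
 * Sketch only, assuming a hypothetical xen_e820_ram_pages() that
 * returns the number of RAM pages listed in the guest's E820 map.
 */
static unsigned long balloon_e820_delta;

static void __init balloon_init_delta(void)
{
	/* Pages the E820 map promises vs. pages Linux actually has. */
	balloon_e820_delta = xen_e820_ram_pages() - get_num_physpages();
}

static unsigned long balloon_adjusted_target(unsigned long target)
{
	/*
	 * Targets from the toolstack count E820 RAM pages, so subtract
	 * the pages the guest itself reserved or trimmed.
	 */
	return target - balloon_e820_delta;
}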


Juergen


 

