
Re: [Xen-devel] [v8][PATCH 09/17] hvmloader/ram: check if guest memory is out of reserved device memory maps

>>> On 05.12.14 at 07:23, <kevin.tian@xxxxxxxxx> wrote:
>>  From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Friday, December 05, 2014 12:20 AM
>> >>> On 01.12.14 at 10:24, <tiejun.chen@xxxxxxxxx> wrote:
>> > We need to check and reserve all reserved device memory maps in the
>> > e820 to avoid any potential guest memory conflicts.
>> >
>> > Currently, if we can't insert RDM entries directly, we may need to handle
>> > several ranges as follows:
>> > a. Fixed ranges -> BUG():
>> >    lowmem_reserved_base - 0xA0000: reserved by the BIOS implementation,
>> >    the BIOS region,
>> >    RESERVED_MEMBASE - 0x100000000
>> > b. RAM, or a RAM hole -> try to reserve
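The two-way classification quoted above could be sketched roughly as follows. This is an illustrative fragment only, not the actual patch code: the constants are placeholder values and classify_rdm() is a hypothetical helper.

```c
#include <stdint.h>

/* Illustrative placeholder values, not the real hvmloader constants. */
#define LOWMEM_END        0xA0000ULL
#define RESERVED_MEMBASE  0xFC000000ULL

enum rdm_action { RDM_BUG, RDM_TRY_RESERVE };

/*
 * Classify an RDM range [start, end): a range overlapping one of the
 * fixed windows (low BIOS area, RESERVED_MEMBASE - 4GiB) cannot be
 * handled and hits BUG(); a range falling in RAM or a RAM hole can be
 * retried as a reserved e820 entry.
 */
static enum rdm_action classify_rdm(uint64_t start, uint64_t end)
{
    if ( start < LOWMEM_END ||                         /* low BIOS area */
         (end > RESERVED_MEMBASE && start < 0x100000000ULL) )
        return RDM_BUG;
    return RDM_TRY_RESERVE;
}
```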
>> I continue to be unconvinced of the overall approach: The domain
>> builder continues to populate these regions when it shouldn't. Yet
>> once it doesn't, it would be most natural to simply communicate the
> doesn't -> does?

No. The domain builder currently populates these regions (at least
I didn't spot a change to make it not do so).

>> RAM regions to hvmloader, and hvmloader would use just that to
>> build the E820 table (and subsequently assign BARs).
> My impression is that you didn't like extending hvm_info to carry
> sparse RAM regions. That's why the current tradeoff is taken, i.e.
> leaving the domain builder unchanged for RAM, then preventing EPT
> setup for reserved regions in the hypervisor (which means wasting
> memory), and then having hvmloader actually figure out the final
> e820. That's also why the per-BDF design is introduced, to minimize
> the wasted memory. We discussed changing the domain builder to avoid
> populating reserved regions as a next step after 4.5, but without
> extending hvm_info we always need the logic in hvmloader to
> construct the e820 from scratch.

Communicating this via hvm_info is not the only way. For example,
the XENMEM_{set_,}memory_map pair of hypercalls could be used
(and is readily available to be extended that way, since for HVM
domains XENMEM_set_memory_map returns -EPERM at present). The
only potentially problematic aspect I can see with using it might be
its limiting of the entry count to E820MAX.
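If the RAM layout were communicated that way, hvmloader's job would reduce to merging those RAM regions with the reserved device memory entries into a fresh table, subject to the E820MAX cap noted above. A minimal sketch of that merge, with types mirroring Xen's public e820 layout and a hypothetical build_e820() helper (not existing code):

```c
#include <stdint.h>

#define E820MAX        128
#define E820_RAM       1
#define E820_RESERVED  2

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/*
 * Append the communicated RAM regions, then the RDM regions (forced to
 * type reserved), into 'map'.  Returns the resulting entry count, or
 * -1 if the E820MAX limit would be exceeded -- the one problematic
 * aspect of reusing XENMEM_{set_,}memory_map mentioned above.
 */
static int build_e820(struct e820entry *map,
                      const struct e820entry *ram, int nr_ram,
                      const struct e820entry *rdm, int nr_rdm)
{
    int nr = 0, i;

    if ( nr_ram + nr_rdm > E820MAX )
        return -1;

    for ( i = 0; i < nr_ram; i++ )
        map[nr++] = ram[i];

    for ( i = 0; i < nr_rdm; i++ )
    {
        map[nr] = rdm[i];
        map[nr++].type = E820_RESERVED;
    }

    return nr;
}
```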


Xen-devel mailing list