Re: [Xen-devel] [v8][PATCH 09/17] hvmloader/ram: check if guest memory is out of reserved device memory maps
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Friday, December 05, 2014 12:20 AM
>
> >>> On 01.12.14 at 10:24, <tiejun.chen@xxxxxxxxx> wrote:
> > We need to check to reserve all reserved device memory maps in e820
> > to avoid any potential guest memory conflict.
> >
> > Currently, if we can't insert RDM entries directly, we may need to handle
> > several ranges as follows:
> > a. Fixed Ranges --> BUG()
> >    lowmem_reserved_base-0xA0000: reserved by BIOS implementation,
> >    BIOS region,
> >    RESERVED_MEMBASE ~ 0x100000000,
> > b. RAM or RAM:Hole -> Try to reserve
>
> I continue to be unconvinced of the overall approach: The domain
> builder continues to populate these regions when it shouldn't. Yet
> once it doesn't, it would be most natural to simply communicate the

doesn't -> does?

> RAM regions to hvmloader, and hvmloader would use just that to
> build the E820 table (and subsequently assign BARs).

My impression is that you didn't like extending hvm_info to carry sparse
RAM regions. That is why the current tradeoff was taken: leave the domain
builder unchanged for RAM, prevent EPT setup for the reserved regions in
the hypervisor (which means wasting memory), and then have hvmloader
figure out the final e820. That is also why the per-BDF design was
introduced, to minimize the wasted memory.

We discussed changing the domain builder to avoid populating reserved
regions as the next step after 4.5, but without extending hvm_info we
will always need the logic in hvmloader to construct the e820 from
scratch.

I did not catch all of the discussion history between you and Tiejun, so
I may be missing something here. (By the way, Tiejun is on an urgent
leave, so his responses will be slow for a few days.)

Thanks
Kevin
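As a rough illustration of the check described in the patch summary above,
the following self-contained C sketch walks reserved device memory (RDM)
entries against guest memory ranges: an overlap with a fixed range is
fatal (standing in for hvmloader's BUG()), while an overlap with RAM is
the "try to reserve" case. All types and helpers here (struct rdm_entry,
check_rdm_conflicts, and so on) are hypothetical stand-ins rather than
hvmloader's real interfaces.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* One reserved device memory (RDM) region reported by the hypervisor. */
    struct rdm_entry {
        uint64_t start;   /* first byte of the reserved range */
        uint64_t end;     /* one past the last byte */
    };

    /* A guest memory range taken from the provisional e820 map. */
    struct guest_range {
        uint64_t start;
        uint64_t end;
        int is_fixed;     /* 1 for fixed/BIOS ranges that must never overlap */
    };

    /* Return 1 if [a_start, a_end) and [b_start, b_end) intersect. */
    static int ranges_overlap(uint64_t a_start, uint64_t a_end,
                              uint64_t b_start, uint64_t b_end)
    {
        return a_start < b_end && b_start < a_end;
    }

    /*
     * Walk every RDM entry against every guest range.  Per the patch
     * summary: an overlap with a fixed range is fatal (the real code
     * would BUG()); an overlap with RAM is handled by trying to
     * reserve the range, which this sketch only reports.
     */
    static void check_rdm_conflicts(const struct rdm_entry *rdm, size_t nr_rdm,
                                    const struct guest_range *gr, size_t nr_gr)
    {
        for (size_t i = 0; i < nr_rdm; i++) {
            for (size_t j = 0; j < nr_gr; j++) {
                if (!ranges_overlap(rdm[i].start, rdm[i].end,
                                    gr[j].start, gr[j].end))
                    continue;
                if (gr[j].is_fixed) {
                    fprintf(stderr, "fatal: RDM %#llx-%#llx hits fixed range\n",
                            (unsigned long long)rdm[i].start,
                            (unsigned long long)rdm[i].end);
                    abort();  /* stands in for hvmloader's BUG() */
                }
                printf("RDM %#llx-%#llx overlaps RAM; try to reserve\n",
                       (unsigned long long)rdm[i].start,
                       (unsigned long long)rdm[i].end);
            }
        }
    }

    int main(void)
    {
        struct rdm_entry rdm[] = { { 0xfed00000, 0xfed04000 } };
        struct guest_range gr[] = {
            { 0x00100000, 0xf0000000, 0 },  /* low RAM */
            { 0x000a0000, 0x00100000, 1 },  /* fixed BIOS region */
        };
        check_rdm_conflicts(rdm, 1, gr, 2);
        return 0;
    }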
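For comparison, the alternative weighed above, extending hvm_info so the
domain builder hands hvmloader the sparse RAM layout directly, might look
roughly like the sketch below. The struct and field names are invented
for illustration; none of them exist in the real
public/hvm/hvm_info_table.h, nor in any posted proposal.

    #include <stdint.h>

    #define SKETCH_MAX_RAM_REGIONS 32   /* arbitrary bound for illustration */

    struct sketch_ram_region {
        uint64_t start;                 /* guest-physical start, already
                                         * steering clear of RDM ranges */
        uint64_t len;                   /* length in bytes */
    };

    struct sketch_hvm_info_ext {
        uint32_t nr_ram;                /* populated entries in ram[] */
        struct sketch_ram_region ram[SKETCH_MAX_RAM_REGIONS];
    };

    /* With such a table filled in by the domain builder, hvmloader
     * could emit RAM entries directly instead of reconstructing the
     * map from scratch, along the lines of:
     *
     *     for (i = 0; i < info->nr_ram; i++)
     *         e820_add(info->ram[i].start, info->ram[i].len, E820_RAM);
     */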
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel