
Re: [Xen-devel] (v2) Design proposal for RMRR fix



>>> On 13.01.15 at 12:03, <kevin.tian@xxxxxxxxx> wrote:
> Then I hope you now understand our discussion regarding libxl/xen/
> hvmloader, based on the fact that conflicts may not always be
> avoidable. That was the major open issue in the original discussion
> with Jan. I'd like to give an example of the flow here, per Jan's
> suggestion, starting from the domain builder after the reserved
> regions have been specified by high-level libxl.
> 
> Let's take a synthetic platform w/ two devices, each reported
> with one RMRR reserved region:
>       (D1): [0xe0000, 0xeffff] in <1MB area
>       (D2): [0xa0000000, 0xa37fffff] in ~2.75G area
> 
> The guest is configured with 4G of memory and is assigned D2.
> Due to libxl policy (say, for migration and hotplug), three
> ranges are reported in total:
>       (hotplug): [0xe0000, 0xeffff] in <1MB area in this node
>       (migration): [0x40000000, 0x40003fff] in ~1G area in another node
>       (static-assign): [0xa0000000, 0xa37fffff] in ~2.75G area in this node
> 
> Let's use xenstore to save this information (assuming it is
> accessible to both the domain builder and hvmloader?)
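
For concreteness, here is how those three records might look once read
back by the domain builder; a minimal sketch in C, where the struct,
field names and tags are all invented for illustration and are not
existing Xen code:

    #include <stdint.h>

    /* Hypothetical in-memory form of one reserved-region record, as
     * the domain builder might parse it out of xenstore. */
    struct reserved_region {
        uint64_t start, end;    /* inclusive guest-physical range */
        enum { RR_HOTPLUG, RR_MIGRATION, RR_STATIC_ASSIGN } why;
    };

    static const struct reserved_region regions[] = {
        { 0x000e0000, 0x000effff, RR_HOTPLUG },       /* <1MB, this node */
        { 0x40000000, 0x40003fff, RR_MIGRATION },     /* ~1G, another node */
        { 0xa0000000, 0xa37fffff, RR_STATIC_ASSIGN }, /* ~2.75G, this node */
    };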
> 
> STEP-1. domain builder
> 
> Say the default layout w/o reserved regions would be:
>       lowmem:         [0, 0xbfffffff]
>       mmio hole:      [0xc0000000, 0xffffffff]
>       highmem:        [0x100000000, 0x140000000]
> 
> The domain builder then queries the reserved regions from xenstore
> and tries to avoid conflicts.
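
The conflict check itself is plain interval overlap; a minimal sketch,
with an invented function name:

    #include <stdbool.h>
    #include <stdint.h>

    /* Two inclusive ranges [s1,e1] and [s2,e2] conflict iff they
     * overlap. */
    static bool ranges_overlap(uint64_t s1, uint64_t e1,
                               uint64_t s2, uint64_t e2)
    {
        return s1 <= e2 && s2 <= e1;
    }

The builder would test each reserved region against the lowmem and
highmem ranges of the layout above.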
> 
> For [0xad000000, 0xaf7fffff], the conflict can be avoided by
> reducing lowmem to 0xad000000 and increasing highmem:

Inconsistent numbers?

>       lowmem:         [0, 0x9fffffff]
>       mmio hole:      [0xa0000000, 0xffffffff]
>       highmem:        [0x100000000, 0x160000000]
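
The adjustment itself amounts to ending lowmem below the reserved
region and moving the displaced RAM above 4G. A minimal sketch,
assuming the statically-assigned region [0xa0000000, 0xa37fffff] and
invented variable names (real code would also keep the boundaries
suitably aligned):

    #include <stdint.h>

    /* lowmem drops from 0xc0000000 to 0xa0000000, i.e. 0x20000000
     * bytes are displaced, so the highmem end grows from 0x140000000
     * to 0x160000000. */
    static void punch_hole(uint64_t rmrr_start,
                           uint64_t *lowmem_end, uint64_t *highmem_end)
    {
        if (rmrr_start < *lowmem_end) {
            uint64_t displaced = *lowmem_end - rmrr_start;

            *lowmem_end = rmrr_start;    /* 0xc0000000 -> 0xa0000000 */
            *highmem_end += displaced;   /* 0x140000000 -> 0x160000000 */
        }
    }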
> 
> 
> For [0x40000000, 0x40003fff], leave it as a conflict, since either
> reducing lowmem to 1G is not nice to a guest which doesn't use
> highmem, or we have to break lowmem into two chunks, so more
> structural changes are required.

This makes no sense - if such an area was explicitly requested to
be reserved, leaving it as a conflict is not an option.

> For [0xe0000, 0xeffff], leave it as a conflict (w/ the guest BIOS)
> 
> In the libxl centrally-managed mode, the domain builder doesn't
> know whether a conflict will lead to an immediate error, so the
> best policy here is to throw a warning and then move forward.
> Conflicts will be caught in later steps, when a region is actually
> used.
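
That policy boils down to "report, don't abort". A sketch, reusing the
hypothetical struct reserved_region from the earlier snippet (the
conflicts() callback is likewise invented):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Under the centrally managed mode the builder cannot tell which
     * conflicts matter, so it only warns; the hard failure, if any,
     * happens later (e.g. at assignment time in STEP-2). */
    static void warn_unresolved(const struct reserved_region *rr, size_t n,
                                bool (*conflicts)(const struct reserved_region *))
    {
        for (size_t i = 0; i < n; i++)
            if (conflicts(&rr[i]))
                fprintf(stderr, "warning: reserved region [0x%" PRIx64
                        ", 0x%" PRIx64 "] still overlaps guest RAM\n",
                        rr[i].start, rr[i].end);
    }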
> 
> STEP-2. static device assignment
> 
> After the domain builder runs, libxl will request that the Xen
> hypervisor complete the actual device assignment. Because D2 is
> statically assigned to this guest, Xen will set up an identity
> mapping for [0xa0000000, 0xa37fffff], with conflict detection in
> gfn space. Since the domain builder has already made a hole for
> this region, there will be no conflict and the device will be
> assigned to the guest successfully.
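
Per page, that step amounts to something like the sketch below;
gfn_is_populated() and map_identity() are stand-ins for the real p2m
operations, not actual Xen APIs:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12

    /* Stand-ins for the real p2m lookup/update primitives. */
    bool gfn_is_populated(uint64_t gfn);
    void map_identity(uint64_t gfn);

    /* Identity-map every page of the RMRR into the guest's gfn space,
     * refusing up front if any gfn in the range is already backed by
     * RAM (the "conflict detection in gfn space" above). */
    static int rmrr_identity_map(uint64_t start, uint64_t end)
    {
        for (uint64_t gfn = start >> PAGE_SHIFT;
             gfn <= end >> PAGE_SHIFT; gfn++)
            if (gfn_is_populated(gfn))
                return -EBUSY;       /* conflict: fail the assignment */

        for (uint64_t gfn = start >> PAGE_SHIFT;
             gfn <= end >> PAGE_SHIFT; gfn++)
            map_identity(gfn);       /* gfn -> mfn == gfn */

        return 0;
    }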
> 
> STEP-3. hvmloader boot
> 
> hvmloader also needs to query the reserved regions (still through
> xenstore?)

The mechanism (xenstore vs hypercall) is secondary right now I think.

> for two reasons:
>       - mark all reported reserved regions in the guest e820
>       - make holes to avoid conflicts in dynamic allocation (e.g.
> PCI BARs, ACPI opregion, etc.)
> 
> hvmloader can avoid making holes for guest RAM again (even if
> there are potential conflicts w/ guest RAM, they must be
> acceptable; otherwise libxl would have failed the boot before
> reaching this point). So hvmloader will just add a new reserved
> e820 entry and make a hole for [0xa0000000, 0xa37fffff] in this
> example, which doesn't conflict with guest RAM.

Make hole? The hole is already there from STEP-1.
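
Given the hole from STEP-1, what remains for hvmloader on this range
is just the e820 bookkeeping; a sketch using the standard e820 entry
layout (the helper name is invented):

    #include <stdint.h>

    /* Standard e820 entry layout; type 2 is "reserved". */
    struct e820entry {
        uint64_t addr;
        uint64_t size;
        uint32_t type;
    };
    #define E820_RESERVED 2

    /* Append a reserved entry for the RMRR; the RAM hole itself was
     * already punched by the domain builder in STEP-1. */
    static void e820_mark_reserved(struct e820entry *map, unsigned int *nr,
                                   uint64_t start, uint64_t end)
    {
        map[*nr].addr = start;
        map[*nr].size = end - start + 1;
        map[*nr].type = E820_RESERVED;
        ++(*nr);
    }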

Jan

