Re: [Xen-devel] [PATCH v3 24/35] OvmfPkg/XenPlatformPei: Rework memory detection
On Thu, Jul 04, 2019 at 03:42:22PM +0100, Anthony PERARD wrote:
> When running as a Xen PVH guest, there is no CMOS to read the memory
> size from. Rework GetSystemMemorySize(Below|Above)4gb() so they can
> work without CMOS by reading the e820 table.
>
> Rework XenPublishRamRegions for PVH, handle the Reserved type and
> explain the ACPI type. MTRR settings aren't modified anymore: on HVM
> it's already done by hvmloader, and on PVH it is supposed to have sane
> defaults.
>
> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1689
> Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
> Acked-by: Laszlo Ersek <lersek@xxxxxxxxxx>
> ---
>
> Notes:
>     Comment for Xen people:
>
>     About MTRR, should we redo the setting in OVMF? Even if, in both
>     the PVH and HVM cases, something would have set up the default type
>     to write back and handled a few other ranges like the PCI hole:
>     hvmloader for HVM, and I think libxc for PVH.

That's a tricky question. Ideally we would like the firmware (OVMF) to
take care of that, because it already has code to do so. The problem
here is that PVH can also be booted without firmware, in which case it
needs the hypervisor to have set up some sane initial MTRR state.

The statement in the PVH document about the initial MTRR state is vague
enough to allow Xen to boot into the guest with a minimal MTRR state
that can, for example, lack UC regions for the MMIO regions of
passed-through devices, hence I think OVMF should be in charge of
creating a more complete MTRR state if possible.

Is this something OVMF already has logic for? Also accounting for the
MMIO regions of devices?
> (For PVH, it's in the spec as well
> https://xenbits.xenproject.org/docs/unstable/misc/pvh.html#mtrr )
>
>  OvmfPkg/XenPlatformPei/Platform.h  |  6 +++
>  OvmfPkg/XenPlatformPei/MemDetect.c | 71 ++++++++++++++++++++++++++++++
>  OvmfPkg/XenPlatformPei/Xen.c       | 47 ++++++++++++++------
>  3 files changed, 111 insertions(+), 13 deletions(-)
>
> diff --git a/OvmfPkg/XenPlatformPei/Platform.h b/OvmfPkg/XenPlatformPei/Platform.h
> index db9a62572f..e8e0b835a5 100644
> --- a/OvmfPkg/XenPlatformPei/Platform.h
> +++ b/OvmfPkg/XenPlatformPei/Platform.h
> @@ -114,6 +114,12 @@ XenPublishRamRegions (
>    VOID
>    );
>
> +EFI_STATUS
> +XenGetE820Map (
> +  EFI_E820_ENTRY64 **Entries,
> +  UINT32 *Count
> +  );
> +
>  extern EFI_BOOT_MODE mBootMode;
>
>  extern UINT8 mPhysMemAddressWidth;
> diff --git a/OvmfPkg/XenPlatformPei/MemDetect.c b/OvmfPkg/XenPlatformPei/MemDetect.c
> index cb7dd93ad6..3e33e7f414 100644
> --- a/OvmfPkg/XenPlatformPei/MemDetect.c
> +++ b/OvmfPkg/XenPlatformPei/MemDetect.c
> @@ -96,6 +96,47 @@ Q35TsegMbytesInitialization (
>    mQ35TsegMbytes = ExtendedTsegMbytes;
>  }
>
> +STATIC
> +UINT64
> +GetHighestSystemMemoryAddress (
> +  BOOLEAN Below4gb
> +  )
> +{
> +  EFI_E820_ENTRY64 *E820Map;
> +  UINT32 E820EntriesCount;
> +  EFI_E820_ENTRY64 *Entry;
> +  EFI_STATUS Status;
> +  UINT32 Loop;
> +  UINT64 HighestAddress;
> +  UINT64 EntryEnd;
> +
> +  HighestAddress = 0;
> +
> +  Status = XenGetE820Map (&E820Map, &E820EntriesCount);

You could maybe initialize this as a global to avoid having to issue a
hypercall each time you need to get something from the memory map.
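Roger's caching suggestion could look roughly like the sketch below. This is a standalone illustration with simplified stand-in types and a stub in place of the real Xen memory-map hypercall, not the actual XenPlatformPei code; the names `FetchE820FromHypervisor` and `XenGetE820MapCached` are made up for the example.

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for EFI_E820_ENTRY64. */
typedef struct {
  uint64_t BaseAddr;
  uint64_t Length;
  uint32_t Type;
} E820Entry;

/* Stub standing in for the XENMEM_memory_map hypercall; it counts
   invocations so the caching behaviour is observable. */
static int mHypercallCount = 0;
static E820Entry mFakeMap[2] = {
  { 0x0,      0x9F000,    1 },
  { 0x100000, 0x3FF00000, 1 },
};

static int FetchE820FromHypervisor (E820Entry **Entries, uint32_t *Count) {
  mHypercallCount++;
  *Entries = mFakeMap;
  *Count = 2;
  return 0;
}

/* Module-level cache, filled on first use: later callers get the cached
   map instead of re-issuing the hypercall. */
static E820Entry *mE820Map = NULL;
static uint32_t  mE820Count = 0;

int XenGetE820MapCached (E820Entry **Entries, uint32_t *Count) {
  if (mE820Map == NULL) {
    int rc = FetchE820FromHypervisor (&mE820Map, &mE820Count);
    if (rc != 0) {
      return rc;
    }
  }
  *Entries = mE820Map;
  *Count = mE820Count;
  return 0;
}
```

With this shape, GetHighestSystemMemoryAddress and XenPublishRamRegions would both call the cached accessor, and only the first call pays for the hypercall.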
> +  ASSERT_EFI_ERROR (Status);
> +
> +  for (Loop = 0; Loop < E820EntriesCount; Loop++) {
> +    Entry = E820Map + Loop;
> +    EntryEnd = Entry->BaseAddr + Entry->Length;
> +
> +    if (Entry->Type == EfiAcpiAddressRangeMemory &&
> +        EntryEnd > HighestAddress) {
> +
> +      if (Below4gb && (EntryEnd <= BASE_4GB)) {
> +        HighestAddress = EntryEnd;
> +      } else if (!Below4gb && (EntryEnd >= BASE_4GB)) {
> +        HighestAddress = EntryEnd;
> +      }
> +    }
> +  }
> +
> +  //
> +  // Round down the end address.
> +  //
> +  HighestAddress &= ~(UINT64)EFI_PAGE_MASK;
> +
> +  return HighestAddress;

You could do the rounding on the return statement.

> +}
>
>  UINT32
>  GetSystemMemorySizeBelow4gb (
> @@ -105,6 +146,19 @@ GetSystemMemorySizeBelow4gb (
>    UINT8 Cmos0x34;
>    UINT8 Cmos0x35;
>
> +  //
> +  // In PVH case, there is no CMOS, we have to calculate the memory size
> +  // from parsing the E820
> +  //
> +  if (XenPvhDetected ()) {

IIRC on HVM you can also get the memory map from the hypercall, in
which case you could use the same code path for both HVM and PVH.

> +    UINT64 HighestAddress;
> +
> +    HighestAddress = GetHighestSystemMemoryAddress (TRUE);
> +    ASSERT (HighestAddress > 0 && HighestAddress <= BASE_4GB);
> +
> +    return HighestAddress;

The name of the function here is GetSystemMemorySizeBelow4gb, but you
are returning the highest memory address in the range. Is this
expected? ie: highest address != memory size.

On HVM there are quite some holes in the memory map, and nothing
guarantees there are no memory regions after the holes, or non-RAM
regions.

> +  }
> +
>    //
>    // CMOS 0x34/0x35 specifies the system memory above 16 MB.
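The distinction Roger raises (highest RAM address vs. amount of RAM) can be made concrete with a small standalone sketch. The types and the summing helper below are simplified illustrations, not the OVMF code: once the e820 map has holes, the highest RAM end address below 4GB and the sum of the RAM ranges below 4GB diverge.

```c
#include <stdint.h>

/* Simplified stand-in for EFI_E820_ENTRY64. */
typedef struct { uint64_t Base; uint64_t Length; uint32_t Type; } E820Entry;

#define E820_RAM  1
#define BASE_4GB  0x100000000ULL

/* What the patch computes: the highest end address of a RAM range
   that lies below 4GB. */
uint64_t HighestRamAddressBelow4gb (const E820Entry *Map, uint32_t Count) {
  uint64_t Highest = 0;
  for (uint32_t i = 0; i < Count; i++) {
    uint64_t End = Map[i].Base + Map[i].Length;
    if (Map[i].Type == E820_RAM && End <= BASE_4GB && End > Highest) {
      Highest = End;
    }
  }
  return Highest;
}

/* What the function name suggests: the amount of RAM below 4GB,
   obtained by summing the RAM ranges instead. */
uint64_t RamSizeBelow4gb (const E820Entry *Map, uint32_t Count) {
  uint64_t Size = 0;
  for (uint32_t i = 0; i < Count; i++) {
    uint64_t End = Map[i].Base + Map[i].Length;
    if (Map[i].Type == E820_RAM && End <= BASE_4GB) {
      Size += Map[i].Length;
    }
  }
  return Size;
}
```

For the classic layout with a hole below 1 MiB (RAM at 0..0x9F000 and again from 0x100000), the two results already differ by the size of the hole.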
>    // * CMOS(0x35) is the high byte
> @@ -129,6 +183,23 @@ GetSystemMemorySizeAbove4gb (
>    UINT32 Size;
>    UINTN CmosIndex;
>
> +  //
> +  // In PVH case, there is no CMOS, we have to calculate the memory size
> +  // from parsing the E820
> +  //
> +  if (XenPvhDetected ()) {
> +    UINT64 HighestAddress;
> +
> +    HighestAddress = GetHighestSystemMemoryAddress (FALSE);
> +    ASSERT (HighestAddress == 0 || HighestAddress >= BASE_4GB);
> +
> +    if (HighestAddress >= BASE_4GB) {
> +      HighestAddress -= BASE_4GB;
> +    }
> +
> +    return HighestAddress;
> +  }
> +
>    //
>    // CMOS 0x5b-0x5d specifies the system memory above 4GB MB.
>    // * CMOS(0x5d) is the most significant size byte
> diff --git a/OvmfPkg/XenPlatformPei/Xen.c b/OvmfPkg/XenPlatformPei/Xen.c
> index cbfd8058fc..62a2c3ed93 100644
> --- a/OvmfPkg/XenPlatformPei/Xen.c
> +++ b/OvmfPkg/XenPlatformPei/Xen.c
> @@ -279,6 +279,8 @@ XenPublishRamRegions (
>    EFI_E820_ENTRY64 *E820Map;
>    UINT32 E820EntriesCount;
>    EFI_STATUS Status;
> +  EFI_E820_ENTRY64 *Entry;
> +  UINTN Index;
>
>    DEBUG ((EFI_D_INFO, "Using memory map provided by Xen\n"));
>
> @@ -287,26 +289,45 @@ XenPublishRamRegions (
>    //
>    E820EntriesCount = 0;
>    Status = XenGetE820Map (&E820Map, &E820EntriesCount);
> -
>    ASSERT_EFI_ERROR (Status);
>
> -  if (E820EntriesCount > 0) {
> -    EFI_E820_ENTRY64 *Entry;
> -    UINT32 Loop;
> +  for (Index = 0; Index < E820EntriesCount; Index++) {
> +    UINT64 Base;
> +    UINT64 End;
>
> -    for (Loop = 0; Loop < E820EntriesCount; Loop++) {
> -      Entry = E820Map + Loop;
> +    Entry = &E820Map[Index];
>
> +
> +    //
> +    // Round up the start address, and round down the end address.
> +    //
> +    Base = ALIGN_VALUE (Entry->BaseAddr, (UINT64)EFI_PAGE_SIZE);
> +    End = (Entry->BaseAddr + Entry->Length) & ~(UINT64)EFI_PAGE_MASK;
> +
> +    switch (Entry->Type) {
> +    case EfiAcpiAddressRangeMemory:
> +      AddMemoryRangeHob (Base, End);
> +      break;
> +    case EfiAcpiAddressRangeACPI:
> +      //
> +      // Ignore, OVMF should read the ACPI tables and provide them to linux
> +      // from a different location.

Will OVMF also parse dynamic tables to check for references there?

> +      //
> +      break;
> +    case EfiAcpiAddressRangeReserved:
>        //
> -      // Only care about RAM
> +      // Avoid ranges marked as reserved in the e820 table provided by
> +      // hvmloader as it conflicts with an other aperture.

I think you want the last part of the sentence to be: '... as it
conflicts with other apertures.'

I think however that you should make sure ranges marked as reserved in
the original memory map also end up in the final one, hence overlapping
ranges should be merged, instead of discarded.

> +      // error message: CpuDxe: IntersectMemoryDescriptor:
> +      //  desc [FC000000, 100000000) type 1 cap 8700000000026001
> +      //  conflicts with aperture [FEE00000, FEE01000) cap 1
>        //
> -      if (Entry->Type != EfiAcpiAddressRangeMemory) {
> -        continue;
> +      if (!XenHvmloaderDetected ()) {
> +        AddReservedMemoryBaseSizeHob (Base, End - Base, FALSE);

This special casing for PVH looks weird; ideally we would like to use
the same code path, or else it should be explicitly mentioned why PVH
has diverging behaviour.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
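Roger's suggestion in the review above, that overlapping reserved ranges should be merged rather than discarded, could be sketched as follows. This is a simplified standalone illustration (fixed-capacity array, made-up `MergeRange` helper), not the OVMF code; it assumes the range list is kept sorted and disjoint.

```c
#include <stdint.h>

/* A half-open [Base, End) physical address range. */
typedef struct { uint64_t Base; uint64_t End; } Range;

/*
 * Merge a new [Base, End) range into a sorted array of disjoint ranges,
 * coalescing it with every range it overlaps or touches.
 * Returns the new range count. Illustration only: assumes the array has
 * room for at least 16 entries and the result fits.
 */
uint32_t MergeRange (Range *R, uint32_t Count, uint64_t Base, uint64_t End) {
  Range    Out[16];
  uint32_t n = 0;
  uint32_t i = 0;

  /* Keep ranges that end strictly before the new one starts. */
  while (i < Count && R[i].End < Base) {
    Out[n++] = R[i++];
  }
  /* Absorb every range that overlaps or touches [Base, End). */
  while (i < Count && R[i].Base <= End) {
    if (R[i].Base < Base) Base = R[i].Base;
    if (R[i].End  > End)  End  = R[i].End;
    i++;
  }
  Out[n].Base = Base;
  Out[n].End  = End;
  n++;
  /* Keep the ranges that start after the new one ends. */
  while (i < Count) {
    Out[n++] = R[i++];
  }
  for (uint32_t j = 0; j < n; j++) {
    R[j] = Out[j];
  }
  return n;
}
```

Publishing the merged list instead of skipping conflicting entries would preserve every byte the original map marked reserved while still avoiding duplicate, overlapping HOBs.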