Re: E820 memory allocation issue on Threadripper platforms
On Tue, Jan 16, 2024 at 10:33:26AM +0100, Jan Beulich wrote:
> ... as per
>
> (XEN) Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x4a00000
>
> there's an overlap with not exactly a hole, but with an
> EfiACPIMemoryNVS region:
>
> (XEN) 0000000100000-0000003159fff type=2 attr=000000000000000f
> (XEN) 000000315a000-0000003ffffff type=7 attr=000000000000000f
> (XEN) 0000004000000-0000004045fff type=10 attr=000000000000000f
> (XEN) 0000004046000-0000009afefff type=7 attr=000000000000000f
>
> (the 3rd of the 4 lines). Considering there's another region higher
> up:
>
> (XEN) 00000a747f000-00000a947efff type=10 attr=000000000000000f
>
> I'm inclined to say it is poor firmware (or, far less likely, boot
> loader) behavior to clobber a rather low and entirely arbitrary RAM
> range, rather than consolidating all such regions near the top of
> RAM below 4Gb.

FWIW, we have two more similar reports, with different motherboards and
firmware versions; the common factor is a Threadripper CPU. That doesn't
rule out a firmware issue (it could be a bug in some common template,
like edk2?), but it does make one a bit less likely.

> There are further such odd regions, btw:
>
> (XEN) 0000009aff000-0000009ffffff type=0 attr=000000000000000f
> ...
> (XEN) 000000b000000-000000b020fff type=0 attr=000000000000000f
>
> If the kernel image was sufficiently much larger, these could become
> a problem as well. Otoh if the kernel wasn't built with
> CONFIG_PHYSICAL_START=0x1000000, i.e. to start at 16Mb, but at, say,
> 2Mb, things should apparently work even with this unusual memory
> layout (until the kernel would grow enough to again run into that
> very region).

Shouldn't CONFIG_RELOCATABLE=y take care of this? At least in the case
of Qubes OS it is enabled, and the issue still happens. (For reference,
a quick overlap check against the quoted map, and the config knobs
involved, are appended below.)
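To make the overlap Jan points at concrete, here is a minimal,
self-contained sketch (mine, not code from Xen or the thread) that
replays the check against the memory map entries quoted above. The
region boundaries, types, and the kernel load range are transcribed
from the (XEN) log lines; everything else is illustrative:

/*
 * Minimal sketch: check whether the Dom0 kernel's physical load range
 * overlaps any EFI region that does not end up as usable RAM. Entries
 * are transcribed from the (XEN) lines quoted above.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct region {
    uint64_t start, end;  /* inclusive, as printed in the Xen log */
    unsigned type;        /* EFI memory type */
};

static const struct region map[] = {
    { 0x0000100000, 0x0003159fff,  2 },  /* EfiLoaderData */
    { 0x000315a000, 0x0003ffffff,  7 },  /* EfiConventionalMemory */
    { 0x0004000000, 0x0004045fff, 10 },  /* EfiACPIMemoryNVS */
    { 0x0004046000, 0x0009afefff,  7 },  /* EfiConventionalMemory */
};

/* Types 1-4 (loader/boot-services code and data) are reclaimed as RAM
 * after ExitBootServices; type 7 is conventional memory. */
static int is_ram(unsigned type)
{
    return (type >= 1 && type <= 4) || type == 7;
}

int main(void)
{
    /* "Dom0 kernel: ... paddr 0x1000000 -> 0x4a00000" from the log. */
    const uint64_t kstart = 0x1000000, kend = 0x4a00000 - 1;
    size_t i;

    for (i = 0; i < sizeof(map) / sizeof(map[0]); i++)
        if (!is_ram(map[i].type) &&
            kstart <= map[i].end && map[i].start <= kend)
            printf("kernel range overlaps type %u region %#llx-%#llx\n",
                   map[i].type,
                   (unsigned long long)map[i].start,
                   (unsigned long long)map[i].end);

    return 0;
}

Run against the quoted map, this flags exactly the type=10
(EfiACPIMemoryNVS) region at 0x4000000, i.e. the overlap Jan describes.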
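And for reference, the kernel config knobs in question, with the usual
upstream x86-64 defaults as illustrative values (not the config of any
affected system); whether the Dom0 builder actually honors the
relocation information is exactly the open question above:

# Illustrative x86-64 .config fragment; upstream defaults, not the
# config of any affected system.
CONFIG_RELOCATABLE=y
# preferred (non-relocated) physical load address: 16 MiB
CONFIG_PHYSICAL_START=0x1000000
# alignment the kernel is placed at when relocated: 2 MiB
CONFIG_PHYSICAL_ALIGN=0x200000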
-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab