Re: [Xen-devel] [PATCH 5/7] xen/p2m: Add logic to revector a P2M tree to use __va leafs.
>>> On 27.07.12 at 19:34, Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx> wrote:
> On Fri, Jul 27, 2012 at 12:47:47PM +0100, Jan Beulich wrote:
>> >>> On 27.07.12 at 13:18, Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>> > On Thu, 26 Jul 2012, Konrad Rzeszutek Wilk wrote:
>> >> 1) All P2M lookups, instead of using the __ka address, would
>> >>    use the __va address. This means we can safely erase from
>> >>    __ka space the PMD pointers that point to the PFNs for the
>> >>    P2M array and be OK.
>> >> 2) Allocate a new array, copy the existing P2M into it,
>> >>    revector the P2M tree to use that, and return the old
>> >>    P2M to the memory allocator. This has the advantage that
>> >>    it sets the stage for using the XEN_ELF_NOTE_INIT_P2M
>> >>    feature. That feature allows us to set the exact virtual
>> >>    address space we want for the P2M - and allows us to
>> >>    boot as the initial domain on large machines.
>> >>
>> >> So we pick option 2).
>> >
>> > 1) looks like a decent option that requires less code.
>> > Is the problem with 1) that we might want to access the P2M before we
>> > have __va addresses ready?
>>
>> AIUI 1) has no easy way of subsequently accommodating support
>> for XEN_ELF_NOTE_INIT_P2M (where you almost definitely will
>> want/need to reclaim the originally used VA space - if nothing else,
>> then for forward compatibility with the rest of the kernel).
>
> <nods> That was my thinking - this way we can boot dom0 (since
> the hypervisor is the only one that implements
> XEN_ELF_NOTE_INIT_P2M) with a large amount of memory. Granted, booting
> with more than 500GB would require adding another layer to the P2M
> tree. But somehow I thought that we are limited in the hypervisor
> to 500GB?

The only limitation is that kexec (with the current specification)
would not work beyond 512GB, but that's a non-issue for upstream
since kexec doesn't work there yet anyway. Our kernels come up fine
even on 5TB now (which is the current limit in the hypervisor).
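For readers following along, the shape of option 2) above (allocate a new array, copy, switch lookups over, return the old backing store) can be sketched roughly as follows. This is a purely illustrative userspace sketch, not the actual xen/p2m code: the names (revector_p2m, p2m, p2m_entries) and the flat-array simplification are assumptions made for the example, and the real kernel code walks a multi-level tree and deals with page-table mappings rather than malloc/free.

```c
/* Illustrative sketch of the copy-and-revector idea from option 2).
 * NOT the real xen/p2m implementation: a flat array stands in for
 * the multi-level P2M tree, and malloc/free stand in for the
 * kernel's page allocator. */
#include <stdlib.h>
#include <string.h>

static unsigned long *p2m;     /* current P2M backing store */
static size_t p2m_entries;     /* number of entries in it   */

static int revector_p2m(void)
{
    /* Allocate a new array in the desired address range. */
    unsigned long *new_p2m = malloc(p2m_entries * sizeof(*new_p2m));
    if (!new_p2m)
        return -1;

    /* Copy the existing P2M contents into it. */
    memcpy(new_p2m, p2m, p2m_entries * sizeof(*new_p2m));

    /* Revector: all lookups now go through the new array... */
    unsigned long *old = p2m;
    p2m = new_p2m;

    /* ...and the old backing store goes back to the allocator,
     * freeing the original VA space for reuse. */
    free(old);
    return 0;
}
```

The point of the extra copy, as the thread notes, is that reclaiming the original VA space is what makes a later XEN_ELF_NOTE_INIT_P2M-style relocation of the P2M possible at all.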
Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel