Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Monday, December 15, 2014 4:45 PM
>
> >>> On 15.12.14 at 07:25, <kevin.tian@xxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >>> On 12.12.14 at 08:24, <kevin.tian@xxxxxxxxx> wrote:
> >> > - how is BFN or unused address (what do you mean by address here?)
> >> > allocated? does it need present in guest physical memory at boot time,
> >> > or just finding some holes?
> >>
> >> Fitting this into holes should be fine.
> >
> > this is an interesting open to be further discussed. Here we need consider
> > the extreme case, i.e. a 64bit GPU page table can legitimately use up all
> > the system memory allocates to that VM, and considering dozens of VMs,
> > it means we need reserve a very large hole.
>
> Oh, it's guest RAM you want mapped, not frame buffer space. But still
> you're never going to have to map more than the total amount of host
> RAM, and (with Linux) we already assume everything can be mapped
> through the 1:1 mapping. I.e. the only collision would be with excessive
> PFN reservations for ballooning purposes.

The Intel GPU has its graphics memory (or framebuffer) backed by system memory, so we need to walk the GPU page table and then map the corresponding guest RAM in order to handle it. Yes, host RAM is definitely the upper limit; what concerns me here is how to reserve (at boot time) or allocate (on demand) such a large PFN resource without colliding with other PFN reservation usages (ballooning should be fine, since it operates on existing RAM ranges in the dom0 e820 table).

Maybe we can reserve a big enough region in dom0's e820 table at boot time, covering all PFN reservation usages, and then allocate from it on demand for each specific usage?
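To make that concrete, below is a minimal user-space sketch of the kind of on-demand allocator that could sit on top of such a boot-time e820 reservation. All the names (pfn_pool_*) and the example window are hypothetical, not taken from any existing Xen or Linux code; a real implementation would live in the kernel and derive its window from the actual e820 reservation.

/*
 * Sketch: on-demand allocation of PFNs out of a window that was
 * reserved in dom0's e820 table at boot.  One bit per PFN keeps the
 * bookkeeping small even for a multi-gigabyte window.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define INVALID_PFN ((uint64_t)-1)

struct pfn_pool {
    uint64_t base;          /* first PFN of the reserved window */
    uint64_t nr;            /* number of PFNs in the window */
    unsigned char *bitmap;  /* one bit per PFN: 1 = in use */
};

static int pfn_pool_init(struct pfn_pool *p, uint64_t base, uint64_t nr)
{
    p->base = base;
    p->nr = nr;
    p->bitmap = calloc((nr + 7) / 8, 1);
    return p->bitmap ? 0 : -1;
}

/* First-fit search for 'count' contiguous free PFNs. */
static uint64_t pfn_pool_alloc(struct pfn_pool *p, uint64_t count)
{
    for (uint64_t i = 0; i + count <= p->nr; i++) {
        uint64_t j;
        for (j = 0; j < count; j++)
            if (p->bitmap[(i + j) / 8] & (1 << ((i + j) % 8)))
                break;
        if (j == count) {
            for (j = 0; j < count; j++)
                p->bitmap[(i + j) / 8] |= 1 << ((i + j) % 8);
            return p->base + i;
        }
        i += j; /* restart the search just past the busy PFN we hit */
    }
    return INVALID_PFN; /* window exhausted */
}

static void pfn_pool_free(struct pfn_pool *p, uint64_t pfn, uint64_t count)
{
    for (uint64_t i = pfn - p->base; count--; i++)
        p->bitmap[i / 8] &= ~(1 << (i % 8));
}

int main(void)
{
    struct pfn_pool pool;

    /* Hypothetical window: 1M PFNs (4GB with 4K pages) starting at 256GB. */
    if (pfn_pool_init(&pool, 0x4000000ULL, 1ULL << 20))
        return 1;

    uint64_t bfn = pfn_pool_alloc(&pool, 16); /* e.g. one GPU mapping */
    printf("allocated BFN range at pfn %#llx\n", (unsigned long long)bfn);
    pfn_pool_free(&pool, bfn, 16);
    free(pool.bitmap);
    return 0;
}

With a single reserved window shared by all PFN reservation usages, each consumer (GPU mappings, future ones) would simply call into one allocator like this instead of carving out its own hole.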
Thanks
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel