
Re: [Xen-devel] Passing Xen memory map and resource map to OVMF



On Tue, Nov 12, 2013 at 06:33:21PM +0000, Wei Liu wrote:
> Hi all
> 
> Currently OVMF determines memory size by consulting CMOS, then it makes
> up memory map of its own.
> 
> Determining the memory size by reading CMOS limits the RAM size to 1TB,
> as there are only 3 bytes at 0x5b-0x5d in CMOS where the upper memory
> size is stored.
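
Right, and for anyone following along, the arithmetic behind that 1TB
ceiling looks roughly like this (a sketch only, assuming the usual 64 KiB
granularity for the above-4GB count kept at CMOS offsets 0x5b-0x5d; the
helper names are made up for illustration):

#include <stdint.h>

/* Port I/O helpers assumed to be provided by the environment
 * (hvmloader-style argument order: port first, then value). */
extern void outb(uint32_t port, uint8_t val);
extern uint8_t inb(uint32_t port);

#define CMOS_INDEX_PORT 0x70
#define CMOS_DATA_PORT  0x71

static uint8_t cmos_read(uint8_t index)
{
    outb(CMOS_INDEX_PORT, index);
    return inb(CMOS_DATA_PORT);
}

/* 0x5b-0x5d hold a 24-bit count of 64 KiB units of RAM above 4GB, so the
 * largest size it can express is 2^24 * 64 KiB = 1TB, hence the limit. */
static uint64_t ram_above_4gb(void)
{
    uint32_t units_64k;

    units_64k  = (uint32_t)cmos_read(0x5b);
    units_64k |= (uint32_t)cmos_read(0x5c) << 8;
    units_64k |= (uint32_t)cmos_read(0x5d) << 16;

    return (uint64_t)units_64k << 16;   /* 64 KiB units to bytes */
}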
> 
> And from Xen's point of view, OVMF should use the memory map passed by
> the hypervisor (from hvmloader) instead of making up its own.
> 
> To solve the above two problems in one go, I plan to pass the necessary
> information (io resources, mmio resources) from Xen to OVMF.  I will
> construct the table / structure in hvmloader and then hook it up to the
> platform PEI code when OVMF is running on Xen.

/me nods. The nice thing about that is that it could also allow us
to modify the E820 from the toolstack for any combination we want.

For example, if you use e820_host=1 in your PV guest, the E820 will
look like the host one. Great for PCI passthrough when you have some
stubborn cards.
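
Just to make the handoff concrete: if hvmloader ends up building a
dedicated structure rather than plain E820, I would imagine something
along these lines. Every name and field below is hypothetical, nothing
like this exists today; it is only meant to show the shape of it:

#include <stdint.h>

/* Illustrative only: a handoff table hvmloader could build and a Xen
 * platform PEIM in OVMF could consume instead of reading CMOS. */
#define XEN_FW_RES_MEM   0   /* usable RAM             */
#define XEN_FW_RES_RSVD  1   /* reserved region / hole */
#define XEN_FW_RES_IO    2   /* I/O port range         */
#define XEN_FW_RES_MMIO  3   /* MMIO window            */

struct xen_fw_resource {
    uint64_t start;
    uint64_t length;
    uint32_t type;           /* one of XEN_FW_RES_*    */
    uint32_t flags;
};

struct xen_fw_info {
    uint32_t signature;      /* checked by the platform PEI code */
    uint32_t nr_resources;
    struct xen_fw_resource resources[];  /* memory, io and mmio together */
};

The platform PEI side would presumably locate this (by signature, or via
a pointer stashed at a well-known address) and turn each entry into the
corresponding resource descriptor HOB instead of consulting CMOS.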
> 
> The first thing that comes to mind is to reuse the E820 table for the
> memory map, plus some extra fields for io / mmio resources. But I guess
> UEFI is the new world, so stuff like E820 from the old world will be
> less popular. Any suggestions on an existing table / data structure I
> can use?

I think the E820 is fine. After all, the Linux kernel picks up the EFI
memmap and then stuffs it into the E820.
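
The entry format itself is tiny anyway. This is roughly the layout
hvmloader already carries (a sketch, not copied from any particular
header); the extra io / mmio ranges could either get their own entry
types or travel in a small side table next to it:

#include <stdint.h>

#define E820_RAM       1
#define E820_RESERVED  2
#define E820_ACPI      3   /* ACPI reclaimable */
#define E820_NVS       4   /* ACPI NVS         */

struct e820entry {
    uint64_t addr;      /* start of region, in bytes  */
    uint64_t size;      /* length of region, in bytes */
    uint32_t type;      /* E820_* classification      */
} __attribute__((packed));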

And there are some patches that do this for HVM already:

http://article.gmane.org/gmane.comp.emulators.xen.devel/170593

which I was hoping could be reworked as needed.
> 
> 
> Thanks
> Wei.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

