Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu
On 01/20/16 13:16, Andrew Cooper wrote:
> On 20/01/16 10:36, Xiao Guangrong wrote:
> >
> > Hi,
> >
> > On 01/20/2016 06:15 PM, Haozhong Zhang wrote:
> >
> >> CCing QEMU vNVDIMM maintainer: Xiao Guangrong
> >>
> >>> Conceptually, an NVDIMM is just like a fast SSD which is linearly
> >>> mapped into memory.  I am still on the dom0 side of this fence.
> >>>
> >>> The real question is whether it is possible to take an NVDIMM, split it
> >>> in half, give each half to two different guests (with appropriate NFIT
> >>> tables) and that be sufficient for the guests to just work.
> >>>
> >>
> >> Yes, one NVDIMM device can be split into multiple parts and assigned
> >> to different guests, and QEMU is responsible for maintaining virtual
> >> NFIT tables for each part.
> >>
> >>> Either way, it needs to be a toolstack policy decision as to how to
> >>> split the resource.
> >
> > Currently, we are using the NVDIMM as a block device, and a DAX-based
> > filesystem is created on it in Linux so that file accesses directly
> > reach the NVDIMM device.
> >
> > In KVM, if the NVDIMM device needs to be shared by different VMs, we can
> > create multiple files on the DAX-based filesystem and assign a file to
> > each VM. In the future, we can enable namespaces (partition-like) for
> > PMEM memory and assign a namespace to each VM (the current Linux driver
> > uses the whole PMEM as a single namespace).
> >
> > I think it is not easy to make the Xen hypervisor recognize the NVDIMM
> > device and manage the NVDIMM resource.
> >
> > Thanks!
> >
>
> The more I see of this, the more sure I am that we want to keep it as
> a block device managed by dom0.
>
> In the case of the DAX-based filesystem, I presume files are not
> necessarily contiguous. I also presume that this is worked around by
> permuting the mapping of the virtual NVDIMM such that it appears as
> a contiguous block of addresses to the guest?
>

No, they do not need to be contiguous. We can map those non-contiguous
parts into a contiguous guest physical address space area, and QEMU fills
in the base and size of that area in the vNFIT.

> Today in Xen, QEMU already has the ability to create mappings in the
> guest's address space, e.g. to map PCI device BARs. I don't see a
> conceptual difference here, although the security/permission model
> certainly is more complicated.
>

I'm preparing a design document; let's see afterwards what would be a
better solution.

Thanks,
Haozhong

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
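To make the DAX point in the message above concrete, here is a minimal C
sketch of direct load/store access to a per-guest file created on a DAX
mount, as in the scheme Xiao Guangrong describes. The mount point, file
name, and size are hypothetical examples, not values from this thread, and
the sketch only illustrates the idea; it is not code from QEMU or Xen.

/* Minimal sketch: load/store access to a file on a DAX-mounted filesystem.
 * The path and size below are hypothetical examples. With DAX, the mapped
 * pages are backed directly by NVDIMM pages (no page cache copy), which is
 * what lets such a per-guest file serve as vNVDIMM backing store. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/pmem0/guest1.img"; /* hypothetical file on a DAX mount */
    size_t len = 64UL << 20;                    /* 64 MiB, illustrative size */

    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) < 0) { perror("ftruncate"); return 1; }

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memset(p, 0, len);       /* stores go straight to persistent memory */
    msync(p, len, MS_SYNC);  /* flush so the data is durable */

    munmap(p, len);
    close(fd);
    return 0;
}

In the KVM case described above, QEMU then maps such a file into a
contiguous region of guest physical address space and advertises it through
a virtual NFIT; the open question in this thread is which component on Xen
(hypervisor, dom0, or QEMU) should own that mapping and the associated
permission model.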