Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen
On 02/24/16 07:24, Jan Beulich wrote:
> >>> On 24.02.16 at 14:28, <haozhong.zhang@xxxxxxxxx> wrote:
> > On 02/18/16 10:17, Jan Beulich wrote:
> >> >>> On 01.02.16 at 06:44, <haozhong.zhang@xxxxxxxxx> wrote:
> >> > This design treats host NVDIMM devices as ordinary MMIO devices:
> >>
> >> Wrt the cachability note earlier on, I assume you're aware that with
> >> the XSA-154 changes we disallow any cachable mappings of MMIO
> >> by default.
> >>
> >
> > EPT entries that map the host NVDIMM SPAs to the guest will be the only
> > mappings used for NVDIMM. If the memory type in the last-level entries is
> > always set to the same type reported by NFIT, and the ipat bit is always
> > set as well, I think it would not suffer from the cache-type
> > inconsistency problem of XSA-154?
>
> This assumes Xen knows the NVDIMM address ranges, which so
> far you meant to keep out of Xen iirc.
The original design did not consider some cases, such as XSA-154 and MCE
handling, and fails for them. For those two cases, Xen really should track
the mappings for NVDIMM. (Yes, I changed my mind.)
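
To illustrate, here is a minimal sketch of such a leaf entry. It is purely
illustrative (it is not Xen's actual ept_entry_t layout or p2m code), but it
shows the idea: the effective memory type is taken from the NFIT-reported
type and ipat is set, so the guest's PAT can never be combined into a
conflicting cache type, which is the XSA-154 concern:

  /* Illustration only -- not Xen's actual ept_entry_t layout. */
  #include <stdint.h>

  #define EPT_EMT_WB  6       /* write-back, the usual type for pmem */

  struct ept_leaf {
      uint64_t read:1, write:1, exec:1;
      uint64_t emt:3;         /* effective memory type */
      uint64_t ipat:1;        /* ignore guest PAT */
      uint64_t reserved:5;
      uint64_t mfn:40;
      uint64_t pad:12;
  };

  /* Map one NVDIMM frame: force the NFIT-reported type and set ipat so
   * the guest cannot weaken it via its own PAT. */
  static void set_nvdimm_leaf(struct ept_leaf *e, uint64_t mfn,
                              unsigned int nfit_type)
  {
      e->mfn  = mfn;
      e->emt  = nfit_type;    /* e.g. EPT_EMT_WB */
      e->ipat = 1;
      e->read = e->write = 1;
      e->exec = 0;
  }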
> But yes, things surely can
> be made work, I simply wanted to point out that there are some
> caveats.
>
> >> > (1) The Dom0 Linux NVDIMM driver is responsible for detecting (through
> >> > NFIT) and driving host NVDIMM devices (implementing a block device
> >> > interface). Namespaces and file systems on host NVDIMM devices
> >> > are handled by Dom0 Linux as well.
> >> >
> >> > (2) QEMU mmap(2) the pmem NVDIMM devices (/dev/pmem0) into its
> >> > virtual address space (buf).
> >> >
> >> > (3) QEMU gets the host physical address of buf, i.e. the host system
> >> > physical address that is occupied by /dev/pmem0, and calls Xen
> >> > hypercall XEN_DOMCTL_memory_mapping to map it to a DomU.
> >> >
> >> > (ACPI part is described in Section 3.3 later)
> >> >
> >> > Above, (1) and (2) have already been done in current QEMU; only (3)
> >> > needs to be implemented in QEMU. No change is needed in Xen for
> >> > address mapping in this design.
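
(For concreteness, a rough sketch of what steps (2) and (3) would look like
on the QEMU side, using the libxc wrapper for XEN_DOMCTL_memory_mapping.
How to obtain the SPA is exactly the first Open below, so it is simply
taken as a parameter here; error handling is elided.)

  #include <stdint.h>
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <xenctrl.h>

  #define PMEM_PAGE_SHIFT 12

  static int map_pmem_to_guest(xc_interface *xch, uint32_t domid,
                               uint64_t gfn, uint64_t spa, size_t nr_pages)
  {
      /* Step (2): map /dev/pmem0 into QEMU's virtual address space. */
      int fd = open("/dev/pmem0", O_RDWR);
      void *buf = mmap(NULL, nr_pages << PMEM_PAGE_SHIFT,
                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      (void)buf;

      /* Step (3): ask Xen to map the host range at gfn in the guest.
       * 'spa' is the host system physical address backing /dev/pmem0,
       * however it was obtained (see the first Open below). */
      return xc_domain_memory_mapping(xch, domid, gfn,
                                      spa >> PMEM_PAGE_SHIFT,
                                      nr_pages, 1 /* add mapping */);
  }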
> >> >
> >> > Open: It seems no system call/ioctl is provided by the Linux kernel to
> >> > get the physical address from a virtual address.
> >> > /proc/<qemu_pid>/pagemap provides information of mapping from
> >> > VA to PA. Is it an acceptable solution to let QEMU parse this
> >> > file to get the physical address?
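
(For reference, the pagemap parsing would look roughly like the sketch
below, run inside QEMU itself, i.e. against /proc/self/pagemap. Note that
on recent kernels reading the PFN bits requires CAP_SYS_ADMIN.)

  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Translate one of the calling process's virtual addresses to a
   * physical address via /proc/self/pagemap; returns 0 on failure. */
  static uint64_t va_to_pa(void *va)
  {
      uint64_t entry = 0;
      long pagesize = sysconf(_SC_PAGESIZE);
      FILE *f = fopen("/proc/self/pagemap", "rb");

      if (!f)
          return 0;
      /* One 64-bit entry per virtual page. */
      fseek(f, ((uintptr_t)va / pagesize) * sizeof(entry), SEEK_SET);
      if (fread(&entry, sizeof(entry), 1, f) != 1)
          entry = 0;
      fclose(f);

      if (!(entry & (1ULL << 63)))     /* bit 63: page present */
          return 0;
      /* Bits 0-54 hold the PFN when the page is present. */
      return (entry & ((1ULL << 55) - 1)) * pagesize
             + ((uintptr_t)va & (pagesize - 1));
  }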
> >> >
> >> > Open: For a large pmem, it is quite possible that mmap(2) does not map
> >> > all SPAs occupied by pmem at the beginning, i.e. QEMU may not be
> >> > able to get all SPAs of pmem from buf (in virtual address space)
> >> > when calling XEN_DOMCTL_memory_mapping.
> >> > Can the mmap flag MAP_LOCKED or mlock(2) be used to enforce that
> >> > the entire pmem is mmaped?
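
(As a sketch of that idea: MAP_LOCKED is only best-effort -- mmap(2) does
not fail if the range cannot be fully populated -- so an explicit mlock(2),
which does fail in that case, looks like the safer way to guarantee every
page of the pmem has a valid pagemap entry.)

  #include <stddef.h>
  #include <sys/mman.h>

  /* Map the whole of /dev/pmemN (fd) and pin it, so every page is
   * faulted in before the pagemap walk above.  Illustrative only. */
  static void *map_pmem_pinned(int fd, size_t len)
  {
      void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_LOCKED, fd, 0);

      if (buf == MAP_FAILED)
          return NULL;
      if (mlock(buf, len) != 0) {      /* could not pin the full range */
          munmap(buf, len);
          return NULL;
      }
      return buf;
  }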
> >>
> >> A fundamental question I have here is: Why does qemu need to
> >> map this at all? It shouldn't itself need to access those ranges,
> >> since the guest is given direct access. It would seem quite a bit
> >> more natural if qemu simply inquired about the underlying GFN range(s)
> >> and handed those to Xen for translation to MFNs and mapping
> >> into guest space.
> >>
> >
> > The above design is more like a hack on the existing QEMU
> > implementation for KVM without modifying the Dom0 kernel.
> >
> > Maybe it's better to let QEMU get SPAs from the Dom0 kernel (NVDIMM
> > driver) and then pass them to Xen for the address mapping:
> > (1) QEMU passes fd of /dev/pmemN or file on /dev/pmemN to Dom0 kernel.
> > (2) Dom0 kernel finds and returns all SPAs occupied by /dev/pmemN or
> > portions of /dev/pmemN where the file sits.
> > (3) QEMU passes the above SPAs, and the GMFNs where they should be
> > mapped, to Xen, which then maps the SPAs to those GMFNs.
>
> Indeed, and that would also eliminate the second of your Opens
> above.
>
Yes.
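
To make steps (1) and (2) a bit more tangible, one possible shape of the
Dom0 kernel interface is sketched below. Everything here is hypothetical
(the ioctl number, the structures, even whether an ioctl is the right
vehicle at all) and would have to be agreed with the Linux NVDIMM
maintainers:

  #include <stdint.h>
  #include <sys/ioctl.h>

  /* One SPA extent backing /dev/pmemN, or a file on /dev/pmemN. */
  struct pmem_extent {
      uint64_t spa;                   /* host system physical address */
      uint64_t length;                /* length in bytes */
  };

  struct pmem_extent_list {
      uint32_t nr_extents;            /* in: capacity; out: extents used */
      uint32_t pad;
      struct pmem_extent extents[];   /* filled in by the Dom0 kernel */
  };

  /* Hypothetical request number, for illustration only. */
  #define PMEM_IOC_GET_EXTENTS  _IOWR('P', 0x10, struct pmem_extent_list)

QEMU would then hand each (spa, length) extent, together with the target
guest frame, to Xen for step (3), e.g. via XEN_DOMCTL_memory_mapping as
sketched earlier.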
> >> > 3.3 Guest ACPI Emulation
> >> >
> >> > 3.3.1 My Design
> >> >
> >> > Guest ACPI emulation is composed of two parts: building the guest NFIT
> >> > and SSDT that define ACPI namespace devices for NVDIMM, and
> >> > emulating the guest _DSM.
> >> >
> >> > (1) Building Guest ACPI Tables
> >> >
> >> > This design reuses and extends hvmloader's existing mechanism that
> >> > loads passthrough ACPI tables from binary files to load NFIT and
> >> > SSDT tables built by QEMU:
> >> > 1) Because the current QEMU does not build any ACPI tables when
> >> > it runs as the Xen device model, this design needs to patch QEMU
> >> > to build NFIT and SSDT (so far only NFIT and SSDT) in this case.
> >> >
> >> > 2) QEMU copies NFIT and SSDT to the end of guest memory below
> >> > 4G. The guest address and size of those tables are written into
> >> > xenstore (/local/domain/domid/hvmloader/dm-acpi/{address,length}).
> >> >
> >> > 3) hvmloader is patched to probe and load device model passthrough
> >> > ACPI tables from the above xenstore keys. The detected ACPI tables
> >> > are then appended to the end of the existing guest ACPI tables, just
> >> > like what the current construct_passthrough_tables() does.
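
Roughly, the hvmloader side of steps 2) and 3) could look like the sketch
below. The helpers (xenstore_read, mem_alloc, add_table) stand in for
hvmloader's existing utility and ACPI-builder routines; the names,
signatures and xenstore paths here are illustrative rather than final:

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  extern const char *xenstore_read(const char *path, const char *deflt);
  extern void *mem_alloc(uint32_t size, uint32_t align);
  extern void add_table(uint64_t table_addr);  /* chain into XSDT */

  static void load_dm_acpi_tables(void)
  {
      /* Keys written by QEMU as described in step 2) above. */
      uint64_t addr = strtoull(xenstore_read(
          "hvmloader/dm-acpi/address", "0"), NULL, 0);
      uint32_t len  = strtoul(xenstore_read(
          "hvmloader/dm-acpi/length", "0"), NULL, 0);

      if (!addr || !len)
          return;                  /* no passthrough tables from QEMU */

      /* Copy out of the staging area below 4G into hvmloader's own
       * ACPI allocation, then append it to the existing guest tables
       * just like construct_passthrough_tables() does. */
      void *table = mem_alloc(len, 16);
      memcpy(table, (void *)(uintptr_t)addr, len);
      add_table((uintptr_t)table);
  }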
> >> >
> >> > Reasons for this design are listed below:
> >> > - The NFIT and SSDT in question are quite self-contained, i.e. they do
> >> > not refer to other ACPI tables and do not conflict with the existing
> >> > guest ACPI tables in Xen. Therefore, it is safe to copy them from
> >> > QEMU and append them to the existing guest ACPI tables.
> >>
> >> How is this absence of conflicts being guaranteed? In particular, I don't
> >> see how tables containing AML code and coming from different
> >> sources won't possibly cause ACPI namespace collisions.
> >>
> >
> > Really there is no effective mechanism to avoid ACPI name space
> > collisions (and other kinds of conflicts) between ACPI tables loaded
> > from QEMU and ACPI tables built by hvmloader. Because which ACPI tables
> > are loaded is determined by developers, IMO it's developers'
> > responsibility to avoid any collisions and conflicts with existing ACPI
> > tables.
>
> Right, but this needs to be spelled out and settled on at design
> time (i.e. now), rather than leaving things unspecified, awaiting the
> first clash.
>
So does that mean that, if no collision-proof mechanism is introduced, Xen
should not trust any passed-in ACPI tables and should instead build them by
itself?
Haozhong
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel