
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu

>>> On 20.01.16 at 06:31, <haozhong.zhang@xxxxxxxxx> wrote:
> The primary reason for the current solution is to reuse the existing
> NVDIMM driver in the Linux kernel.

Re-using code in the Dom0 kernel has benefits and drawbacks, and
in any event depends on proper layering remaining in place.
A benefit is less code duplication between Xen and Linux; along the
same lines, a drawback is code duplication between the various Dom0
OS variants.

> One responsibility of this driver is to discover NVDIMM devices and
> their parameters (e.g. which portion of an NVDIMM device can be mapped
> into the system address space and which address it is mapped to) by
> parsing ACPI NFIT tables. Looking at the NFIT spec in Sec 5.2.25 of
> the ACPI Specification v6 and at the actual code in the Linux kernel
> (drivers/acpi/nfit.*), this is not a trivial task.

To answer one of Kevin's questions: The NFIT table doesn't appear
to require the ACPI interpreter; it seems more like SRAT and SLIT.
Also, you failed to answer Kevin's question regarding E820 entries: I
think NVDIMM ranges (or at least parts thereof) get represented in
E820 (or the EFI memory map), and if that's the case it would be a
very strong hint that management needs to be in the hypervisor.

> Secondly, the driver implements a convenient block device interface to
> let software access areas where NVDIMM devices are mapped. The
> existing vNVDIMM implementation in QEMU uses this interface.
> As the Linux NVDIMM driver has already done all of the above, why
> should we bother to reimplement it in Xen?

See above; one possibility is that we may need a split model (block-
layer parts in Dom0, "normal memory" parts in the hypervisor).
IIRC the split is determined by firmware, and hence set in stone by
the time the OS (or hypervisor) boot starts.


Xen-devel mailing list


