
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu

>>> On 19.01.16 at 12:37, <wei.liu2@xxxxxxxxxx> wrote:
> On Mon, Jan 18, 2016 at 01:46:29AM -0700, Jan Beulich wrote:
>> >>> On 18.01.16 at 01:52, <haozhong.zhang@xxxxxxxxx> wrote:
>> > On 01/15/16 10:10, Jan Beulich wrote:
>> >> >>> On 29.12.15 at 12:31, <haozhong.zhang@xxxxxxxxx> wrote:
>> >> > NVDIMM devices are detected and configured by software through
>> >> > ACPI. Currently, QEMU maintains ACPI tables of vNVDIMM devices. This
>> >> > patch extends the existing mechanism in hvmloader of loading passthrough
>> >> > ACPI tables to load extra ACPI tables built by QEMU.
>> >> 
>> >> Mechanically the patch looks okay, but whether it's actually needed
>> >> depends on whether indeed we want NV RAM managed in qemu
>> >> instead of in the hypervisor (where imo it belongs); I didn't see any
>> >> reply yet to that same comment of mine made (iirc) in the context
>> >> of another patch.
>> > 
>> > One purpose of this patch series is to provide vNVDIMM backed by host
>> > NVDIMM devices. It requires some drivers to detect and manage host
>> > NVDIMM devices (including parsing ACPI, managing labels, etc.) that
>> > are not trivial, so I leave this work to dom0 Linux. The current
>> > Linux kernel abstracts NVDIMM devices as block devices (/dev/pmemXX).
>> > QEMU then mmaps them into a certain range of dom0's address space and
>> > asks the Xen hypervisor to map that range of address space into a domU.
>> > 
> OOI, do we have a viable solution for doing all these non-trivial
> things in the core hypervisor?  Are you proposing to design a new set
> of hypercalls for NVDIMM?

That's certainly a possibility; I lack sufficient detail to form an
opinion on which route is going to be best.


Xen-devel mailing list