
Re: [Xen-devel] Design doc of adding ACPI support for arm64 on Xen - version 2



On 08/11/2015 11:35 AM, Ian Campbell wrote:
On Tue, 2015-08-11 at 11:29 -0400, Boris Ostrovsky wrote:
On 08/11/2015 10:21 AM, Ian Campbell wrote:
On Tue, 2015-08-11 at 15:19 +0100, Ian Campbell wrote:
On Fri, 2015-08-07 at 10:11 +0800, Shannon Zhao wrote:
This document explains the design details of Xen booting with ACPI on ARM. Parts of it may not be appropriate; any comments are welcome.
Some small subsets of this seem like they might overlap with what will be required for PVH on x86 (a new x86 guest mode not dissimilar to the sole ARM guest mode). If so then it would be preferable IMHO if PVH x86 could use the same interfaces.

I've trimmed the quotes to just those bits and CC'd some of the PVH people (Boris and Roger[0]) in case they have any thoughts.

Actually, having done the trimming there is only one such bit:

[...]
4. Map MMIO regions
-------------------
Register a bus_notifier for platform and amba bus in Linux.
Previously PCI was included in this scheme, which is why I thought of PVH, having missed that PCI isn't mentioned here now.

Is it handled some other way now, or is that just an accidental omission?
PCI already has a bus notifier which is probably why it's not explicitly
called out here.
Right, but I was more specifically thinking of the bit which followed
regarding the use of the notifier to map the MMIO, which is the more
important bit.

This notifier calls PHYSDEVOP_pci_device_add and, as far as I can tell, MMIO is mapped there (for the IOMMU, which, if I understand it correctly, is the main issue here).

-boris
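
[For reference, a minimal sketch of that existing PCI path, assuming the struct layout in Xen's public physdev.h; names here are illustrative, error handling and the SR-IOV/PXM details are omitted, and the real implementation lives in Linux's drivers/xen/pci.c.]

#include <linux/pci.h>
#include <linux/device.h>
#include <linux/notifier.h>
#include <xen/interface/physdev.h>
#include <asm/xen/hypercall.h>

static int xen_pci_sketch_notifier(struct notifier_block *nb,
				   unsigned long action, void *data)
{
	struct device *dev = data;
	struct pci_dev *pdev;
	struct physdev_pci_device_add add = {};

	if (action != BUS_NOTIFY_ADD_DEVICE)
		return NOTIFY_DONE;

	pdev = to_pci_dev(dev);
	add.seg   = pci_domain_nr(pdev->bus);
	add.bus   = pdev->bus->number;
	add.devfn = pdev->devfn;

	/* Report the new device so Xen can set up the IOMMU for it. */
	if (HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add))
		dev_warn(dev, "failed to report PCI device to Xen\n");

	return NOTIFY_OK;
}

static struct notifier_block xen_pci_sketch_nb = {
	.notifier_call = xen_pci_sketch_notifier,
};

static int __init xen_pci_sketch_init(void)
{
	return bus_register_notifier(&pci_bus_type, &xen_pci_sketch_nb);
}
arch_initcall(xen_pci_sketch_init);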


   Add a new XENMAPSPACE, "XENMAPSPACE_dev_mmio". Within the notifier, check if the device is newly added, then call the hypercall XENMEM_add_to_physmap to map the MMIO regions.
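
[To make the quoted step concrete, here is a rough dom0-side sketch, assuming the proposed XENMAPSPACE_dev_mmio space and a 1:1 (idx == gpfn) mapping. The constant's value, the function names, and the per-page XENMEM_add_to_physmap loop are placeholders from the proposal, not an existing interface; the amba bus would need an equivalent notifier walking its own resource layout.]

#include <linux/platform_device.h>
#include <linux/notifier.h>
#include <linux/ioport.h>
#include <linux/pfn.h>
#include <xen/xen.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

#ifndef XENMAPSPACE_dev_mmio
#define XENMAPSPACE_dev_mmio 5	/* proposed map space; placeholder value */
#endif

/* Map one MMIO resource into dom0's physmap, one page at a time. */
static int xen_sketch_map_mmio(struct resource *res)
{
	unsigned long pfn;
	int rc;

	for (pfn = PFN_DOWN(res->start); pfn <= PFN_DOWN(res->end); pfn++) {
		struct xen_add_to_physmap xatp = {
			.domid = DOMID_SELF,
			.space = XENMAPSPACE_dev_mmio,
			.idx   = pfn,	/* machine frame of the device page */
			.gpfn  = pfn,	/* 1:1 mapping for dom0 */
		};

		rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
		if (rc)
			return rc;
	}
	return 0;
}

/* Called whenever a device is added to the platform bus. */
static int xen_sketch_platform_notifier(struct notifier_block *nb,
					unsigned long action, void *data)
{
	struct device *dev = data;
	struct platform_device *pdev = to_platform_device(dev);
	int i;

	if (action != BUS_NOTIFY_ADD_DEVICE)
		return NOTIFY_DONE;

	for (i = 0; i < pdev->num_resources; i++) {
		struct resource *res = &pdev->resource[i];

		if (resource_type(res) == IORESOURCE_MEM &&
		    xen_sketch_map_mmio(res))
			dev_warn(dev, "Xen MMIO mapping failed\n");
	}
	return NOTIFY_OK;
}

static struct notifier_block xen_sketch_platform_nb = {
	.notifier_call = xen_sketch_platform_notifier,
};

static int __init xen_sketch_init(void)
{
	if (!xen_initial_domain())
		return 0;
	/* The amba bus would get an equivalent notifier. */
	return bus_register_notifier(&platform_bus_type, &xen_sketch_platform_nb);
}
arch_initcall(xen_sketch_init);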
Ian.

[0] Roger is away for a week or so, but I expect feedback to be of the "we could use one extra field" type rather than "this needs to be done some totally different way for x86/PVH" (in which case we wouldn't want to share the interface anyway, I suppose), so no need to block on awaiting that feedback.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel




 

