
Re: [Xen-devel] RFC: [PATCH 1/3] Enhance platform support for PCI




On Monday 02 March 2015 05:18 PM, Ian Campbell wrote:
On Fri, 2015-02-27 at 17:15 +0000, Stefano Stabellini wrote:
On Fri, 27 Feb 2015, Ian Campbell wrote:
On Fri, 2015-02-27 at 16:35 +0000, Jan Beulich wrote:
On 27.02.15 at 16:24, <ian.campbell@xxxxxxxxxx> wrote:
On Fri, 2015-02-27 at 14:54 +0000, Stefano Stabellini wrote:
MMCFG is a Linux config option, not to be confused with
PHYSDEVOP_pci_mmcfg_reserved that is a Xen hypercall interface.  I don't
think that the way Linux (or FreeBSD) call PHYSDEVOP_pci_mmcfg_reserved
is relevant.
My (possibly flawed) understanding was that pci_mmcfg_reserved was
intended to propagate the result of dom0 parsing some firmware table or
other to the hypervisor.
That's not flawed at all.
I think that's a first in this thread ;-)

In Linux dom0 we call it while walking pci_mmcfg_list, which, looking at
arch/x86/pci/mmconfig-shared.c, is populated by pci_parse_mcfg walking
over a "struct acpi_table_mcfg" (there also appear to be a number of
processor-family-derived entries, which I guess are "quirks" of some
sort).
Right - this parses ACPI tables (plus applies some knowledge about
certain specific systems/chipsets/CPUs) and verifies that the space
needed for the MMCFG region is properly reserved either in E820 or
in the ACPI specified resources (only if so Linux decides to use
MMCFG and consequently also tells Xen that it may use it).
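For reference, a minimal sketch of that dom0-to-Xen propagation. The struct
layout follows Xen's public physdev.h as I remember it, but the loop and the
per-region descriptor are made up for illustration and are not the actual
Linux code:

    #include <stdint.h>

    /* Mirrors struct physdev_pci_mmcfg_reserved from Xen's
     * public/physdev.h (layout from memory; double-check the header). */
    struct physdev_pci_mmcfg_reserved {
        uint64_t address;            /* physical base of the ECAM window */
        uint16_t segment;            /* PCI segment (domain) number      */
        uint8_t  start_bus, end_bus; /* bus range covered by the window  */
        uint32_t flags;              /* XEN_PCI_MMCFG_RESERVED if the
                                        firmware reserved the region     */
    };

    /* Illustrative only: report every MCFG region that dom0 has
     * validated against the E820 / ACPI resources to Xen.  The iterator
     * and the mmcfg_region type are hypothetical stand-ins for whatever
     * the kernel keeps in pci_mmcfg_list. */
    static void report_mmcfg_to_xen(void)
    {
        struct mmcfg_region *r;

        for_each_validated_mmcfg_region(r) {
            struct physdev_pci_mmcfg_reserved arg = {
                .address   = r->base,
                .segment   = r->segment,
                .start_bus = r->start_bus,
                .end_bus   = r->end_bus,
                .flags     = XEN_PCI_MMCFG_RESERVED,
            };
            HYPERVISOR_physdev_op(PHYSDEVOP_pci_mmcfg_reserved, &arg);
        }
    }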
Thanks.

So I think what I wrote in <1424948710.14641.25.camel@xxxxxxxxxx>
applies as is to Device Tree based ARM devices, including the need for
the PHYSDEVOP_pci_host_bridge_add call.
Although I understand now that PHYSDEVOP_pci_mmcfg_reserved was
intended for passing down firmware information to Xen, as the
information that we need is exactly the same, I think it would be
acceptable to use the same hypercall on ARM too.
I strongly disagree, they have very different semantics and overloading
the existing interface would be both wrong and confusing.

It'll also make things harder in the ACPI case where we want to use the
existing hypercall for its original purpose (to propagate MCFG
information to Xen).

I am not hard set on this, and the new hypercall is also a viable option.
However, if we do introduce a new hypercall as Ian suggested, do we need
to take into account the possibility that a host bridge might have
multiple cfg memory ranges?
I don't believe so; a host bridge has a 1:1 relationship with CFG space.

Ian.
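To make the shape of such a new hypercall concrete, here is a minimal sketch
of what its argument might carry, assuming the 1:1 bridge/CFG-space
relationship above. The operation name comes from the RFC; the struct name,
fields, and example values are illustrative, not a settled ABI:

    #include <stdint.h>

    /* Hypothetical argument for the proposed PHYSDEVOP_pci_host_bridge_add:
     * one host bridge, hence a single config (ECAM) window. */
    struct physdev_pci_host_bridge_add {
        /* IN */
        uint16_t seg;        /* PCI segment number for this bridge */
        uint8_t  bus;        /* root bus number behind the bridge  */
        uint64_t cfg_base;   /* physical base of its config window */
        uint64_t cfg_size;   /* size of that single window         */
    };

    /* Dom0 would issue one such call per host bridge it discovers in
     * the device tree, e.g. (addresses made up):
     *
     *     struct physdev_pci_host_bridge_add add = {
     *         .seg      = 0,
     *         .bus      = 0,
     *         .cfg_base = 0x40000000ULL,
     *         .cfg_size = 0x10000000ULL,
     *     };
     *     HYPERVISOR_physdev_op(PHYSDEVOP_pci_host_bridge_add, &add);
     */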

I agree with this flow. It fits into what we have implemented at our end.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

