
Re: [Xen-devel] RFC: [PATCH 1/3] Enhance platform support for PCI

On Fri, 2015-02-20 at 15:15 +0000, Jan Beulich wrote:
> > That's the issue we are trying to resolve, with device tree there is no
> > explicit segment ID, so we have an essentially unindexed set of PCI
> > buses in both Xen and dom0.
> How that? What if two bus numbers are equal? There ought to be
> some kind of topology information. Or if all buses are distinct, then
> you don't need a segment number.

It's very possible that I simply don't have the PCI terminology straight
in my head, leading to me talking nonsense.

I'll explain how I'm using it and perhaps you can put me straight...

My understanding was that a PCI segment equates to a PCI host
controller, i.e. a specific instance of some PCI host IP on an SoC. I
think the PCI specs call a segment a "PCI domain", a term we avoided
in Xen due to the obvious potential for confusion.

A PCI host controller defines the root of a bus hierarchy. BDFs need
only be unique within a single host controller, since the segment acts
as a higher-level namespace over the BDFs.

So given a system with two PCI host controllers we end up with two
segments (let's say A and B; how to choose them is the topic of this
thread) and it is acceptable for both to contain a bus 0 with a device 0
on it, i.e. (A:0:0.0) and (B:0:0.0) are distinct and can coexist.

It sounds like you are saying that this is not actually acceptable and
that 0:0.0 must be unique in the system irrespective of the associated
segment? In other words, (B:0:0.0) would have to be e.g. (B:1:0.0)
instead?

Just for reference a DT node describing a PCI host controller might look
like (taking the APM Mustang one as an example):

                pcie0: pcie@1f2b0000 {
                        status = "disabled";
                        device_type = "pci";
                        compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
                        #interrupt-cells = <1>;
                        #size-cells = <2>;
                        #address-cells = <3>;
                        reg = < 0x00 0x1f2b0000 0x0 0x00010000   /* Controller registers */
                                0xe0 0xd0000000 0x0 0x00040000>; /* PCI config space */
                        reg-names = "csr", "cfg";
                        ranges = <0x01000000 0x00 0x00000000 0xe0 0x10000000 0x00 0x00010000   /* io */
                                  0x02000000 0x00 0x80000000 0xe1 0x80000000 0x00 0x80000000>; /* mem */
                        dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000
                                      0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
                        interrupt-map-mask = <0x0 0x0 0x0 0x7>;
                        interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xc2 0x1
                                         0x0 0x0 0x0 0x2 &gic 0x0 0xc3 0x1
                                         0x0 0x0 0x0 0x3 &gic 0x0 0xc4 0x1
                                         0x0 0x0 0x0 0x4 &gic 0x0 0xc5 0x1>;
                        clocks = <&pcie0clk 0>;
                };

I expect most of this is uninteresting, but the key thing is that there
is no segment number, nor any topology information relating it to e.g.
"pcie1: pcie@1f2c0000" (that node looks identical except that the base
addresses and interrupt numbers differ).

(FWIW, reg here shows that the PCI config space is at 0xe0d0000000,
interrupt-map shows that SPI (AKA GSI) 0xc2 is INTA and 0xc3 is INTB (I
think; I'm a bit fuzzy on this), and ranges is the space where BARs
live. I think you can safely ignore everything else for the purposes of
this discussion.)

Xen-devel mailing list