
Re: [Xen-devel] RFC: [PATCH 1/3] Enhance platform support for PCI



On Thu, 12 Mar 2015, Jan Beulich wrote:
> >>> On 11.03.15 at 19:26, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > On Mon, 23 Feb 2015, Jan Beulich wrote:
> >> >>> On 20.02.15 at 18:33, <ian.campbell@xxxxxxxxxx> wrote:
> >> > On Fri, 2015-02-20 at 15:15 +0000, Jan Beulich wrote:
> >> >> > That's the issue we are trying to resolve, with device tree there is 
> >> >> > no
> >> >> > explicit segment ID, so we have an essentially unindexed set of PCI
> >> >> > buses in both Xen and dom0.
> >> >> 
> >> >> How that? What if two bus numbers are equal? There ought to be
> >> >> some kind of topology information. Or if all buses are distinct, then
> >> >> you don't need a segment number.
> >> > 
> >> > It's very possible that I simply don't have the PCI terminology straight
> >> > in my head, leading to me talking nonsense.
> >> > 
> >> > I'll explain how I'm using it and perhaps you can put me straight...
> >> > 
> >> > My understanding was that a PCI segment equates to a PCI host
> >> > controller, i.e. a specific instance of some PCI host IP on an SoC.
> >> 
> >> No - there can be multiple roots (i.e. host bridges) on a single
> >> segment. Segments are - afaict - purely a scalability extension
> >> allowing to overcome the 256 bus limit.
> > 
> > Actually this turns out to be wrong. On the PCI MCFG spec it is clearly
> > stated:
> > 
> > "The MCFG table format allows for more than one memory mapped base
> > address entry provided each entry (memory mapped configuration space
> > base address allocation structure) corresponds to a unique PCI Segment
> > Group consisting of 256 PCI buses. Multiple entries corresponding to a
> > single PCI Segment Group is not allowed."
> 
> For one, what you quote is in no contradiction to what I said. All it
> specifies is that there shouldn't be multiple MCFG table entries
> specifying the same segment. Whether on any such segment there
> is a single host bridge or multiple of them is of no interest here.

I thought we had already established that one host bridge corresponds
to one PCI config memory region; see the last sentence in
http://marc.info/?l=xen-devel&m=142529695117142.  Did I misunderstand
it?  If a host bridge has a 1:1 relationship with CFG space, then each
MCFG entry would correspond to one host bridge and one segment.
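
For reference, by "MCFG entry" I mean one of the 16-byte memory mapped
configuration space base address allocation structures, roughly laid
out as below (a sketch; the field names are my own shorthand, not taken
from any particular header):

    #include <stdint.h>

    /* One MCFG config space base address allocation structure (sketch).
     * Each entry describes a single ECAM window for one segment. */
    struct mcfg_entry {
        uint64_t base_address;  /* physical base of the ECAM window */
        uint16_t segment;       /* PCI segment group number */
        uint8_t  start_bus;     /* first bus number decoded by this window */
        uint8_t  end_bus;       /* last bus number decoded by this window */
        uint32_t reserved;
    };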

But it looks like things are more complicated than that.


> And then the present x86 Linux code suggests that there might be
> systems actually violating this (the fact that each entry names not
> only a segment, but also a bus range also kind of suggests this
> despite the wording above); see commit 068258bc15 and its
> neighbors - even if it talks about the address ranges coming from
> other than ACPI tables, firmware wanting to express them by ACPI
> tables would have to violate that rule.

Interesting.
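
If I'm reading that right, such a system would effectively need its
firmware to describe two ECAM windows for the same segment, something
like the below (purely illustrative values, reusing the mcfg_entry
sketch from above):

    /* Two windows covering different bus ranges of the same segment -
     * exactly what the MCFG wording quoted above does not allow.
     * Base addresses and bus ranges are made up for illustration. */
    struct mcfg_entry same_segment[] = {
        { .base_address = 0xe0000000UL, .segment = 0,
          .start_bus = 0x00, .end_bus = 0x3f, .reserved = 0 },
        { .base_address = 0xf0000000UL, .segment = 0,
          .start_bus = 0x40, .end_bus = 0x7f, .reserved = 0 },
    };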


> > I think that it is reasonable to expect device tree systems to respect this
> > too. 
> 
> Not really - as soon as we leave ACPI land, we're free to arrange
> things however they suit us best (of course in agreement with other
> components involved, like Dom0 in this case), and for that case the
> cited Linux commit is a proper reference that it can (and has been)
> done differently by system designers.

OK. It looks like everything should still work with the hypercall
proposed by Ian anyway.
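
Just to make sure I understand what Dom0 would need to tell Xen for
each host bridge, I'm picturing something along these lines (a sketch
of the information involved only, not Ian's actual proposed interface;
the structure and field names below are mine):

    #include <stdint.h>

    /* Per-host-bridge information Dom0 would report to Xen: the
     * (segment, bus range) it assigned and where the corresponding
     * config space window lives.  Illustrative sketch only. */
    struct pci_host_bridge_report {
        uint16_t segment;    /* segment number assigned by Dom0 */
        uint8_t  start_bus;  /* first bus behind this host bridge */
        uint8_t  end_bus;    /* last bus behind this host bridge */
        uint64_t cfg_base;   /* physical base of the config window */
        uint64_t cfg_size;   /* size of the config window */
    };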
