
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document



On Tue, 24 Jan 2017, Julien Grall wrote:
> > > ## Discovering and registering host bridges
> > > 
> > > Neither ACPI nor Device Tree provides enough information to fully
> > > instantiate a host bridge driver. In the case of ACPI, some data may come
> > > from ASL,
> > 
> > The data available from ASL is just to initialize quirks and non-ECAM
> > controllers, right? Given that SBSA mandates ECAM, and we assume that
> > ACPI is mostly (if not only) for servers, then I think it is safe to say
> > that in the case of ACPI we should have all the info to fully
> > instantiate a host bridge driver.
> 
> From the spec, the MCFG will only describe host bridges available at boot (see
> 4.2 in "PCI firmware specification, rev 3.2"). All the other host bridges will
> be described in ASL.
> 
> So we need DOM0 to feed Xen information about the latter host bridges.

Unfortunately the PCI specs are only accessible to PCI-SIG member
organizations. In other words, I cannot read the doc.

Could you please explain what kind of host bridges are not expected to
be available at boot? Do you know of any examples?


> > > whilst for Device Tree the segment number is not available.
> > > 
> > > So Xen needs to rely on DOM0 to discover the host bridges and notify Xen
> > > of all the relevant information. This will be done via a new hypercall
> > > PHYSDEVOP_pci_host_bridge_add. The layout of the structure will be:
> > 
> > I understand that the main purpose of this hypercall is to get Xen and Dom0
> > to agree on the segment numbers, but why is it necessary? If Dom0 has an
> > emulated controller like any other guest, do we care what segment numbers
> > Dom0 will use?
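(For concreteness, here is a sketch of what such a hypercall argument might
contain. The quoted mail snips the actual layout, so the field names below are
my assumption about the minimum Xen would need, not the interface actually
proposed in the doc:)

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical layout for PHYSDEVOP_pci_host_bridge_add: Dom0 reports
 * each host bridge it has discovered, so Xen can associate a segment
 * number with that bridge's configuration-space window. All field
 * names are my guess, for illustration only. */
struct physdev_pci_host_bridge_add {
    uint16_t seg;        /* PCI segment number as seen by Dom0        */
    uint8_t  bus_start;  /* first bus number behind this bridge       */
    uint8_t  bus_nr;     /* number of buses behind this bridge        */
    uint64_t cfg_base;   /* physical base of the config-space window  */
    uint64_t cfg_size;   /* size of the config-space window           */
};
```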
> 
> I was not planning to have an emulated controller for DOM0. The physical one
> is not necessarily ECAM compliant, so we would have to either emulate the
> physical one (meaning multiple different emulations) or an ECAM-compliant one.
> 
> The latter is not possible because you don't know if there is enough free MMIO
> space for the emulation.
> 
> In the case of ARM, I don't see much point in emulating the host bridge for
> DOM0. The only thing we need in Xen is to access the configuration space; we
> don't care about driving the host bridge. So I would let DOM0 deal with
> that.
> 
> Also, I don't see any reason for ARM to trap DOM0 configuration space
> accesses. MSIs will be configured using the interrupt controller, and DOM0
> is a trusted domain.

These last few sentences raise a lot of questions. Maybe I am missing
something. You might want to clarify the strategy for Dom0 and DomUs,
and how they differ, in the next version of the doc.

At some point you wrote "Instantiation of a specific driver for the host
controller can be easily done if Xen has the information to detect it.
However, those drivers may require resources described in ASL." Does that
mean you plan to drive the physical host bridge from Xen and Dom0
simultaneously?

Otherwise, if Dom0 is the only one to drive the physical host bridge,
and Xen is the one to provide the emulated host bridge, how are DomU PCI
config reads and writes supposed to work in detail? How is MSI
configuration supposed to work?
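(Background, to make the question concrete: if Xen presents an emulated ECAM
window to a DomU, every trapped MMIO access has to be decoded back into a
bus/device/function/register tuple before being forwarded to whatever drives
the real hardware. The encoding itself is fixed by the PCI Express spec, 4 KB
of configuration space per function:)

```c
#include <stdint.h>

/* ECAM maps the config space of each function at a fixed offset:
 *   offset = (bus << 20) | (device << 15) | (function << 12) | register
 */
static inline uint32_t ecam_offset(uint8_t bus, uint8_t dev,
                                   uint8_t fn, uint16_t reg)
{
    return ((uint32_t)bus << 20) | ((uint32_t)(dev & 0x1f) << 15) |
           ((uint32_t)(fn & 0x7) << 12) | (reg & 0xfff);
}

/* Reverse mapping: recover BDF + register from a trapped MMIO offset. */
static inline void ecam_decode(uint32_t off, uint8_t *bus, uint8_t *dev,
                               uint8_t *fn, uint16_t *reg)
{
    *bus = (off >> 20) & 0xff;
    *dev = (off >> 15) & 0x1f;
    *fn  = (off >> 12) & 0x7;
    *reg = off & 0xfff;
}
```

The open question is what happens after the decode: who performs the actual
access if the physical bridge is non-ECAM and driven by Dom0.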


> > > XXX: Shall we limit DOM0's access to the configuration space from that
> > > moment?
> > 
> > If we can, we should.
> 
> What would the benefits be? For now, I see a big drawback: resetting a PCI
> device would need to be done in Xen rather than DOM0. As you may know, there
> are a lot of quirks for reset.
> 
> So for me, it looks more sensible to handle this in DOM0 and give DOM0 full
> access to the configuration space. After all, it is a trusted domain.

PCI reset is worthy of its own chapter in the doc :-)

Dom0 is a trusted domain, but when possible, I consider it an improvement
to limit the amount of trust we put in it. Also, as I wrote above, I
don't understand what the plan is for dealing with concurrent accesses to
the host bridge from Dom0 and Xen.

In any case, regarding PCI reset, we should dig out past discussions on
the merits of doing reset in the hypervisor vs. dom0. I agree
introducing PCI reset quirks in Xen is not nice but I recall that
XenClient did it to avoid possible misbehaviors of the device. We need
to be careful about ordering PCI reset against domain destruction. I
couldn't find any email discussions to reference, maybe it is worth
contacting the OpenXT guys about it.
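(To illustrate what hosting reset quirks in Xen would mean: the usual pattern,
as in Linux, is a table keyed by vendor/device ID that overrides the generic
reset path, e.g. FLR. The sketch below is purely illustrative; all names and
IDs are made up:)

```c
#include <stdint.h>
#include <stddef.h>

struct pci_dev;  /* opaque here */

/* Hypothetical reset-quirk table: before falling back to a generic
 * reset, look up a device-specific workaround by vendor/device ID. */
struct pci_reset_quirk {
    uint16_t vendor;
    uint16_t device;
    int (*reset)(struct pci_dev *pdev);
};

static int bogus_dev_reset(struct pci_dev *pdev)
{
    (void)pdev;
    /* device-specific register pokes would go here */
    return 0;
}

static const struct pci_reset_quirk reset_quirks[] = {
    { 0x1234, 0x5678, bogus_dev_reset },  /* IDs are made up */
};

/* Returns the quirk handler, or NULL to use the generic reset path. */
static int (*find_reset_quirk(uint16_t vendor,
                              uint16_t device))(struct pci_dev *)
{
    for (size_t i = 0; i < sizeof(reset_quirks) / sizeof(reset_quirks[0]); i++)
        if (reset_quirks[i].vendor == vendor &&
            reset_quirks[i].device == device)
            return reset_quirks[i].reset;
    return NULL;
}
```

Every entry in such a table is another piece of device-specific code that
would have to live, and be maintained, in the hypervisor.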

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

