
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document



Hello Manish,

On 25/01/17 04:37, Manish Jaggi wrote:
> On 01/24/2017 11:13 PM, Julien Grall wrote:
>> On 19/01/17 05:09, Manish Jaggi wrote:
>>> I think PCI passthrough and DOM0 w/ACPI enumerating devices on PCI
>>> are separate features.
>>> Without Xen mapping the PCI config space region in stage-2 of dom0,
>>> an ACPI dom0 won't boot.
>>> Currently, for DT, Xen does that.

>>> So can we have 2 design documents?
>>> a) PCI passthrough
>>> b) ACPI dom0/domU support in Xen and Linux
>>>    - this may include:
>>>      b.1 Passing the IORT to Dom0 without the SMMU
>>>      b.2 A hypercall to map the PCI config space in dom0
>>>      b.3 <more>
>>>
>>> What do you think?

>> I don't think ACPI should be treated in a separate design document.
> As PCI passthrough support will take time to mature, why should we
> hold the ACPI design?
> If I can boot dom0/domU with ACPI as it works with DT today, it would
> be a good milestone.

The way PCI works on DT today is a hack. There is no SMMU support, and the first version of the GICv3 ITS support will contain hardcoded DeviceIDs (or something very similar).

The current hack will introduce problems on platforms where a specific host controller driver is necessary to access the configuration space. Indeed, at the beginning Xen may not have such a driver available (this will depend on contributions), but we still need to be able to use PCI with Xen.

We chose this way on DT because we didn't know when PCI passthrough would be added to Xen.
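For context on the "hardcoded DeviceID" point: on many platforms the DeviceID presented to the ITS is simply the PCI Requester ID, optionally offset per host bridge. The hack amounts to assuming that identity mapping instead of discovering it from firmware tables. A trivial illustration in C (the deviceid_base parameter is hypothetical):

    #include <stdint.h>

    /* The PCI Requester ID is (bus << 8) | devfn. */
    static inline uint32_t pci_requester_id(uint8_t bus, uint8_t devfn)
    {
        return ((uint32_t)bus << 8) | devfn;
    }

    /*
     * Hypothetical per-host-bridge offset; deviceid_base == 0 is the
     * identity mapping the current hack hardcodes.
     */
    static inline uint32_t pci_device_id(uint8_t bus, uint8_t devfn,
                                         uint32_t deviceid_base)
    {
        return deviceid_base + pci_requester_id(bus, devfn);
    }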

As mentioned in the introduction of the design document, I envision the PCI passthrough implementation in 2 phases:
        - Phase 1: Register all PCI devices in Xen => will allow us to use the ITS and the SMMU with PCI in Xen
        - Phase 2: Assign devices to guests

This design document will cover both phases because they are tied together. But the implementation can be decoupled; it would be possible (and is also my plan) to see the 2 phases upstreamed in different Xen releases.

Phase 1 will cover everything necessary for Xen to discover and register PCI devices. This includes the ACPI support (IORT, ...).
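As a point of comparison (and only an assumption on my side that ARM would reuse it): on x86, Dom0 already reports discovered devices to Xen with the PHYSDEVOP_pci_device_add hypercall. A minimal sketch of the Phase 1 registration path under that assumption; the struct below is simplified from xen/include/public/physdev.h (the SR-IOV fields are dropped) and xen_physdev_op() is a hypothetical Dom0-side wrapper:

    #include <stdint.h>

    /* Simplified from xen/include/public/physdev.h. */
    #define PHYSDEVOP_pci_device_add 25

    struct physdev_pci_device_add {
        uint16_t seg;       /* PCI segment/domain of the device */
        uint8_t  bus;
        uint8_t  devfn;
        uint32_t flags;
    };

    /* Hypothetical wrapper issuing the physdev hypercall from Dom0. */
    int xen_physdev_op(unsigned int cmd, void *arg);

    /* Dom0 would call this for every device found during enumeration. */
    static int report_pci_device_to_xen(uint16_t seg, uint8_t bus,
                                        uint8_t devfn)
    {
        struct physdev_pci_device_add add = {
            .seg = seg, .bus = bus, .devfn = devfn, .flags = 0,
        };

        return xen_physdev_op(PHYSDEVOP_pci_device_add, &add);
    }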

I see little point in having a temporary solution for ACPI that will require review bandwidth. It would be better to spend this bandwidth on getting a good design document.

When we brainstormed about PCI passthrough, we identified some tasks that could be done in parallel with the design document. The list I have in mind is:
        * SMMUv3: I am aware of a company working on this
        * GICv3-ITS: work done by ARM (see [2])
        * IORT: it is required to discover the ITSes and the SMMUs with ACPI, so it can at least be parsed (see the sketch after this list; I will speak about hiding some parts from DOM0 later)
        * PCI support for SMMUv2
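To make the IORT item concrete, here is a rough sketch of walking the IORT node array to locate ITS group and SMMU nodes. The structures are hand-written, simplified versions of the layout described in the IORT spec (ARM DEN 0049); a real implementation would use the ACPI subsystem's definitions and validate node lengths against the table length:

    #include <stdint.h>
    #include <stdio.h>

    /* Standard 36-byte ACPI table header. */
    struct acpi_table_header {
        char     signature[4];
        uint32_t length;
        uint8_t  revision;
        uint8_t  checksum;
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        char     asl_compiler_id[4];
        uint32_t asl_compiler_revision;
    } __attribute__((packed));

    /* Simplified IORT table header and node header (ARM DEN 0049). */
    struct acpi_table_iort {
        struct acpi_table_header header;
        uint32_t node_count;
        uint32_t node_offset;   /* offset of the first node */
        uint32_t reserved;
    } __attribute__((packed));

    struct acpi_iort_node {
        uint8_t  type;
        uint16_t length;        /* length of this node, incl. mappings */
        uint8_t  revision;
        uint32_t reserved;
        uint32_t mapping_count;
        uint32_t mapping_offset;
        /* type-specific data and the ID mapping array follow */
    } __attribute__((packed));

    #define IORT_NODE_ITS_GROUP 0x00
    #define IORT_NODE_SMMU_V1V2 0x03
    #define IORT_NODE_SMMU_V3   0x04

    /* Walk every node once, noting the ITSes and SMMUs we care about. */
    static void iort_walk(const struct acpi_table_iort *iort)
    {
        const uint8_t *p = (const uint8_t *)iort + iort->node_offset;
        uint32_t i;

        for ( i = 0; i < iort->node_count; i++ )
        {
            const struct acpi_iort_node *node =
                (const struct acpi_iort_node *)p;

            switch ( node->type )
            {
            case IORT_NODE_ITS_GROUP:
                printf("ITS group node, %u mapping(s)\n",
                       (unsigned)node->mapping_count);
                break;
            case IORT_NODE_SMMU_V1V2:
            case IORT_NODE_SMMU_V3:
                printf("SMMU node (type %u)\n", node->type);
                break;
            }
            p += node->length;
        }
    }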

There are quite a few companies willing to contribute to PCI passthrough, so we need some coordination to avoid redundant work. Please get in touch with me if you are interested in working on one of these items.

>>> Later, when the PCI passthrough design gets mature and implemented,
>>> the support can be extended.
>> The support of ACPI may affect some of the decisions (such as
>> hypercalls) and we have to know them now.

> Still, it can be independent, with only the dependent features
> implemented, or placeholders can be added.
>> Regarding the ECAM region not being mapped: this is not related to
>> PCI passthrough but to how MMIO regions are mapped with ACPI. This is
>> a separate subject already under discussion (see [1]).

> What about IORT generation for Dom0 without the SMMU?

Looking at the IORT, the ITS node takes the StreamID as input when the device is protected by an SMMU.

Rather than removing the SMMU nodes from the IORT, I would blacklist them using the STAO table (or maybe we could introduce a disable flag in the IORT?).

Stefano, IIRC you took part in the design of the IORT. How did you envision hiding the SMMU from DOM0?
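For illustration of why stripping the SMMU nodes is awkward: each IORT node carries an array of ID mappings, and removing an SMMU node means every root-complex mapping that outputs to it must be composed with the SMMU's own mapping, so that the root complex outputs DeviceIDs straight to the ITS group. A sketch of that composition, under the simplifying (and possibly wrong) assumption that the whole root-complex range fits inside one SMMU mapping:

    #include <stdint.h>

    /* ID mapping layout, per ARM DEN 0049. */
    struct iort_id_mapping {
        uint32_t input_base;        /* first ID of the range */
        uint32_t id_count;          /* number of IDs in the range - 1 */
        uint32_t output_base;       /* first translated ID */
        uint32_t output_reference;  /* offset of the target node */
        uint32_t flags;
    };

    /* Translate id_in through one mapping; returns 1 on a match. */
    static int id_map_translate(const struct iort_id_mapping *map,
                                uint32_t id_in, uint32_t *id_out)
    {
        if ( id_in < map->input_base ||
             id_in > map->input_base + map->id_count )
            return 0;
        *id_out = map->output_base + (id_in - map->input_base);
        return 1;
    }

    /*
     * Compose RC -> SMMU with SMMU -> ITS into a direct RC -> ITS
     * mapping for the Dom0 copy of the table. Assumes the whole RC
     * range is covered by the single SMMU mapping.
     */
    static void compose_mappings(const struct iort_id_mapping *rc_to_smmu,
                                 const struct iort_id_mapping *smmu_to_its,
                                 struct iort_id_mapping *rc_to_its)
    {
        uint32_t first_out = 0;

        *rc_to_its = *rc_to_smmu;
        id_map_translate(smmu_to_its, rc_to_smmu->output_base, &first_out);
        rc_to_its->output_base = first_out;
        rc_to_its->output_reference = smmu_to_its->output_reference;
    }

Blacklisting via the STAO table, as suggested above, would avoid this rewriting entirely, which is why I would prefer it.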

> I believe it is not dependent on [1].
> Cheers,

>> [1] https://lists.xenproject.org/archives/html/xen-devel/2017-01/msg01607.html


Cheers,

[2] https://lists.xenproject.org/archives/html/xen-devel/2016-12/msg02742.html

--
Julien Grall


 

