Re: [Xen-devel] xen/arm: Hiding SMMUs from Dom0 when using ACPI on Xen
+Chales. Hi Julien,

On 2/27/2017 11:42 PM, Julien Grall wrote:

You are correct that it is not mandated by the spec, but AFAIK there seems to be no valid use case for that.

On 02/27/2017 04:58 PM, Shanker Donthineni wrote:

Hi Julien,

Hi Shanker,

Please don't drop people in CC. In my case, any e-mail I am not CCed on skips my inbox and I may not read it for a while.

On 02/27/2017 08:12 AM, Julien Grall wrote:

On 27/02/17 13:23, Vijay Kilari wrote:

Hi Julien,

Hello Vijay,

On Wed, Feb 22, 2017 at 7:40 PM, Julien Grall <julien.grall@xxxxxxx> wrote:

Hello,

There were a few discussions recently about hiding SMMUs from DOM0 when using ACPI. I thought it would be good to have a separate thread for this.

When using ACPI, the SMMUs will be described in the IO Remapping Table (IORT). The specification can be found on the ARM website [1]. For a brief summary, the IORT can be used to discover the SMMUs present on the platform and to find, for a given device, the IDs used to configure components such as the ITS (DeviceID) and the SMMU (StreamID). Appendix A of the specification gives an example of how the DeviceID and StreamID can be found. For instance, when a PCI device is both protected by an SMMU and MSI-capable, the following translation will happen:

    RID -> StreamID -> DeviceID

Currently, SMMUs are hidden from DOM0 because they are being used by Xen and we don't support stage-1 SMMU. If we pass the IORT as it is, DOM0 will try to initialize the SMMU and crash. I first thought about using a Xen-specific way (STAO) or extending a flag in the IORT, but that is not ideal. So we would have to rewrite the IORT for DOM0. Given that a range of RIDs can be mapped to multiple ranges of DeviceIDs, we would have to translate RIDs one by one to find the associated DeviceID. I think this may end up in complex code and a big IORT table.

Why can't we replace the output base of the IORT PCI node with the SMMU's output base? I mean, similar to a PCI node without an SMMU, why can't we replace the output base of the PCI node with the SMMU's output base?

Because I don't see anything in the spec preventing one RC ID mapping from producing multiple SMMU ID mappings. So which output base would you use?

Basically, remove the SMMU nodes, and replace the output of the PCIe and named node ID mappings with ITS nodes:

    RID --> StreamID --> DeviceID --> ITS DeviceID  =  RID --> DeviceID --> ITS DeviceID

Can you detail it? You seem to assume that one RC ID mapping range will only produce one ID mapping range. AFAICT, this is not mandated by the spec.

RID ranges should not overlap between ID Array entries. I believe this will be updated in the next IORT spec revision. I have started working on recreating the IORT for Dom0 with this restriction. The issue I see is that RID is [15:0] whereas DeviceID is [17:0].

Actually, the DeviceID is a 32-bit field.

However, given that the DeviceID will be used by DOM0 only to configure the ITS, we have no need to have the DOM0 DeviceID equal to the host DeviceID. So I think we could simplify our life by generating a DeviceID for each RID range.

If the DOM0 DeviceID != host DeviceID, then we cannot initialize the ITS using DOM0 ITS commands (MAPD). So, is it concluded that the ITS initializes all the devices with platform-specific DeviceIDs in Xen?

Initializing the ITS using DOM0 ITS commands is a workaround until we get PCI passthrough done. It would still be possible to implement that with vDeviceID != pDeviceID, as Xen would likely have the mapping between the two DeviceIDs.

I believe mapping Dom0 ITS commands to Xen ITS commands one-to-one is the better approach.
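To make the vDeviceID != pDeviceID point concrete, here is a minimal sketch, assuming Xen keeps a per-domain translation table when it emulates the Dom0 ITS command queue. The names struct vdevid_map and translate_devid are made up for illustration; this is not the actual vITS code.

/*
 * Illustration only -- not the Xen vITS code.  If Dom0 sees a virtual
 * DeviceID that differs from the physical one, the ITS command queue
 * emulation (e.g. when trapping MAPD/MAPTI) only needs a per-domain
 * vDeviceID -> pDeviceID lookup before forwarding the command.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct vdevid_map {
    uint32_t vdevid;   /* DeviceID exposed to Dom0 via the rewritten IORT */
    uint32_t pdevid;   /* DeviceID programmed into the physical ITS       */
};

/*
 * Linear search over a small per-domain table; a real implementation
 * would probably use a radix tree or hash keyed on vdevid.
 */
static bool translate_devid(const struct vdevid_map *map, size_t nr,
                            uint32_t vdevid, uint32_t *pdevid)
{
    for ( size_t i = 0; i < nr; i++ )
    {
        if ( map[i].vdevid == vdevid )
        {
            *pdevid = map[i].pdevid;
            return true;
        }
    }

    return false;   /* unknown DeviceID: reject the emulated command */
}

With such a lookup in the MAPD/MAPTI emulation path, the DeviceID exposed to Dom0 in the rewritten IORT never has to match the physical one.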
Physical DeviceID is unique per ITS group, not a system-wide unique ID.

As for a guest, you don't care about the virtual DeviceID for DOM0 as long as you are able to map it to the host ITS and host DeviceID.

> In the case of direct vLPI, the LPI number has to be programmed whenever Dom0/DomU calls the MAPTI command, not at the time of PCIe device creation.

I am a bit confused. Why are you speaking about direct vLPI here? This has no relation with the IORT.
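Coming back to the IORT rewrite for Dom0 discussed earlier in the thread, the rough sketch below uses simplified types modelled on the IORT ID-mapping format; compose_rc_mapping is a hypothetical helper, not existing Xen code. It shows how a root-complex ID-mapping entry could be folded through the SMMU node's own mappings so the rewritten table maps RID -> DeviceID directly, and why one RC entry may expand into several entries when its StreamID range straddles more than one SMMU -> ITS mapping.

/*
 * Hypothetical sketch: compose an RC ID mapping (RID -> StreamID,
 * pointing at an SMMU node) with that SMMU node's own ID mappings
 * (StreamID -> DeviceID, pointing at an ITS group), so the Dom0 copy
 * of the IORT can drop the SMMU node and map RID -> DeviceID directly.
 * Fields follow the IORT ID-mapping format; the output reference is
 * omitted (composed entries would point straight at the ITS group).
 */
#include <stddef.h>
#include <stdint.h>

struct id_mapping {
    uint32_t input_base;   /* first input ID covered by this entry */
    uint32_t id_count;     /* number of IDs in the range           */
    uint32_t output_base;  /* first output ID produced             */
};

/*
 * Emit one composed RID -> DeviceID entry per overlap between the RC
 * entry's StreamID range and the SMMU's StreamID -> DeviceID entries.
 * Returns the number of entries written to 'out' (at most 'max_out').
 * A single RC entry overlapping several SMMU entries is exactly the
 * case where one input mapping expands into multiple output mappings.
 */
static size_t compose_rc_mapping(const struct id_mapping *rc,
                                 const struct id_mapping *smmu,
                                 size_t nr_smmu,
                                 struct id_mapping *out, size_t max_out)
{
    size_t n = 0;

    for ( size_t i = 0; i < nr_smmu && n < max_out; i++ )
    {
        /* StreamID range produced by the RC entry. */
        uint32_t sid_start = rc->output_base;
        uint32_t sid_end   = rc->output_base + rc->id_count - 1;

        /* StreamID range consumed by this SMMU entry. */
        uint32_t in_start = smmu[i].input_base;
        uint32_t in_end   = smmu[i].input_base + smmu[i].id_count - 1;

        uint32_t lo = sid_start > in_start ? sid_start : in_start;
        uint32_t hi = sid_end   < in_end   ? sid_end   : in_end;

        if ( lo > hi )
            continue;   /* no overlap with this SMMU entry */

        out[n].input_base  = rc->input_base + (lo - sid_start);
        out[n].id_count    = hi - lo + 1;
        out[n].output_base = smmu[i].output_base + (lo - in_start);
        n++;
    }

    return n;
}

Alternatively, as suggested above, the rewrite could simply generate a fresh DeviceID range per RID range, which avoids this splitting entirely.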
Cheers,

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel