
[Xen-devel] [RFC + Queries] Flow of PCI passthrough in ARM



Hi,
Below is the flow I am working on. Please provide your comments; I
have a couple of queries as well.

a) The device tree has SMMU nodes, and each SMMU node has the
mmu-master property. In our SoC DT the mmu-master is a PCIe node in
the device tree.

b) Xen parses the device tree and prepares a list which stores the PCI
device tree node pointers. The order in the device tree is mapped to
the segment number in subsequent calls: e.g. the 1st PCI node found is
segment 0, the 2nd is segment 1, and so on.
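To make step b) concrete, here is a minimal sketch of how the DT order
could be turned into segment numbers. The structure and helper names
(struct dt_device_node, pci_dt_node_add, pci_find_host_bridge_node)
are simplified stand-ins for illustration, not the real Xen types:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for Xen's device tree node type. */
struct dt_device_node {
    const char *name;
    struct dt_device_node *next;   /* next PCIe node found in the DT */
};

static struct dt_device_node *pci_dt_list;   /* list head: segment 0 */

/* Append a PCIe node; its position in the list becomes its segment. */
static void pci_dt_node_add(struct dt_device_node *node)
{
    struct dt_device_node **pp = &pci_dt_list;

    while ( *pp )
        pp = &(*pp)->next;
    node->next = NULL;
    *pp = node;
}

/* Look up the DT node for a given segment number (NULL if absent). */
static struct dt_device_node *pci_find_host_bridge_node(unsigned int seg)
{
    struct dt_device_node *node = pci_dt_list;

    while ( node && seg-- )
        node = node->next;
    return node;
}
```

With this, the segment passed in later hypercalls indexes straight
back into the DT list built at parse time.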

c) During SMMU init the PCIe nodes in the DT are saved as SMMU masters.

d) Dom0 enumerates PCI devices and calls the hypercall
PHYSDEVOP_pci_device_add.
 - In Xen the SMMU iommu_ops add_device is called. I have implemented
the add_device function.
 - In the add_device function the segment number is used to locate the
device tree node pointer of the PCIe node, which helps to find the
corresponding SMMU.
 - In the same PHYSDEVOP the BAR regions are mapped to Dom0.

Note: The current SMMU driver maps the domain's complete address space
for the device in the SMMU hardware.
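The lookup in step d) can be sketched as below: the segment from
PHYSDEVOP_pci_device_add selects a host-bridge DT node (step b), which
in turn identifies the SMMU that lists it as an mmu-master. All
structures and helper names here (struct dt_node, struct smmu,
bridge_of_segment, smmu_for_segment) are hypothetical simplifications,
not the real Xen SMMU driver API:

```c
#include <assert.h>
#include <stddef.h>

struct dt_node { int dummy; };           /* stand-in DT node */

struct smmu {
    const struct dt_node *master;        /* bridge this SMMU serves */
};

static struct dt_node bridge0, bridge1;  /* host-bridge nodes from the DT */
static struct smmu smmus[] = { { &bridge0 }, { &bridge1 } };

/* Stand-in for the segment -> bridge-node lookup of step b). */
static const struct dt_node *bridge_of_segment(unsigned int seg)
{
    return seg == 0 ? &bridge0 : seg == 1 ? &bridge1 : NULL;
}

/* Find the SMMU whose mmu-master matches the device's host bridge. */
static struct smmu *smmu_for_segment(unsigned int seg)
{
    const struct dt_node *bridge = bridge_of_segment(seg);
    size_t i;

    if ( !bridge )
        return NULL;
    for ( i = 0; i < sizeof(smmus) / sizeof(smmus[0]); i++ )
        if ( smmus[i].master == bridge )
            return &smmus[i];
    return NULL;
}
```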

The above flow works currently for us.

Now when I call pci-assignable-add I see that the iommu_ops
remove_device in the SMMU driver is not called. If it is not called,
the SMMU would still have the Dom0 address-space mappings for that
device.

Can you please suggest the best place (kernel / xl tools) to put the
code which would call remove_device in iommu_ops in the control flow
from pci-assignable-add?

One way I see is to introduce a DOMCTL_iommu_remove_device in
pci-assignable-add / pci-detach and a DOMCTL_iommu_add_device in
pci-attach. Is that a valid approach?
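To illustrate the proposal, here is a minimal sketch of what such a
dispatch could look like: the DOMCTL from pci-assignable-add /
pci-detach reaches the IOMMU driver's remove_device hook, and the one
from pci-attach reaches add_device. The DOMCTL names mirror the
suggestion above and the handler, struct pci_dev, and hooks are all
hypothetical; none of this is an existing Xen interface:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sub-ops, as suggested in the mail. */
enum domctl_cmd {
    DOMCTL_iommu_remove_device,   /* from pci-assignable-add / pci-detach */
    DOMCTL_iommu_add_device,      /* from pci-attach */
};

struct pci_dev {
    bool mapped;   /* does the SMMU hold this domain's mappings? */
};

/* Stand-ins for the iommu_ops add_device / remove_device hooks. */
static void iommu_add_device(struct pci_dev *pdev)    { pdev->mapped = true; }
static void iommu_remove_device(struct pci_dev *pdev) { pdev->mapped = false; }

/* Hypothetical DOMCTL dispatch for the two new sub-ops. */
static int do_iommu_domctl(enum domctl_cmd cmd, struct pci_dev *pdev)
{
    switch ( cmd )
    {
    case DOMCTL_iommu_remove_device:
        iommu_remove_device(pdev);
        return 0;
    case DOMCTL_iommu_add_device:
        iommu_add_device(pdev);
        return 0;
    }
    return -1;
}
```

The point of the sketch is only the control flow: detach tears down the
device's SMMU mappings before assignment, and attach re-establishes
them for the new domain.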

-Manish

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

