Re: [Xen-devel] [RFC + Queries] Flow of PCI passthrough in ARM
On 3 October 2014 15:02, manish jaggi <manishjaggi.oss@xxxxxxxxx> wrote:
> On 2 October 2014 22:29, Stefano Stabellini
> <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>> On Thu, 2 Oct 2014, Stefano Stabellini wrote:
>>> > >> >> using this the ITS emulation code in xen is able to trap ITS command
>>> > >> >> writes by the driver.
>>> > >> >> But we are facing a problem now, where your help is needed.
>>
>> Actually, given that you are talking about the virtual ITS, why do you
>> need the real Device ID or Stream ID of the passthrough device in the
>> guest at all? Shouldn't you just be generating a bunch of virtual IDs
>> instead?
>>
>> Let's take a step back: why do you need *real* IDs to program a
>> *virtual* ITS?
>>
> The guest passes the passthrough device's stream ID, which is generated from
> the BDF, so a device with real BDF 1:0:5.0 may appear at BDF 0:0:0.0 in the
> guest.
> The Xen ITS emulation code would need the mapping between the guest BDF and
> the actual BDF of a passthrough device.
>
> The guest ITS never sees the real ID.
>
>>
>>> > >> >> The StreamID is generated from segment:bus:device.function, which is
>>> > >> >> fed as the DevID in ITS commands. In Dom0 the StreamID is correctly
>>> > >> >> generated, but in domU the StreamID for a passthrough device is
>>> > >> >> 0:0:0:0. Now, when emulating this in Xen, it is a problem as Xen does
>>> > >> >> not know how to get the physical stream ID.
>>> > >> >>
>>> > >> >> (Eg: xl pci-attach 1 0001:00:05.0
>>> > >> >> DomU has the device, but in DomU the id is 0000:00:00.0.)
>>> > >> >>
>>> > >> >> Could you suggest how to go about this?
>>> > >> >
>>> > >> > I don't think that the ITS patches have been posted yet, so it is
>>> > >> > difficult for me to understand the problem and propose a solution.
>>> > >>
>>> > >> In a simpler way, it is more about what StreamID a driver running in
>>> > >> domU sees, which is programmed in the ITS commands,
>>> > >> and how to map the domU StreamID to the actual StreamID in Xen when the
>>> > >> ITS command write traps.
>>> > >
>>> > > Wouldn't it be possible to pass the correct StreamID to DomU via device
>>> > > tree? Does it really need to match the PCI BDF?
>>> > Device tree provides a static mapping; runtime attaching a device (using
>>> > xl tools) to a domU is what I am working on.
>>>
>>> As I wrote before, it is difficult to answer without the patches and/or a
>>> design document.
>>>

Below is the design flow:
=====
Xen PCI Passthrough support for ARM
----------------------------------------------
Nomenclature:
SBDF - segment:bus:device.function. Segment - number of the PCIe Root Complex
(this is also referred to as "domain" in Linux terminology, Domain:BDF).
domU sbdf - the sbdf of the device as enumerated in domU. This is different
from the actual sbdf (see the sketch after the lists below).

What is the requirement:
1. Any PCI device can be assigned to any domU at runtime.
2. The assignment must not be static; the system admin must be able to assign
a PCI BDF at will.
3. The SMMU should be programmed for the device/domain pair.
4. MSIs should be directly injected into domU.
5. Accesses to the GIC should be trapped in Xen to program MSIs (LPIs) in the ITS.
6. Frontend-backend communication between domU and dom0 must be limited to PCI
config reads and writes.

What is supported today:
1. Non-PCI passthrough using a device tree: a device tree node can be passed
through to a domU.
2. The SMMU is programmed (the device's stream ID is allowed to access domU
guest memory).
3. By default a device is assigned to dom0 unless it has been pci-hidden (the
SMMU for that device is programmed for dom0).
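Since the whole flow hinges on the ITS DeviceID/StreamID being derived from
the sbdf, here is a minimal illustrative helper. It assumes the common
convention where the ID is the PCI requester ID, i.e. (bus << 8) | devfn,
with the segment folded into the upper bits; the exact encoding is platform
specific, so the function name and the shift values below are assumptions for
illustration, not what our hardware necessarily does.

    /*
     * Illustration only: pack an sbdf into the DeviceID used in ITS commands,
     * assuming DeviceID == PCI requester ID with the segment above it.
     */
    #include <stdint.h>

    static inline uint32_t sbdf_to_deviceid(uint16_t segment, uint8_t bus,
                                            uint8_t devfn)
    {
        /* requester ID = bus[15:8] | devfn[7:0]; segment folded above it */
        return ((uint32_t)segment << 16) | ((uint32_t)bus << 8) | devfn;
    }

With this encoding, the real device 0001:00:05.0 and its domU view 0000:00:00.0
produce different DeviceIDs, which is exactly why Xen needs the
guest-to-physical sbdf mapping discussed below.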
Proposed Flow:
1. Xen parses its device tree to find the pci nodes. These nodes represent the
PCIe Root Complexes.
2. Xen parses its device tree to find smmu nodes whose mmu-masters property
points to the PCIe RCs (pci nodes).
Note: the mapping between a pci node and an smmu node is based on which SMMU
translates r/w requests from that PCIe RC.
3. All the pci nodes are assigned to dom0.
4. dom0 boots.
5. dom0 enumerates all the PCI devices and calls the Xen hypercall
PHYSDEVOP_pci_device_add. This in turn programs the SMMU.
Note: the MMIO regions for the device are also assigned to dom0.
6. dom0 boots domU.
Note: in domU, the pcifront bus has an msi-controller attached. It is assumed
that the domU device tree contains the 'gicv3-its' node.
7. dom0 calls xl pci-assignable-add <sbdf>.
8. dom0 calls xl pci-attach <domU_id> '<sbdf>,permissive=1'.
Note: XEN_DOMCTL_assign_device is called. assign_device is implemented in
smmu.c. The MMIO regions for the device are assigned to the respective domU.
9. The frontend driver is notified by the xenwatch daemon and starts
enumerating devices.
Note: the check of initial_domain in register_xen_pci_notifier() is removed.
10. The device driver requests MSIs using pci_enable_msi, which in turn calls
the ITS driver, which tries to issue commands like MAPD and MAPVI; these are
trapped by the Xen ITS emulation driver.
10a. The MAPD command contains a device ID, which is a stream ID constructed
from the sbdf. The sbdf is specific to domU for that device.
10b. The Xen ITS driver has to replace the domU sbdf with the actual sbdf and
program the command into the command queue.

Now there is a _problem_: Xen does not know the mapping between the domU
stream ID (sbdf) and the actual sbdf. For example, if two PCI devices are
assigned to a domU, Xen does not know which domU sbdf maps to which PCI
device. Thus the Xen ITS driver fails to program the MAPD command into the
command queue, which results in LPIs not being programmed for that device.

At the time xl pci-attach is called, the domU sbdf of the device is not known,
as the enumeration of the PCI device in domU has not started yet. In xl
pci-attach a virtual slot number can be provided, but it is not used in the
ITS commands in domU.

If it is known somehow (__we need help on this__), then dom0 could issue a
hypercall to Xen to map the domU sbdf to the actual one, in the following
format (a rough sketch of what this could look like follows right after the
design flow):
PHYSDEVOP_map_domU_sbdf { actual sBDF, domU enumerated sBDF and domU_ID }
=====
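If such a mapping were made available to Xen, the interface and its use in the
ITS emulation could look roughly like the sketch below. This only illustrates
the *proposed* hypercall; the struct layout, the per-domain list and the helper
names (vits_sbdf_map, vits_translate_deviceid) are invented for illustration
and are not existing Xen code.

    /* Sketch: argument structure for the proposed PHYSDEVOP_map_domU_sbdf. */
    struct physdev_map_domu_sbdf {
        uint16_t domid;        /* domU the device is assigned to         */
        uint32_t phys_sbdf;    /* actual segment:bus:device.function     */
        uint32_t guest_sbdf;   /* sbdf as enumerated inside the domU     */
    };

    /* Sketch: one entry per assigned device, kept on a per-domain list. */
    struct vits_sbdf_map {
        struct list_head list;
        uint32_t guest_sbdf;
        uint32_t phys_sbdf;
    };

    /*
     * Sketch: when the virtual ITS traps a MAPD/MAPVI command, translate the
     * guest DeviceID (derived from the domU sbdf) into the physical DeviceID
     * before the command is written to the real ITS command queue.
     */
    static int vits_translate_deviceid(struct list_head *maps,
                                       uint32_t guest_devid,
                                       uint32_t *phys_devid)
    {
        struct vits_sbdf_map *m;

        list_for_each_entry ( m, maps, list )
        {
            if ( m->guest_sbdf == guest_devid )
            {
                *phys_devid = m->phys_sbdf;
                return 0;
            }
        }

        return -ENODEV; /* no mapping registered: MAPD cannot be forwarded */
    }

The open question in the flow above is unchanged: something in dom0 (pciback,
the toolstack, or the frontend via xenstore) still has to learn the
domU-enumerated sbdf before it can issue such a hypercall.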
>>> You should be able to specify StreamID ranges in Device Tree to cover a
>>> bus. So you should be able to say that the virtual PCI bus in the guest
>>> has StreamIDs [0-8] for slots [0-8]. Then in your example below you need
>>> to make sure to insert the passthrough device in virtual slot 1 instead
>>> of virtual slot 0.
>>>
>>> I don't know if you were aware of this, but you can already specify the
>>> virtual slot number to pci-attach, see xl pci-attach --help.
>>>
>>> Otherwise you could let the frontend know the StreamID via xenbus: the
>>> backend should know the correct StreamID for the device, it could just
>>> add it to xenstore as a new parameter for the frontend.
>>>
>>> Either way you should be able to tell the frontend what is the right
>>> StreamID for the device.
>>>
>>>
>>> > > Otherwise, if the commands trap into Xen, couldn't Xen do the translation?
>>> > Xen does not know how to map the BDF in domU to the actual StreamID.
>>> >
>>> > I had thought of adding a hypercall, invoked when xl pci-attach is called:
>>> >
>>> > PHYSDEVOP_map_streamid {
>>> >     dom_id,
>>> >     phys_streamid, // bdf
>>> >     guest_streamid,
>>> > }
>>> >
>>> > But I am not able to get the correct BDF of domU.
>>>
>>> I don't think that a hypercall is a good way to solve this.
>>>
>>>
>>> > For instance, the logs at 2 different places give different BDFs:
>>> >
>>> > # xl pci-attach 1 '0002:01:00.1,permissive=1'
>>> >
>>> > xen-pciback pci-1-0: xen_pcibk_export_device exporting dom 2 bus 1 slot 0
>>> > func 1
>>> > xen_pciback: vpci: 0002:01:00.1: assign to virtual slot 1
>>> > xen_pcibk_publish_pci_dev 0000:00:01.00
>>> >
>>> > Code that generated the print:
>>> >
>>> > static int xen_pcibk_publish_pci_dev(struct xen_pcibk_device *pdev,
>>> >                                      unsigned int domain, unsigned int bus,
>>> >                                      unsigned int devfn, unsigned int devid)
>>> > {
>>> >         ...
>>> >         /* debug print of the SBDF handed to the frontend */
>>> >         printk(KERN_ERR "%s %04x:%02x:%02x.%02x", __func__, domain, bus,
>>> >                PCI_SLOT(devfn), PCI_FUNC(devfn));
>>> >
>>> >
>>> > While in xen_pcibk_do_op the print is:
>>> >
>>> > xen_pcibk_do_op Guest SBDF=0:0:1.1 (this is the output of lspci in domU)
>>> >
>>> > Code that generated the print:
>>> >
>>> > void xen_pcibk_do_op(struct work_struct *data)
>>> > {
>>> >         ...
>>> >         if (dev == NULL)
>>> >                 op->err = XEN_PCI_ERR_dev_not_found;
>>> >         else {
>>> >                 /* debug print of the SBDF the guest used in the request */
>>> >                 printk(KERN_ERR "%s Guest SBDF=%d:%d:%d.%d\r\n", __func__,
>>> >                        op->domain, op->bus, op->devfn >> 3, op->devfn & 0x7);
>>> >
>>> >
>>> > Stefano, I need your help in this.
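One way to act on the xenbus suggestion quoted above, sketched here: at the
point where xen-pciback assigns the virtual slot (the "assign to virtual
slot" message in the log above), the backend knows both the real pci_dev and
the virtual slot, so it could export the real SBDF to xenstore for the
frontend or the toolstack to pick up. The hook point and the key name
"real-sbdf-<vslot>" are assumptions for illustration, not existing
xen-pciback behaviour.

    /*
     * Sketch only: export the physical SBDF of an assigned device via
     * xenstore so the frontend (or toolstack) can learn the real StreamID.
     */
    #include <linux/kernel.h>
    #include <linux/pci.h>
    #include <xen/xenbus.h>

    static int publish_real_sbdf(struct xenbus_device *xdev,
                                 struct pci_dev *dev, unsigned int vslot)
    {
        char node[32];

        /* one "real-sbdf-<virtual slot>" key per exported device */
        snprintf(node, sizeof(node), "real-sbdf-%u", vslot);

        /* write the physical segment:bus:device.function the guest cannot see */
        return xenbus_printf(XBT_NIL, xdev->nodename, node,
                             "%04x:%02x:%02x.%02x",
                             pci_domain_nr(dev->bus), dev->bus->number,
                             PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
    }

The frontend (or xl) could then read this key and either program it into the
domU's virtual ITS view or hand it back to Xen, instead of Xen having to guess
which domU sbdf corresponds to which physical device.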