Re: [Xen-devel] PCI Pass-through in Xen ARM - Draft 2.
On Tue, 7 Jul 2015, Ian Campbell wrote:
> On Tue, 2015-07-07 at 14:16 +0530, Manish Jaggi wrote:
> > > As asked you in the previous mail, can you please prove it? The
> > > function used to get the requester ID (pci_for_each_dma_alias) is more
> > > complex than a simple return sbdf.
> > I am not sure what you would like me to prove.
> > As of ThunderX Xen code we have assumed sbdf == deviceID.
>
> Please remember that you are not writing "ThunderX Xen code" here, you
> are writing generic Xen code which you happen to be testing on Thunder
> X. The design and implementation does need to consider the more generic
> case I'm afraid.
>
> In particular if this is going to be a PHYSDEVOP then it needs to be
> designed to be future proof, since PHYSDEVOP is a stable API i.e. it is
> hard to change in the future.
>
> I think I did ask elsewhere _why_ this was a physdev op, since I can't
> see why it can't be done by the toolstack, and therefore why it can't be
> a domctl.
>
> If this was a domctl there might be scope for accepting an
> implementation which made assumptions such as sbdf == deviceid.

It is always easier to deal with domctls compared to physdevops, so I
agree with Ian. However I think we can assume BDF == requester ID, as I
wrote earlier.

> However
> I'd still like to see this topic given proper treatment in the design
> and not just glossed over with "this is how ThunderX does things".

Right.

> Or maybe the solution is simple and we should just do it now -- i.e. can
> we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
> which contains the base deviceid for that bridge (since I believe both
> DT and ACPI IORT assume a simple linear mapping[citation needed])?

Expanding XEN_DOMCTL_assign_device is also a good option.
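For concreteness, a minimal sketch of what the per-bridge option could look
like. The struct name, fields and layout below are illustrative assumptions,
not the actual Xen public interface; the sketch only shows the shape of the
idea: dom0 reports a base deviceid for each host bridge, and the hypervisor
assumes a simple linear BDF-to-deviceid mapping (as in a DT msi-map or an
ACPI IORT ID mapping).

/* Illustrative sketch only -- names and layout are assumptions, not
 * the actual Xen public interface. */
#include <stdint.h>

struct physdev_pci_host_bridge_add_v2 {         /* hypothetical name */
    /* IN */
    uint16_t seg;            /* PCI segment (domain) of the bridge */
    uint8_t  bus;            /* primary bus number of the bridge */
    uint8_t  pad;
    uint32_t deviceid_base;  /* first deviceid (requester ID) used by
                              * devices behind this bridge, assuming a
                              * simple linear BDF -> deviceid mapping */
};

/* With a linear mapping, the deviceid of a device is just an offset
 * from the per-bridge base: */
static inline uint32_t bdf_to_deviceid(uint32_t deviceid_base,
                                       uint8_t bus, uint8_t devfn)
{
    return deviceid_base + (((uint32_t)bus << 8) | devfn);
}

The alternative mentioned above, expanding XEN_DOMCTL_assign_device, would
carry the same information in a per-device domctl argument rather than a
per-bridge physdevop, leaving the stable PHYSDEVOP ABI untouched at the cost
of passing the deviceid on every assignment.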