Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.
>>> On 10.12.14 at 09:47, <kevin.tian@xxxxxxxxx> wrote:
> two translation paths in the assigned case:
>
> 1. [direct CPU access from VM]: with partitioned PCI aperture
> resources, every VM can access a portion of the PCI aperture directly.
>
>   - CPU page table/EPT: CPU virtual address -> PCI aperture
>   - PCI aperture - BAR base = Graphics Memory Address (GMA)
>   - GPU page table: GMA -> GPA (as programmed by the guest)
>   - IOMMU: GPA -> MPA
>
> 2. [GPU access through GPU command operands]: with GPU scheduling,
> every VM's command buffer is fetched by the GPU in a time-shared
> manner.
>
>   - GPU page table: GMA -> GPA
>   - IOMMU: GPA -> MPA
>
> In our case, the IOMMU is set up with a 1:1 identity table for dom0. So
> when the GPU may access GPAs from different VMs, we can't count on the
> IOMMU, which can only serve one mapping per device (unless we have
> SR-IOV).
>
> That's why we need a shadow GPU page table in dom0, and need a
> p2m query call to translate from GPA -> MPA:
>
>   - shadow GPU page table: GMA -> MPA
>   - IOMMU: MPA -> MPA (for dom0)

I still can't see why the Dom0 translation has to remain 1:1, i.e. why
Xen couldn't return some "arbitrary" GPA for the query in question here,
setting up a suitable GPA->MPA translation. (I put arbitrary in quotes
because this of course must not conflict with GPAs already or possibly
in use by Dom0.) And I can only stress again that you shouldn't leave
out PVH (where the IOMMU already isn't set up with all 1:1 mappings)
from these considerations.

Jan
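[Editor's note: to make the composition of the stages Kevin lists above concrete, here is a minimal, self-contained C sketch. It is not Xen or XenGT code; the flat toy "page tables" and helper names are hypothetical stand-ins for the real multi-level GPU page table walk and the p2m query discussed in the thread.]

/*
 * Illustrative sketch only -- not Xen/XenGT code.  Toy single-level
 * "page tables" (flat arrays indexed by page frame number) model how
 * the translation stages described above compose: the dom0 shadow GPU
 * page table entry bakes GPA -> MPA into the table the hardware walks,
 * so the dom0 IOMMU mapping can stay identity (MPA -> MPA).
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define TOY_PAGES  16                 /* tiny toy address spaces */

typedef uint64_t gma_t, gpa_t, mpa_t;

/* Hypothetical per-VM guest GPU page table: GMA -> GPA (guest-programmed). */
static gpa_t gpu_pt[2][TOY_PAGES];
/* Hypothetical per-VM p2m table: GPA -> MPA (what the query call returns). */
static mpa_t p2m[2][TOY_PAGES];

static uint64_t pfn(uint64_t addr) { return addr >> PAGE_SHIFT; }
static uint64_t off(uint64_t addr) { return addr & ((1 << PAGE_SHIFT) - 1); }

/* Shadow GPU page table entry for dom0: compose GMA -> GPA -> MPA. */
static mpa_t shadow_entry(int vm, gma_t gma)
{
    gpa_t gpa = gpu_pt[vm][pfn(gma)] + off(gma);  /* guest GPU page table  */
    return p2m[vm][pfn(gpa)] + off(gpa);          /* p2m query: GPA -> MPA */
}

int main(void)
{
    /* Arbitrary toy mappings for VM 1. */
    gpu_pt[1][3] = 5ULL << PAGE_SHIFT;   /* GMA page 3 -> GPA page 5  */
    p2m[1][5]    = 42ULL << PAGE_SHIFT;  /* GPA page 5 -> MPA page 42 */

    gma_t gma = (3ULL << PAGE_SHIFT) + 0x10;
    printf("GMA %#llx -> MPA %#llx\n",
           (unsigned long long)gma,
           (unsigned long long)shadow_entry(1, gma));
    return 0;
}

[The sketch only shows the shadowing direction Kevin describes; Jan's counter-question is whether the second lookup could instead return a fresh, non-identity GPA for dom0, with Xen installing the corresponding GPA->MPA IOMMU mapping.]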