
Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.



> From: Tian, Kevin
> Sent: Wednesday, December 10, 2014 4:48 PM
> 
> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > Sent: Wednesday, December 10, 2014 4:39 PM
> >
> > >>> On 10.12.14 at 02:07, <kevin.tian@xxxxxxxxx> wrote:
> > >>  From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > >> Sent: Tuesday, December 09, 2014 6:50 PM
> > >>
> > >> >>> On 09.12.14 at 11:37, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> > >> > On 12/9/2014 6:19 PM, Paul Durrant wrote:
> > >> >> I think use of a raw mfn value currently works only because dom0 is
> > >> >> using a 1:1 IOMMU mapping scheme. Is my understanding correct, or do
> > >> >> you really need raw mfn values?
> > >> > Thanks for your quick response, Paul.
> > >> > Well, not exactly for this case. :)
> > >> > In XenGT, our need to translate gfn to mfn is for the GPU's page
> > >> > table, which contains the translation between graphics addresses and
> > >> > memory addresses. This page table is maintained by GPU drivers, and
> > >> > our service domain needs a method to translate the guest physical
> > >> > addresses written by the vGPU into host physical ones.
> > >> > We do not use the IOMMU in XenGT, and therefore this translation is
> > >> > not necessarily a 1:1 mapping.
> > >>
> > >> Hmm, that suggests you indeed need raw MFNs, which in turn seems
> > >> problematic wrt PVH Dom0 (or you'd need a GFN->GMFN translation
> > >> layer). But while you don't use the IOMMU yourself, I suppose the GPU
> > >> accesses still don't bypass the IOMMU? In which case all you'd need
> > >> returned is a frame number that guarantees that after IOMMU
> > >> translation it refers to the correct MFN, i.e. still allowing for your
> > >> Dom0
> > >> driver to simply set aside a part of its PFN space, asking Xen to
> > >> (IOMMU-)map the necessary guest frames into there.
> > >>
> > >
> > > No. What we require is the raw MFNs. One IOMMU device entry can't
> > > point to multiple VMs' page tables, which is why XenGT needs a
> > > software shadow GPU page table to implement the sharing. Note it's
> > > not for dom0 to access the MFN; it's for dom0 to set up the correct
> > > shadow GPU page table, so a VM can access the graphics memory
> > > in a controlled way.
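
For reference, the kind of lookup such a query has to perform inside Xen
is roughly the following. This is only a sketch against the x86 p2m
helpers; xengt_gfn_to_mfn() is a made-up name, and the exact
get_gfn()/put_gfn() signatures differ between Xen versions:

  /* Minimal sketch, not existing Xen code: what a gfn -> raw mfn query
   * for XenGT might look like inside the hypervisor.  Exact helper
   * signatures differ between Xen versions. */
  #include <xen/errno.h>
  #include <xen/sched.h>
  #include <asm/p2m.h>

  static int xengt_gfn_to_mfn(struct domain *d, unsigned long gfn,
                              unsigned long *mfn_out)
  {
      p2m_type_t t;
      mfn_t mfn = get_gfn(d, gfn, &t);   /* takes a reference on the entry */
      int rc = -EINVAL;

      if ( p2m_is_ram(t) )               /* only translate ordinary guest RAM */
      {
          *mfn_out = mfn_x(mfn);         /* raw MFN for the shadow GPU PTE */
          rc = 0;
      }

      put_gfn(d, gfn);                   /* drop the reference */
      return rc;
  }
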
> >
> > So what's the translation flow here: driver -> GPU -> IOMMU ->
> > hardware or driver -> IOMMU -> GPU -> hardware? Or do things get
> > set up for the GPU to bypass the IOMMU altogether?
> >
> 
> two translation paths in the assigned case:
> 
> 1. [direct CPU access from VM]: with partitioned PCI aperture
> resources, every VM can access a portion of the PCI aperture directly.

Sorry, the above description is for the XenGT shared case, and the
translation below is for the VT-d assigned case. I put it there to show
that XenGT needs the same translation path.

> 
> - CPU page table/EPT: CPU virtual address->PCI aperture
> - PCI aperture address - BAR base = Graphics Memory Address (GMA)
> - GPU page table: GMA -> GPA (as programmed by guest)
> - IOMMU: GPA -> MPA
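
The first two steps above are plain address arithmetic once the aperture
BAR is known; a minimal illustration (all names are made up for this
example):

  /* Illustration only: derive the Graphics Memory Address (GMA) from a
   * guest physical address that hits the PCI aperture BAR. */
  #include <stdbool.h>
  #include <stdint.h>

  static bool aperture_gpa_to_gma(uint64_t gpa, uint64_t aperture_base,
                                  uint64_t aperture_size, uint64_t *gma)
  {
      if ( gpa < aperture_base || gpa >= aperture_base + aperture_size )
          return false;                 /* not an aperture access */

      *gma = gpa - aperture_base;       /* PCI aperture address - BAR base */
      return true;
  }
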
> 
> 2. [GPU access through GPU command operands]: with GPU scheduling,
> every VM's command buffer is fetched by the GPU in a time-shared
> manner.
> 
> - GPU page table: GMA->GPA
> - IOMMU: GPA->MPA
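
The "GPU page table: GMA->GPA" step in both paths is just a table lookup
indexed by graphics page frame. A simplified, hypothetical single-level
layout, only to show the indexing (the real GEN entry format differs):

  /* Illustration only: translate a GMA to a guest physical address
   * through a flat GPU page table whose entries hold guest frame
   * numbers as programmed by the guest driver. */
  #include <stdint.h>

  #define GPU_PAGE_SHIFT 12                        /* assume 4KiB pages */
  #define GPU_PAGE_MASK  ((UINT64_C(1) << GPU_PAGE_SHIFT) - 1)

  static uint64_t gma_to_gpa(const uint64_t *guest_gpu_pt, uint64_t gma)
  {
      uint64_t gfn = guest_gpu_pt[gma >> GPU_PAGE_SHIFT]; /* guest PTE */

      return (gfn << GPU_PAGE_SHIFT) | (gma & GPU_PAGE_MASK);
  }
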
> 
> In our case, the IOMMU is set up with a 1:1 identity table for dom0.
> Since the GPU may access GPAs from different VMs, we can't count on
> the IOMMU, which can only serve one mapping per device (unless
> we have SR-IOV).
> 
> That's why we need a shadow GPU page table in dom0, and a p2m query
> call to translate from GPA -> MPA:
> 
> - shadow GPU page table: GMA->MPA
> - IOMMU: MPA->MPA (for dom0)
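
So the p2m query is used when propagating guest GPU page table writes
into the shadow table, roughly like this (all names are hypothetical;
xengt_query_gpa_to_mpa() stands for whatever GPA->MPA query interface we
end up with, not an existing hypercall):

  #include <stdint.h>

  /* Hypothetical dom0-side query: ask Xen to translate a guest physical
   * address of the given VM into a machine physical address. */
  int xengt_query_gpa_to_mpa(int vm_id, uint64_t gpa, uint64_t *mpa);

  /* When the guest writes a GPU PTE (GMA -> GPA), dom0 writes the
   * matching shadow PTE (GMA -> MPA), so the GPU (which the IOMMU maps
   * 1:1 for dom0) touches the right machine pages in a controlled way. */
  static int shadow_gpu_pte_write(int vm_id, uint64_t *shadow_gpu_pt,
                                  uint64_t index, uint64_t guest_pte)
  {
      uint64_t gpa = guest_pte << 12;    /* guest frame from the guest PTE
                                            (hypothetical entry layout)   */
      uint64_t mpa;
      int rc = xengt_query_gpa_to_mpa(vm_id, gpa, &mpa);

      if ( rc )
          return rc;                     /* e.g. GPA is not ordinary RAM */

      shadow_gpu_pt[index] = mpa >> 12;  /* machine frame into shadow PTE */
      return 0;
  }
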
> 
> Thanks
> Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

