Re: [Xen-devel] [RFC PATCH 5/23] Tools/libxc: Add viommu operations in libxc
On 2017-04-18 22:15, Paul Durrant wrote:
>> -----Original Message-----
> [snip]
>>>
>>> Not quite sure I understand this. The QEMU device model does not 'pass
>>> DMA requests' as such, it maps guest RAM and reads or writes to emulate
>>> DMA, right? So, what's needed is a mechanism to map guest RAM by 'bus
>>> address'... i.e. an address that will need to be translated through the
>>> vIOMMU mappings. This is just an evolution of the current 'priv mapping'
>>> operations that allow guest RAM to be mapped by guest physical address.
>>> So you don't need a vIOMMU 'device model' as such, do you?
>>
>> The guest may also enable the DMA protection mechanism in the Linux
>> kernel, which limits the address space of an emulated device and depends
>> on the vIOMMU's DMA translation function. The vIOMMU's MMIO emulation
>> part is in the Xen hypervisor and the guest's shadow IO page table will
>> only exist in the hypervisor, so to translate an emulated device's DMA
>> request it is necessary to pass the DMA request to the hypervisor.
>>
> What do you mean by DMA request though? Are you intending to make some form
> of hypercall to read or write guest memory? If so then why not introduce a
> call to map the guest memory (via bus address) and read or write directly.

Such a "DMA request" in the QEMU vIOMMU framework just contains an IOVA
(IO virtual address) and a read/write flag. The vIOMMU device model just
translates the IOVA to a GPA and returns it to the vIOMMU core, which is
in charge of the actual memory access. So the hypercall we want to
introduce is one that translates an IOVA to a GPA. The data to write, and
the target address for storing read data, aren't passed to the vIOMMU
device model, so we can't perform the read/write directly there.
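For illustration only, such a translation op could be shaped roughly as in
the sketch below. Every name and field here is hypothetical and not taken
from this series; the point is simply that an IOVA plus an access flag go
in, a GPA (and the size of the contiguous mapping) comes out, and the
actual data access stays in QEMU:

/*
 * Hypothetical sketch of an IOVA -> GPA translation op; this is not the
 * interface proposed in the RFC patches.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint16_t domid_t;            /* as in Xen's public headers */

struct viommu_translate_op {
    uint32_t viommu_id;              /* which vIOMMU instance in the guest */
    uint16_t sbdf;                   /* source device issuing the DMA */
    uint8_t  write;                  /* 1 = write access, 0 = read access */
    uint8_t  pad;
    uint64_t iova;                   /* IN:  IO virtual address */
    uint64_t gpa;                    /* OUT: guest physical address */
    uint64_t size;                   /* OUT: bytes valid starting at 'gpa' */
};

/* Hypothetical wrapper; in reality this would be a dm_op issued by the
 * device model through libxendevicemodel/libxc. */
int viommu_translate(domid_t domid, struct viommu_translate_op *op);

/* Sketch of how a QEMU vIOMMU hook could use it: translate only, then hand
 * the GPA back to the core memory code, which performs the read/write. */
static int dma_translate(domid_t domid, uint32_t viommu_id,
                         uint64_t iova, bool is_write, uint64_t *gpa)
{
    struct viommu_translate_op op = {
        .viommu_id = viommu_id,
        .write     = is_write ? 1 : 0,
        .iova      = iova,
    };
    int rc = viommu_translate(domid, &op);

    if (rc == 0)
        *gpa = op.gpa;
    return rc;
}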
>> So far we don't support DMA translation and so don't pass DMA requests.
>>
> Indeed. We map guest memory using guest physical address because, without an
> emulated IOMMU, guest physical address === bus address. This is why I suggest
> a new mapping operation rather than 'passing a DMA request' to the hypervisor.
>
>> Mapping/unmapping guest memory is already supported in QEMU, just like an
>> emulated device model accessing guest memory. QEMU also provides a vIOMMU
>> hook to receive a DMA request and return the target guest address; the
>> vIOMMU framework will then read/write the target address.
>
> That's the part I don't get... why have the vIOMMU code do the reads and
> writes? Why not have it provide a mapping function and then have the device
> model in QEMU read and write directly as it does now?
>

Actually it's a common interface in QEMU that reads/writes guest memory.
That code checks whether a vIOMMU translation callback is registered before
performing the read/write. If there is one, it calls the callback, the
vIOMMU device model translates the IOVA to a GPA, and the read/write is
then done against that GPA.

>> What we need to do is to translate the DMA request to the target address
>> according to the shadow IO page table in the hypervisor.
>>
> Yes, so the mapping has to be done by the hypervisor (as is the case for priv
> mapping or grant mapping) but the memory accesses themselves can be done
> directly by the device model in QEMU.

Yes.

>>>> QEMU is required to use the DMOP hypercall and the tool stack may use
>>>> the domctl hypercall. vIOMMU hypercalls will be divided into two parts.
>>>>
>>>> Domctl:
>>>>     create, destroy and query.
>>>> DMOP:
>>>>     vDev's DMA related operations.
>>>
>>> Yes, the mapping/unmapping operations should be DMOPs and IMO should be
>>> designed such that they can be unified with replacements for the current
>>> 'priv map' ops, such that QEMU can use the same function call but with
>>> different address space identifiers (i.e. bus address, guest physical
>>> address, etc.). BTW, I say 'etc.' because we should also consider mapping
>>> the ioreq pages from Xen using the same call - with a dedicated address
>>> space identifier - as well.
>>
>> So you agree to divide vIOMMU's hypercalls into two parts (DMOP and
>> Domctl), right?
>>
> Yes, I agree with the logic of the split.
>
> Cheers,
>
>   Paul
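To summarise the split being agreed here, the tool-stack-facing interface
might end up shaped roughly as below. Again, every prototype and name is
illustrative only and not taken from the actual patches; the point is which
operations sit behind domctl (domain lifecycle, toolstack-only) versus DMOP
(runtime operations issued by the device model):

/* Illustrative only -- a possible shape for the libxc side of the split.
 * None of these prototypes are from the actual series. */
#include <stdbool.h>
#include <stdint.h>

typedef uint16_t domid_t;                       /* as in Xen's public headers */
typedef struct xc_interface_core xc_interface;  /* opaque libxc handle */

/* Domctl side: used by the toolstack at domain build/teardown time. */
int xc_viommu_query_capabilities(xc_interface *xch, domid_t domid,
                                 uint64_t *caps);
int xc_viommu_create(xc_interface *xch, domid_t domid,
                     uint64_t reg_base,   /* guest address of vIOMMU registers */
                     uint64_t caps,
                     uint32_t *viommu_id);
int xc_viommu_destroy(xc_interface *xch, domid_t domid, uint32_t viommu_id);

/* DMOP side: issued by the device model (QEMU) while the guest runs, e.g.
 * the IOVA -> GPA translation sketched earlier in the thread. */
int xc_dm_viommu_translate(xc_interface *xch, domid_t domid,
                           uint32_t viommu_id, uint16_t sbdf,
                           uint64_t iova, bool write,
                           uint64_t *gpa, uint64_t *size);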
--
Best regards
Tianyu Lan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel