
Re: [Xen-devel] [RFC PATCH 00/23] xen/vIOMMU: Add vIOMMU support with irq remapping function on Intel platform



On Wed, Mar 29, 2017 at 09:00:58AM +0100, Roger Pau Monné wrote:
>On Tue, Mar 21, 2017 at 01:29:26PM +0800, Lan Tianyu wrote:
>> On 2017-03-21 10:28, Lan Tianyu wrote:
>> > On 2017-03-20 22:23, Roger Pau Monné wrote:
>> >> Thanks! So you add all this vIOMMU code, but the maximum number of allowed
>> >> vCPUs for HVM guests is still limited to 128 (HVM_MAX_VCPUS is not touched).
>> >> Are there any missing pieces in order to bump this?
>> > 
>> > To increase the vCPU count, we need to change the APIC ID rule; right now
>> > it's APICID = VCPUID * 2. Andrew's CPUID improvement will change that, so
>> > our following patches to increase the vCPU count will be based on Andrew's
>> > work. (A sketch of this limit follows the quoted text below.)
>> > 
>> > 
>> >>
>> >> Also, have you tested whether this series works with PVH guests? Boris
>> >> added PVH support to Linux not long ago, so you should be able to test it
>> >> just by picking the latest Linux kernel.
>> > 
>> > Our patchset just targets HVM guests and will not work for PV guests.
>> 
>> The new hypercalls introduced by this patchset can also be reused for PVH to
>> enable vIOMMU. This patchset relies on QEMU's Xen-vIOMMU device model to
>> create/destroy the vIOMMU. If we want to enable DMA translation for HVM
>> guests later, a virtual device's DMA requests would be passed from QEMU to
>> the Xen hypervisor, so the device model in QEMU is necessary.
>
>The DMA remapping probably doesn't make sense for PVH because PVH doesn't have
>emulated devices, but in any case it would be good to disentangle all this from
>the QEMU (device model) code. AFAICT the DMA remapping part would still be
>useful when doing PCI-passthrough to a PVH guest, and that doesn't involve
>QEMU.
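
(As an aside on the APIC ID rule quoted above: below is a minimal sketch,
assuming xAPIC's 8-bit destination IDs, of why APICID = VCPUID * 2 caps HVM
guests at 128 vCPUs. The constants and helper names are illustrative, not
Xen source.)

/* Sketch only, not Xen code: why APICID = VCPUID * 2 stops at 128
 * vCPUs.  Under xAPIC, the destination ID field in MSI and IO-APIC
 * entries is 8 bits wide, so usable APIC IDs are 0..255. */
#include <stdint.h>
#include <stdio.h>

#define XAPIC_MAX_ID 255u

static uint32_t vcpu_to_apic_id(uint32_t vcpu_id)
{
    return vcpu_id * 2;            /* the current rule quoted above */
}

int main(void)
{
    uint32_t vcpus = 0;

    /* Count vCPUs until the derived APIC ID no longer fits in 8 bits. */
    while (vcpu_to_apic_id(vcpus) <= XAPIC_MAX_ID)
        vcpus++;

    printf("max vCPUs under this rule: %u\n", vcpus);   /* prints 128 */
    return 0;
}

(Going past 128 needs 32-bit x2APIC destination IDs, which interrupts can
only carry through vIOMMU interrupt remapping entries; hence the dependency
on both this series and the APIC ID rework.)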

I agree that we need a DMA remapping function in the hypervisor to handle DMA
remapping for pass-through devices, for both HVM and PVH guests; this is also
in Tianyu's v3 design
(https://lists.xenproject.org/archives/html/xen-devel/2016-11/msg01391.html).
But this doesn't mean we don't need the dummy vIOMMU in QEMU to handle DMA
remapping for virtual devices.
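
To make that split concrete, here is a minimal sketch with hypothetical
function names (none of these identifiers are from the series): pass-through
DMA is translated in the hypervisor, while an emulated device's DMA never
reaches the hypervisor, so the dummy vIOMMU in QEMU must translate it.

/* Hedged sketch; all identifiers below are hypothetical illustrations. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;

/* Illustrative: pretend 00:02.0 is the one passed-through device. */
static bool device_is_passthrough(uint16_t bdf)
{
    return bdf == 0x0010;
}

/* Hypervisor side: would walk the guest's vIOMMU context/page tables
 * for a passed-through device (HVM or PVH).  Stubbed as identity. */
static int hv_viommu_translate(dma_addr_t iova, dma_addr_t *gpa)
{
    *gpa = iova;
    return 0;
}

/* QEMU side: the dummy vIOMMU performs the same walk for emulated
 * devices, whose DMA is generated inside the device model. */
static int qemu_dummy_viommu_translate(dma_addr_t iova, dma_addr_t *gpa)
{
    *gpa = iova;
    return 0;
}

static int viommu_translate(uint16_t bdf, dma_addr_t iova, dma_addr_t *gpa)
{
    return device_is_passthrough(bdf)
           ? hv_viommu_translate(iova, gpa)
           : qemu_dummy_viommu_translate(iova, gpa);
}

int main(void)
{
    dma_addr_t gpa;

    viommu_translate(0x0010, 0x1000, &gpa);
    printf("iova 0x1000 -> gpa 0x%llx\n", (unsigned long long)gpa);
    return 0;
}

Both paths implement the same guest-visible translation; only the hosting
component differs, which is why the hypervisor half remains useful for PVH
pass-through even though the QEMU half is HVM-only.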

>
>Also, the vIOMMU hypercalls are added as DMOPs, which seems to imply that they
>should be used with/from a device model (and this won't be the case for PVH).

No. When we decided to add the vIOMMU hypercalls, we were worried about whether
they could be used by the toolstack. We found this:

    Privileged toolstack software is permitted to use DMOPs as well as
    other hypercalls, of course.  So there is no need to duplicate
    functionality between DMOPs and non-stable privileged toolstack
    hypercalls.

in https://lists.xenproject.org/archives/html/xen-devel/2016-08/msg00048.html

I think it just means these interfaces are safe to be used by a device model:
we needn't worry that a device model will compromise a domain that isn't
associated with it. I think only the interfaces that QEMU won't use need to be
exposed in a format other than DMOP.
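
For reference, a DMOP is the single hypercall HYPERVISOR_dm_op(domid, nr_bufs,
bufs), whose first buffer carries a struct xen_dm_op envelope, and that entry
point is reachable from the toolstack as well as from QEMU. A minimal sketch,
in the style of xen/include/public/hvm/dm_op.h, of how a vIOMMU sub-op could
sit in that envelope; the sub-op name, number, and payload fields below are
hypothetical, not the ones this series defines:

/* Sketch in the style of the public dm_op.h; the vIOMMU parts are
 * hypothetical placeholders, not the series' actual interface. */
#include <stdint.h>

#define XEN_DMOP_create_viommu 99          /* hypothetical sub-op number */

struct xen_dm_op_create_viommu {
    uint64_t base_address;    /* guest address of the vIOMMU registers */
    uint64_t capabilities;    /* e.g. interrupt remapping bits */
    uint32_t viommu_id;       /* OUT: handle for a later destroy op */
    uint32_t pad;
};

/* Every DMOP shares this envelope: a sub-op number plus a union of
 * op-specific payloads. */
struct xen_dm_op {
    uint32_t op;
    uint32_t pad;
    union {
        struct xen_dm_op_create_viommu create_viommu;
        /* ... other sub-ops ... */
    } u;
};

Nothing in that shape ties the caller to a device model; what DMOP guarantees
is only that a caller cannot affect domains other than its target, which is
exactly the safety property quoted above.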

Thanks,
Chao

>
>Roger.


 

