
Re: [Xen-devel] [PATCH 5/25] Xen/doc: Add Xen virtual IOMMU doc



On 05/07/2017 21:25, Julien Grall wrote:
> 
> 
> On 05/07/17 04:15, Lan Tianyu wrote:
>> Hi Julien:
> 
> Hi Tianyu Lan,
> 
>>     Thanks for your review.
>>
>> On 04/07/2017 18:39, Julien Grall wrote:
>>>> +vIOMMU hypercall
>>>> +================
>>>> +Introduce new domctl hypercall "xen_domctl_viommu_op" to
>>>> create/destroy
>>>> +vIOMMU and query vIOMMU capabilities that device model can support.
>>>> +
>>>> +* vIOMMU hypercall parameter structure
>>>> +struct xen_domctl_viommu_op {
>>>> +    uint32_t cmd;
>>>> +#define XEN_DOMCTL_create_viommu          0
>>>> +#define XEN_DOMCTL_destroy_viommu         1
>>>> +#define XEN_DOMCTL_query_viommu_caps      2
>>>
>>> I am a bit confused. This is only creating the vIOMMU. However, there
>>> might be multiple host IOMMUs; how do you link them together?
>>>
>>>> +    union {
>>>> +        struct {
>>>> +            /* IN - vIOMMU type */
>>>> +            uint64_t viommu_type;
>>>
>>> This is a bit confusing, you don't define what should be the value of
>>> viommu_type, ...
>>>
>>>> +            /* IN - MMIO base address of vIOMMU. */
>>>> +            uint64_t base_address;
>>>> +            /* IN - Length of MMIO region */
>>>> +            /* IN - Length of MMIO region */
>>>> +            uint64_t length;
>>>> +            /* IN - Capabilities with which we want to create */
>>>> +            uint64_t capabilities;
>>>
>>> ... capabilities ...
>>>
>>
>> Sorry, I missed the type and capability definitions here.
>>
>> /* VIOMMU type */
>> #define VIOMMU_TYPE_INTEL_VTD     (1u << 0)
>>
>> /* VIOMMU capabilities*/
>> #define VIOMMU_CAP_IRQ_REMAPPING  (1u << 0)
>>
>> "viommu_type" means vendor vIOMMU device model. So far, we just support
>> virtual Intel VTD.
>>
>> "capabilities" means the feature that vIOMMU supports. We just add
>> interrupt remapping for virtual VTD.
>>
>>
>>>> +            /* OUT - vIOMMU identity */
>>>> +            uint32_t viommu_id;
>>>> +        } create_viommu;
>>>> +
>>>> +        struct {
>>>> +            /* IN - vIOMMU identity */
>>>> +            uint32_t viommu_id;
>>>> +        } destroy_viommu;
>>>> +
>>>> +        struct {
>>>> +            /* IN - vIOMMU type */
>>>> +            uint64_t viommu_type;
>>>> +            /* OUT - vIOMMU Capabilities */
>>>> +            uint64_t caps;
>>>
>>> ... and caps. I see you have defined them in a separate header
>>> (viommu.h). But there is no way for the developer to know that they
>>> should be used.
>>
>> The "capabilities" and "type" macros are defined under the public
>> directory so that the toolstack can also use them to pass the vIOMMU
>> type and capabilities.
> 
> My point was that if a developer reads domctl.h first, he cannot guess
> that the values to be used in "capabilities" and "type" are defined in a
> separate header (viommu.h). You should at least write down a comment in
> the code explaining that.

Yes, good suggestion; I will update it in the next version.
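
Something along these lines in domctl.h (a sketch; exact wording to be
finalized):

        struct {
            /* IN - vIOMMU type: one of the VIOMMU_TYPE_* values
               defined in public/viommu.h */
            uint64_t viommu_type;
            /* IN - MMIO base address of vIOMMU. */
            uint64_t base_address;
            /* IN - Length of MMIO region */
            uint64_t length;
            /* IN - ORed VIOMMU_CAP_* bits from public/viommu.h with
               which we want to create */
            uint64_t capabilities;
            /* OUT - vIOMMU identity */
            uint32_t viommu_id;
        } create_viommu;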

> 
>>
>>
>>>
>>>> +        } query_caps;
>>>> +    } u;
>>>> +};
>>>> +
>>>> +- XEN_DOMCTL_query_viommu_caps
>>>> +    Query capabilities of vIOMMU device model. vIOMMU_type specifies
>>>> +which vendor vIOMMU device model(E,G Intel VTD) is targeted and
>>>> hypervisor
>>>
>>> "E,G" did you mean "e.g"?
>>
>> Yes. Will update.
>>
>>>
>>>> +returns capability bits(E,G interrupt remapping bit).
>>>
>>> Ditto.
>>>
>>> A given platform may have multiple IOMMUs with different features. Are
>>> we expecting
>>
>> So far, our patchset just supports a VM with one vIOMMU as a starting point.
>>
>> Do you mean that the emulation of some vIOMMU capabilities relies on the
>> physical IOMMU, and that there are multiple IOMMUs with different
>> features?
>>
>> If yes, we need to emulate multiple vIOMMUs for assigned devices that
>> sit under different pIOMMUs. The vendor vIOMMU device model needs to
>> check whether the assigned device's pIOMMU supports the given
>> capabilities passed by the toolstack.
> 
> Hmmm, I think I was a bit confused with the domctl. You are querying the
> vIOMMU capabilities, and they may be different from the physical IOMMU's,
> right?

Yes, that's possible, e.g. if we pass through two devices that sit under
different physical IOMMUs.
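
For illustration, the toolstack would first ask what the vIOMMU device
model itself can provide (a sketch; the outer domctl plumbing and any
libxc wrapper are elided):

    uint64_t caps_to_create = 0;
    struct xen_domctl_viommu_op op = {
        .cmd = XEN_DOMCTL_query_viommu_caps,
        .u.query_caps.viommu_type = VIOMMU_TYPE_INTEL_VTD,
    };

    /* ... issue the domctl against the target domain ... */

    /* Only request capabilities the device model can actually provide */
    if ( op.u.query_caps.caps & VIOMMU_CAP_IRQ_REMAPPING )
        caps_to_create |= VIOMMU_CAP_IRQ_REMAPPING;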

> 
>>
>>>
>>>> +
>>>> +- XEN_DOMCTL_create_viommu
>>>> +    Create vIOMMU device with vIOMMU_type, capabilities, MMIO
>>>> +base address and length. Hypervisor returns viommu_id. Capabilities
>>>> should
>>>> +be in range of value returned by query_viommu_caps hypercall.
>>>
>>> Can you explain what the MMIO base address and length are here for? Do
>>> you expect to trap and emulate the MMIO region in Xen?
>>
>> Yes, we need to emulate the VTD MMIO registers in the Xen hypervisor;
>> this was agreed during the design stage. The MMIO base address is
>> passed to the guest via the ACPI table, which is built by the
>> toolstack, and so the toolstack manages the vIOMMU MMIO region. When
>> creating the vIOMMU, the base address and length need to be passed.
> 
> I am not yet sure we want to emulate an IOMMU for ARM. They are a bit
> complex to emulate and we have multiple ones (SMMUv2, SMMUv3,
> IPMMU-VMSA, ...). So PV might be the solution here. Though, it is too
> early to decide.

Yes. What I gathered about the ARM vIOMMU from the KVM side is that ARM
engineers are pushing a PV IOMMU, and the reason for that is just what
you said about the multiple IOMMU versions.

https://www.spinics.net/lists/kvm/msg147990.html

> 
> If we wanted to use emulation, an IOMMU may have multiple MMIO ranges
> and multiple interrupts (either legacy or MSI). Here you are assuming
> only one MMIO region and no interrupts. This new interface is a DOMCTL,
> so it might be OK to extend it in the future?

For Intel VTD, one instance's MMIO registers live in a "4KB-aligned
memory-mapped location", so we just need to pass the base address and a
length of 4KB. If another vendor has multiple MMIO regions, the
structure can be extended.
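
E.g. for the virtual VTD case with a single 4KB region (a sketch; the
base address below is just an example GPA picked by the toolstack and
advertised to the guest in the DMAR table it builds):

    struct xen_domctl_viommu_op op = {
        .cmd = XEN_DOMCTL_create_viommu,
        .u.create_viommu = {
            .viommu_type  = VIOMMU_TYPE_INTEL_VTD,
            .base_address = 0xfed90000,   /* example GPA */
            .length       = 0x1000,       /* one 4KB register page */
            .capabilities = VIOMMU_CAP_IRQ_REMAPPING,
        },
    };

    /* on success the hypervisor fills in op.u.create_viommu.viommu_id */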

Because we now just have one vIOMMU, all virtual interrupts will be
bound to it. If we need to support multiple vIOMMUs, we can add a
device-scope field (an SBDF array or something like that) to the
structure to specify which devices should sit under each vIOMMU.
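
Something like this (purely hypothetical, not part of this patchset):

        struct {
            /* IN - vIOMMU type */
            uint64_t viommu_type;
            /* IN - MMIO base address and length of vIOMMU */
            uint64_t base_address;
            uint64_t length;
            /* IN - Capabilities with which we want to create */
            uint64_t capabilities;
            /* IN - number of entries in sbdf_array */
            uint32_t nr_sbdf;
            /* IN - devices put under this vIOMMU, as SBDF values */
            XEN_GUEST_HANDLE_64(uint32) sbdf_array;
            /* OUT - vIOMMU identity */
            uint32_t viommu_id;
        } create_viommu;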





-- 
Best regards
Tianyu Lan


 

