
Re: [Xen-devel] [PATCH] xen/passthrough: Support a single iommu_domain(context bank) per xen domain per SMMU



Hello Robert,

Thank you for the patch.

The commit title is too long (over 80 characters).

On 19/03/2015 18:25, Robbie VanVossen wrote:
If multiple devices are being passed through to the same domain and they
share a single SMMU, then they only require a single iommu_domain.

In arm_smmu_assign_dev, before a new iommu_domain is created, the
xen_domain->contexts is checked for any iommu_domains that are already
assigned to a device that uses the same SMMU as the current device. If one
is found, attach the device to that iommu_domain. If one isn't found,
create a new iommu_domain just like before.

The arm_smmu_deassign_dev function assumes that there is a single
device per iommu_domain. This meant that when the first device was
deassigned, the iommu_domain was freed, and when another device was
deassigned, a crash occurred in Xen.

To fix this, a reference counter was added to the iommu_domain struct.
When an arm_smmu_xen_device references an iommu_domain, the
iommu_domain's ref is incremented. When that reference is removed, the
iommu_domain's ref is decremented. The iommu_domain will only be freed
when the ref is 0.

Signed-off-by: Robbie VanVossen <robert.vanvossen@xxxxxxxxxxxxxxx>
---
  xen/drivers/passthrough/arm/smmu.c |  113 ++++++++++++++++++++++++++++--------
  1 file changed, 88 insertions(+), 25 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index a7a7da9..9b46054 100644

[..]

@@ -1596,7 +1617,7 @@ static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
        if (!cfg)
                return;

-       dev_iommu_domain(dev) = NULL;
+       iommu_domain_remove_device(domain, dev);
        arm_smmu_domain_remove_master(smmu_domain, cfg);
  }

I'd like to avoid modifying arm_smmu_attach_dev and arm_smmu_detach_dev, as they are part of the Linux code.

I think you can increment/decrement the reference counter in arm_smmu_assign_dev and arm_smmu_deassign_dev instead. That would also avoid a possible race condition (see below).
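
Something along these lines, maybe (just a rough sketch of the idea, untested, and assuming the atomic_t ref field your patch adds to struct iommu_domain):

        /* In arm_smmu_assign_dev, for both the shared and the newly created
         * context, take the reference once the attach has succeeded, so that
         * arm_smmu_attach_dev/arm_smmu_detach_dev stay exactly as in Linux. */
        ret = arm_smmu_attach_dev(domain, dev);
        if (!ret)
                atomic_inc(&domain->ref);

arm_smmu_deassign_dev would then do the matching atomic_dec (see the sketch at the end of this mail).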

@@ -2569,7 +2590,9 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
  {
        struct iommu_domain *domain;
        struct arm_smmu_xen_domain *xen_domain;
+       struct arm_smmu_device *smmu;
        int ret;
+       int existing_ctxt_fnd = 0;

        xen_domain = domain_hvm_iommu(d)->arch.priv;

@@ -2585,29 +2608,65 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
                        return ret;
        }

-       /*
-        * TODO: Share the context bank (i.e iommu_domain) when the device is
-        * under the same SMMU as another device assigned to this domain.
-        * Would it useful for PCI
+       /*
+        * Check to see if a context bank (iommu_domain) already exists for this xen domain
+        * under the same SMMU

A general comment on coding style within this file: a line should not be more than 80 characters long.

         */
-       domain = xzalloc(struct iommu_domain);
-       if (!domain)
-               return -ENOMEM;
+       if (!list_empty(&xen_domain->contexts)) {
+               smmu = find_smmu_for_device(dev);
+               if (!smmu) {
+                       dev_err(dev, "cannot find SMMU\n");
+                       return -ENXIO;
+               }

-       ret = arm_smmu_domain_init(domain);
-       if (ret)
-               goto err_dom_init;
+               /* Loop through the &xen_domain->contexts to locate a context assigned to this SMMU */
+               spin_lock(&xen_domain->lock);
+               list_for_each_entry(domain, &xen_domain->contexts, list) {

The insertion of the new iommu_domain into xen_domain->contexts is done after the device is attached, and that part is not protected by xen_domain->lock.

Therefore it may be possible to end up with two iommu_domains for the same SMMU.

+                       if(domain->priv->smmu == smmu)
+                       {
+                               /* We have found a context already associated with the same xen domain and SMMU */
+                               ret = arm_smmu_attach_dev(domain, dev);
+                               if (ret) {
+                                       /*
+                                        * TODO: If arm_smmu_attach_dev fails, should we perform arm_smmu_domain_destroy,
+                                        * eventhough another smmu_master is configured correctly? If Not, what error
+                                        * code should we use
+                                        */

The failure may be because we don't have any stream registers available, so I don't think we should detach all the other devices within the same context.

+                                       dev_err(dev, "cannot attach device to already existing iommu_domain\n");
+                                       return -ENXIO;
+                               }
+
+                               existing_ctxt_fnd = 1;
+                               break;
+                       }
+                       
+               }
+               spin_unlock(&xen_domain->lock);
+       }
+       
+       if(!existing_ctxt_fnd){

if (!existing_ctxt_fnd) {

[..]

@@ -2633,12 +2692,16 @@ static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)

        arm_smmu_detach_dev(domain, dev);

-       spin_lock(&xen_domain->lock);
-       list_del(&domain->list);
-       spin_unlock(&xen_domain->lock);
+       if (domain->ref.counter == 0)
+       {

There is a possible race with the previous function: arm_smmu_assign_dev still has time to increment domain->ref, and we would then free a domain that has one device assigned.

Overall, I think both functions should be completely protected by xen_domain->lock.
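
For instance, arm_smmu_deassign_dev could end up looking something like the sketch below (untested, assuming the atomic_t ref field added by this patch; the lookup and sanity checks stay as they are today):

        static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
        {
                struct iommu_domain *domain = dev_iommu_domain(dev);
                struct arm_smmu_xen_domain *xen_domain;

                xen_domain = domain_hvm_iommu(d)->arch.priv;

                /* ... same sanity checks on domain as in the current code ... */

                spin_lock(&xen_domain->lock);

                arm_smmu_detach_dev(domain, dev);
                atomic_dec(&domain->ref);

                /* Only tear the context down once the last device is gone. */
                if (atomic_read(&domain->ref) == 0) {
                        list_del(&domain->list);
                        arm_smmu_domain_destroy(domain);
                        xfree(domain);
                }

                spin_unlock(&xen_domain->lock);

                return 0;
        }

With arm_smmu_assign_dev doing the same (taking the lock before walking xen_domain->contexts and only releasing it after the list_add of a new context), it would no longer be possible to create two contexts for the same SMMU or to free a context which has just gained a device.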

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

