
Re: [Xen-devel] [PATCH v5 01/10] vt-d: fix the IOMMU flush issue



On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@xxxxxxxxx> wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >      }
> >  }
> >
> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> > -        int dma_old_pte_present, unsigned int page_count)
> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> > +                                     bool_t dma_old_pte_present,
> > +                                     unsigned int page_count)
> 
> I realize you say so in the overview mail, but the continuing lack of
> __must_check here causes review trouble again. And I have a hard time seeing
> how adding these annotations right away would "disrupt the order", as long
> as the series is properly ordered / broken up.
> 

If I add the __must_check annotation here right now, e.g.:

-static void intel_iommu_iotlb_flush()
+static int __must_check iommu_flush_iotlb_pages()

...

@@ -179,8 +179,9 @@ struct iommu_ops {
-    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
...
}

then the matching change to the iommu_ops hook would have to be folded into
this patch as well.
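
To illustrate, stripped down to its essentials (placeholder code, not the
actual Xen sources), the annotation propagates like this:

/* Placeholder illustration only; names and bodies are stand-ins. */
#define __must_check __attribute__((__warn_unused_result__))

struct domain;

struct iommu_ops {
    /* The hook gains the annotation together with the implementation ... */
    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
                                    unsigned int page_count);
};

static int __must_check iommu_flush_iotlb_pages(struct domain *d,
                                                unsigned long gfn,
                                                unsigned int page_count)
{
    (void)d; (void)gfn; (void)page_count;
    return 0;                    /* real flush logic elided */
}

static const struct iommu_ops intel_iommu_ops = {
    .iotlb_flush = iommu_flush_iotlb_pages,
};

/* ... and every caller of the hook is then expected to consume the result. */
static int flush_one_page(struct domain *d, unsigned long gfn)
{
    return intel_iommu_ops.iotlb_flush(d, gfn, 1);
}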




> > @@ -579,23 +581,28 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> >
> >          flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> >          iommu_domid= domain_iommu_domid(d, iommu);
> > +
> >          if ( iommu_domid == -1 )
> 
> I appreciate you adding blank lines where needed, but this one looks spurious.
> 
> > @@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
> >      spin_unlock(&iommu->lock);
> >
> >      /* Context entry was previously non-present (with domid 0). */
> > -    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> > -                                    DMA_CCMD_MASK_NOBIT, 1) )
> > -        iommu_flush_write_buffer(iommu);
> > -    else
> > +    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> 
> If you already touch such code, I'd appreciate if you did away with the open
> coding of pre-canned macros or inline functions (PCI_BDF2() in this case).

I will switch this to PCI_BDF2() in v6.
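
I.e. something along these lines (just a sketch, assuming the existing
PCI_BDF2() macro):

    rc = iommu_flush_context_device(iommu, 0, PCI_BDF2(bus, devfn),
                                    DMA_CCMD_MASK_NOBIT, 1);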

> 
> > +                                    DMA_CCMD_MASK_NOBIT, 1);
> > +
> > +    /*
> > +     * The current logic for rc returns:
> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > +     *   - zero      success.
> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> > +     *               effort basis.
> > +     */
> > +    if ( rc <= 0 )
> >      {
> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> > +
> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> 
> If rc was negative before this call, you may end up returning success without
> having been successful. Furthermore I think it was you who last time round
> reminded me that
> iommu_flush_iotlb_dsi() can also return 1, which you don't take care of.
> 

Yes, iommu_flush_iotlb_dsi() can also return 1; see the call tree, at the
beginning of flush_context_qi()/flush_iotlb_qi() or
flush_context_reg()/flush_iotlb_reg().

However, if rc was negative from the iommu_flush_context_device() call, it is
impossible for iommu_flush_iotlb_dsi() to then return 1.

Furthermore, IMO this does not need to be spelled out in the comment.
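
Just to make sure we are talking about the same thing, the rc handling I
understand you to be asking for would look roughly like this (sketch only,
mirroring the patch context; whether the extra handling is really needed is
exactly the question above):

    rc = iommu_flush_context_device(iommu, 0, PCI_BDF2(bus, devfn),
                                    DMA_CCMD_MASK_NOBIT, 1);

    if ( rc <= 0 )
    {
        int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
        int ret = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);

        /* A positive return asks for a write buffer flush instead. */
        if ( ret > 0 )
            iommu_flush_write_buffer(iommu);

        /* Do not let the best-effort IOTLB flush hide an earlier error. */
        if ( !rc )
            rc = ret > 0 ? 0 : ret;
    }
    else
        iommu_flush_write_buffer(iommu);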

> > @@ -1522,6 +1544,7 @@ int domain_context_unmap_one(
> >      iommu_flush_cache_entry(context, sizeof(struct context_entry));
> >
> >      iommu_domid= domain_iommu_domid(domain, iommu);
> > +
> >      if ( iommu_domid == -1 )
> 
> Seemingly stray addition of blank line again (more such below). And the code
> below has the same issue as that above.
> 
I will address both in v6.

Quan
