Re: [Xen-devel] [PATCH v7 09/11] vt-d: fix the IOMMU flush issue
On June 12, 2016 5:27 PM, Xu, Quan <quan.xu@xxxxxxxxx> wrote:
> On June 12, 2016 3:33 PM, Tian, Kevin <kevin.tian@xxxxxxxxx> wrote:
> > > From: Xu, Quan
> > > Sent: Wednesday, June 08, 2016 4:59 PM
> > >
> > > @@ -545,18 +549,42 @@ static int __must_check iommu_flush_all(void)
> > >  {
> > >      struct acpi_drhd_unit *drhd;
> > >      struct iommu *iommu;
> > > -    int flush_dev_iotlb;
> > > +    int rc = 0;
> > >
> > >      flush_all_cache();
> > >      for_each_drhd_unit ( drhd )
> > >      {
> > >          iommu = drhd->iommu;
> > > -        iommu_flush_context_global(iommu, 0);
> > > -        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > > -        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> > > +        /*
> > > +         * The current logic for rc returns:
> > > +         *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > > +         *   - zero      on success.
> > > +         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> > > +         *     best effort basis.
> > > +         *
> > > +         * Moreover, IOMMU flush handlers flush_context_qi and flush_iotlb_qi
> > > +         * (or flush_context_reg and flush_iotlb_reg, deep functions in the
> > > +         * call trees of iommu_flush_context_global and iommu_flush_iotlb_global)
> > > +         * are with the same logic to bubble up positive return value.
> > > +         */
> > > +        rc = iommu_flush_context_global(iommu, 0);
> > > +        if ( rc <= 0 )
> > > +        {
> > > +            int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > > +            int ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> > > +
> > > +            ASSERT(ret <= 0);
> > > +            if ( !rc )
> > > +                rc = ret;
> >
> > I'm dubious about the assertion here. Why can't the above call return 1
> > upon an error in the earlier flush? I dug back through your earlier reply:
> >
> > > Yes, the iommu_flush_iotlb_dsi() can also return 1.
> > > Look at the call tree, at the beginning of
> > > flush_context_qi()/flush_iotlb_qi(), or
> > > flush_context_reg()/flush_iotlb_reg()..
> > >
> > > If rc was negative when we call iommu_flush_context_device(), it is
> > > impossible to return 1 for iommu_flush_iotlb_dsi().
> >
> > But I don't think it a good idea to make so many assumptions about the
> > internal implementation of those low-level interfaces.
> > Also, flush_context may fail for one specific reason which doesn't
> > block flush_iotlb, which could get 1 returned when caching mode is
> > disabled. We'd better have the return-1 case correctly handled here.
>
> Your comment looks reasonable here. Could I change it as below:
>
> -static int iommu_flush_iotlb_psi(
> -    struct iommu *iommu, u16 did, u64 addr, unsigned int order,
> -    int flush_non_present_entry, int flush_dev_iotlb)
> +static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16 did,
> +                                              u64 addr, unsigned int order,
> +                                              int flush_non_present_entry,
> +                                              int flush_dev_iotlb)
>  {
>      struct iommu_flush *flush = iommu_get_flush(iommu);
>      int status;
> @@ -546,17 +550,35 @@ static int __must_check iommu_flush_all(void)
>      struct acpi_drhd_unit *drhd;
>      struct iommu *iommu;
>      int flush_dev_iotlb;
> +    int rc = 0;
>
>      flush_all_cache();
>      for_each_drhd_unit ( drhd )
>      {
> +        int ret;
> +
>          iommu = drhd->iommu;
> -        iommu_flush_context_global(iommu, 0);
> +        /*
> +         * The current logic for rc returns:
> +         *   - positive  invoke iommu_flush_write_buffer to flush cache.
> +         *   - zero      on success.
> +         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> +         *     best effort basis.
> +         */
> +        rc = iommu_flush_context_global(iommu, 0);
>          flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> -        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> +        ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> +        if ( !rc )
> +            rc = ret;
> +
> +        if ( rc > 0 || ret > 0 )
> +            iommu_flush_write_buffer(iommu);
>      }
>
> -    return 0;
> +    if ( rc > 0 )
> +        rc = 0;
> +
> +    return rc;
>  }
>

Ah, this change is not correct, as a previous error return value may be erased
by a later positive / zero value. I'll highlight that this change is not under
Jan's R-b in the next v8.
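To make the erasure concrete, here is a minimal standalone sketch (not the Xen
code: the per-unit flush results are hard-coded, and the names flush_all_v1()
and flush_all_v2() are made up for illustration). It compares the combining
scheme from the hunk above with a first-error-preserving scheme like the one
in the corrected hunk below, for two IOMMUs where the first unit's context
flush fails with -EIO and the second unit succeeds:

#include <errno.h>
#include <stdio.h>

/* Hard-coded per-unit flush results, purely for illustration:
 * unit 0: context flush fails (-EIO), iotlb flush succeeds (0)
 * unit 1: both flushes succeed (0)
 */
static const int ctx_result[]   = { -EIO, 0 };
static const int iotlb_result[] = { 0, 0 };
#define NR_UNITS 2

/* Combining scheme of the hunk above: rc is overwritten by the current
 * unit's context-flush result on every iteration, so unit 0's -EIO is
 * lost once unit 1 succeeds. */
static int flush_all_v1(void)
{
    int rc = 0, i;

    for ( i = 0; i < NR_UNITS; i++ )
    {
        int ret;

        rc = ctx_result[i];            /* clobbers any earlier error */
        ret = iotlb_result[i];
        if ( !rc )
            rc = ret;
        /* if ( rc > 0 || ret > 0 ) a write-buffer flush would happen here */
    }

    return rc > 0 ? 0 : rc;
}

/* First-error-preserving scheme: rc is only updated while it is still
 * non-negative, so the first negative value sticks. */
static int flush_all_v2(void)
{
    int rc = 0, i;

    for ( i = 0; i < NR_UNITS; i++ )
    {
        int iommu_rc  = ctx_result[i];
        int iommu_ret = iotlb_result[i];

        /* if ( iommu_rc > 0 || iommu_ret > 0 ) write-buffer flush here */
        if ( rc >= 0 )
            rc = iommu_rc;
        if ( rc >= 0 )
            rc = iommu_ret;
    }

    return rc > 0 ? 0 : rc;
}

int main(void)
{
    printf("v1 combining: %d (unit 0's error erased)\n", flush_all_v1());
    printf("v2 combining: %d (unit 0's error preserved)\n", flush_all_v2());
    return 0;
}

The first scheme reports 0 because rc is reassigned from the current unit's
context-flush result on every loop iteration; the second only updates rc while
it is still non-negative, so unit 0's -EIO survives to the final return.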
I think the below is correct:

+static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16 did,
+                                              u64 addr, unsigned int order,
+                                              int flush_non_present_entry,
+                                              int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -546,17 +550,37 @@ static int __must_check iommu_flush_all(void)
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
+    int rc = 0;

     flush_all_cache();
     for_each_drhd_unit ( drhd )
     {
+        int iommu_rc, iommu_ret;
+
         iommu = drhd->iommu;
-        iommu_flush_context_global(iommu, 0);
+        iommu_rc = iommu_flush_context_global(iommu, 0);
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+        iommu_ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+
+        /*
+         * The current logic for returns:
+         *   - positive  invoke iommu_flush_write_buffer to flush cache.
+         *   - zero      on success.
+         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+         *     best effort basis.
+         */
+        if ( iommu_rc > 0 || iommu_ret > 0 )
+            iommu_flush_write_buffer(iommu);
+        if ( rc >= 0 )
+            rc = iommu_rc;
+        if ( rc >= 0 )
+            rc = iommu_ret;
     }

-    return 0;
+    if ( rc > 0 )
+        rc = 0;
+
+    return rc;
 }

>
> Also, Jan, what's your opinion?
>
> Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel