Re: [Xen-devel] [PATCH v4 0/3] VT-d Device-TLB flush issue
> On 23.12.2015 at 6:39pm, 'Jan Beulich' wrote:
> >>> Quan Xu <quan.xu@xxxxxxxxx> 12/23/15 9:26 AM >>>
> >This patch set is based on Kevin Tian's previous discussion 'Revisit VT-d
> >asynchronous flush issue'.
> >It addresses the current timeout concern and also allows limited ATS
> >support in a lightweight way:
> >
> >1. Check VT-d Device-TLB flush errors.
> >This patch checks all kinds of errors all the way up the call trees of
> >VT-d Device-TLB flush.
> >
> >2. Reduce the spin timeout to 1ms, which can be changed at boot time with
> >'iommu_qi_timeout_ms'.
> >For example:
> >multiboot /boot/xen.gz ats=1 iommu_qi_timeout_ms=100
> >
> >3. Fix the VT-d Device-TLB flush timeout issue.
> >For now, if an IOTLB/Context/IEC flush times out, panic the hypervisor.
> >A coming patch set will fix this properly.
>
> There must have been a misunderstanding: Your earlier outline didn't
> indicate you mean to introduce panics here,

It is already a panic for an IOTLB/Context/IEC flush timeout without my patch
set. I just moved the panic from queue_invalidate_wait() to invalidate_sync().
I did NOT introduce a new panic.

> even if only temporarily. I'm afraid I'm not currently willing to take any
> conceptually wrong patches anymore with just the promise of fixing the
> issue(s) later (and I think we've mentioned this in some past discussion on
> the list, albeit unlikely in the context of any of your work).
> This may mean that the earlier described ordering of the things you mean to
> do needs changing.

:(:( ..

> I'm sorry that you're hit first by this, all the more so as it was not you
> but colleagues of yours who caused this change to the acceptance model.
>
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
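[Editor's note: the following is a minimal, hypothetical C sketch of the behaviour
discussed above -- a spin timeout tunable at boot time (the 'iommu_qi_timeout_ms'
option from the cover letter) and a timeout panic raised in invalidate_sync()
rather than in queue_invalidate_wait(). The function names mirror those mentioned
in the thread, but the bodies, the now_ms() helper, and the wait_descriptor_done
flag are illustrative assumptions, not the actual Xen code or the actual patch.]

/*
 * Illustrative sketch only -- NOT the actual Xen patch.  Models a boot-time
 * tunable queued-invalidation timeout and a panic raised in invalidate_sync()
 * rather than in queue_invalidate_wait().
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/*
 * Boot-time tunable, e.g. "iommu_qi_timeout_ms=100" on the Xen command line.
 * In the hypervisor this would be wired up via a command-line parameter hook;
 * here it is a plain variable so the sketch stays self-contained.
 */
static unsigned int iommu_qi_timeout_ms = 1;   /* default lowered to 1ms */

/* Stand-in for the hardware signalling completion of the wait descriptor. */
static volatile int wait_descriptor_done;

static uint64_t now_ms(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
}

/*
 * Queues a wait descriptor and spins for completion.  It no longer panics
 * itself: on timeout it returns an error to its caller.
 */
static int queue_invalidate_wait(void)
{
    uint64_t deadline = now_ms() + iommu_qi_timeout_ms;

    /* (the real code would submit the wait descriptor to the QI ring here) */
    while ( !wait_descriptor_done )
        if ( now_ms() > deadline )
            return -ETIMEDOUT;

    return 0;
}

/*
 * The panic moves here: callers that learn to handle the error in a
 * follow-up series would check the return value instead of panicking.
 */
static void invalidate_sync(void)
{
    int rc = queue_invalidate_wait();

    if ( rc == -ETIMEDOUT )
    {
        fprintf(stderr, "queue invalidate wait descriptor timed out\n");
        exit(EXIT_FAILURE);   /* stand-in for panic() in the hypervisor */
    }
}

int main(void)
{
    /* Pretend the hardware completed the flush immediately. */
    wait_descriptor_done = 1;
    invalidate_sync();
    printf("flush completed within %ums\n", iommu_qi_timeout_ms);
    return 0;
}

[Compiled stand-alone (e.g. gcc sketch.c), this prints the success path; keeping
wait_descriptor_done at 0 exercises the timeout/panic path, which is the case the
thread is arguing about.]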