Re: [PATCH v4 16/21] VT-d: free all-empty page tables

On 20.05.2022 13:13, Roger Pau Monné wrote:
> On Wed, May 18, 2022 at 12:26:03PM +0200, Jan Beulich wrote:
>> On 10.05.2022 16:30, Roger Pau Monné wrote:
>>> On Mon, Apr 25, 2022 at 10:42:50AM +0200, Jan Beulich wrote:
>>>> @@ -837,9 +843,31 @@ static int dma_pte_clear_one(struct doma
>>>>  
>>>>      old = *pte;
>>>>      dma_clear_pte(*pte);
>>>> +    iommu_sync_cache(pte, sizeof(*pte));
>>>> +
>>>> +    while ( pt_update_contig_markers(&page->val,
>>>> +                                     address_level_offset(addr, level),
>>>> +                                     level, PTE_kind_null) &&
>>>> +            ++level < min_pt_levels )
>>>> +    {
>>>> +        struct page_info *pg = maddr_to_page(pg_maddr);
>>>> +
>>>> +        unmap_vtd_domain_page(page);
>>>> +
>>>> +        pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags,
>>>> +                                          false);
>>>> +        BUG_ON(pg_maddr < PAGE_SIZE);
>>>> +
>>>> +        page = map_vtd_domain_page(pg_maddr);
>>>> +        pte = &page[address_level_offset(addr, level)];
>>>> +        dma_clear_pte(*pte);
>>>> +        iommu_sync_cache(pte, sizeof(*pte));
>>>> +
>>>> +        *flush_flags |= IOMMU_FLUSHF_all;
>>>> +        iommu_queue_free_pgtable(hd, pg);
>>>> +    }
>>>
>>> I think I'm setting myself up for trouble, but do we need to sync the
>>> cache for the lower level entries if higher level ones are to be
>>> changed?
>>>
>>> IOW, would it be fine to just flush the highest level modified PTE?
>>> As the lower level ones won't be reachable anyway.
>>
>> I definitely want to err on the safe side here. If later we can
>> prove that some cache flush is unneeded, I'd be happy to see it
>> go away.
>
> Hm, so it's not only about adding more cache flushes, but moving them
> inside of the locked region: previously the only cache flush was done
> outside of the locked region.
>
> I guess I can't convince myself why we would need to flush the cache
> of entries that are to be removed, and that also point to pages
> scheduled to be freed.

As previously said - with a series like this I wanted to be strictly on
the safe side, maintaining the pre-existing pattern of all modifications
of live tables being accompanied by a flush (if flushes are needed in
the first place, of course).

As to moving flushes into the locked region, I don't view this as a
problem, seeing in particular that elsewhere we already have flushes
with the lock held (at the very least the _full page_ one in
addr_to_dma_page_maddr(), but also e.g. in intel_iommu_map_page(),
where it could be easily moved past the unlock).

If you (continue to) think that breaking the present pattern isn't
going to misguide future changes, I can certainly drop these not really
necessary flushes. Otoh I was actually considering, subsequently, to
integrate the flushes into e.g. dma_clear_pte(), to make it virtually
impossible to break that pattern. This would imply that all page table
related flushes would then occur with the lock held. (I won't
separately reply to the similar topic on patch 18.)
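
Roughly along these lines, as a purely illustrative sketch (the
dma_clear_pte_and_sync() name and the single-u64 ->val layout of
struct dma_pte are assumptions of the sketch, not what the tree has
today, where the clearing helper operates on the entry itself):

    /*
     * Illustrative sketch only, not the current in-tree definition:
     * have the clearing helper take a pointer and do the (possibly
     * no-op) cache writeback itself, so no caller can clear a live
     * PTE without the accompanying flush.
     */
    static inline void dma_clear_pte_and_sync(struct dma_pte *pte)
    {
        pte->val = 0;                        /* assumes the single 64-bit word layout */
        iommu_sync_cache(pte, sizeof(*pte)); /* as done after each clear in the hunk above */
    }

dma_pte_clear_one() and the new loop above would then go through this
one helper, and the flush couldn't be forgotten by future callers.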
>>>> @@ -2182,8 +2210,21 @@ static int __must_check cf_check intel_i
>>>>      }
>>>>  
>>>>      *pte = new;
>>>> -
>>>>      iommu_sync_cache(pte, sizeof(struct dma_pte));
>>>> +
>>>> +    /*
>>>> +     * While the (ab)use of PTE_kind_table here allows to save some work in
>>>> +     * the function, the main motivation for it is that it avoids a so far
>>>> +     * unexplained hang during boot (while preparing Dom0) on a Westmere
>>>> +     * based laptop.
>>>> +     */
>>>> +    pt_update_contig_markers(&page->val,
>>>> +                             address_level_offset(dfn_to_daddr(dfn), level),
>>>> +                             level,
>>>> +                             (hd->platform_ops->page_sizes &
>>>> +                              (1UL << level_to_offset_bits(level + 1))
>>>> +                              ? PTE_kind_leaf : PTE_kind_table));
>>>
>>> So this works because on what we believe to be affected models the
>>> only supported page sizes are 4K?
>>
>> Yes.
>>
>>> Do we want to do the same with AMD if we don't allow 512G super pages?
>>
>> Why? They don't have a similar flaw.
>
> So the question was mostly whether we should also avoid the
> pt_update_contig_markers for 1G entries, because we won't coalesce
> them into a 512G anyway. IOW avoid the overhead of updating the
> contig markers if we know that the resulting super-page is not
> supported by ->page_sizes.

As the comment says, I consider this at least partly an abuse of
PTE_kind_table, so I'm wary of extending this to AMD. But if you
continue to think it's worth it, I could certainly do so there as well.

Jan
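
P.S.: For completeness, the AMD-side variant being discussed would, as
I understand it, amount to a guard along these lines (illustrative
only, reusing the names from the VT-d hunk above rather than the
actual AMD code, and leaving aside whether outright skipping - rather
than passing PTE_kind_table - keeps the marker state consistent):

    /*
     * Illustrative only: skip the contig-marker bookkeeping when the
     * next level's super-page size isn't offered via ->page_sizes
     * anyway, as such entries could never be coalesced.
     */
    if ( hd->platform_ops->page_sizes &
         (1UL << level_to_offset_bits(level + 1)) )
        pt_update_contig_markers(&page->val,
                                 address_level_offset(dfn_to_daddr(dfn), level),
                                 level, PTE_kind_leaf);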