Re: [PATCH v4 16/21] VT-d: free all-empty page tables
On Wed, May 18, 2022 at 12:26:03PM +0200, Jan Beulich wrote:
> On 10.05.2022 16:30, Roger Pau Monné wrote:
> > On Mon, Apr 25, 2022 at 10:42:50AM +0200, Jan Beulich wrote:
> >> @@ -837,9 +843,31 @@ static int dma_pte_clear_one(struct doma
> >>
> >>      old = *pte;
> >>      dma_clear_pte(*pte);
> >> +    iommu_sync_cache(pte, sizeof(*pte));
> >> +
> >> +    while ( pt_update_contig_markers(&page->val,
> >> +                                     address_level_offset(addr, level),
> >> +                                     level, PTE_kind_null) &&
> >> +            ++level < min_pt_levels )
> >> +    {
> >> +        struct page_info *pg = maddr_to_page(pg_maddr);
> >> +
> >> +        unmap_vtd_domain_page(page);
> >> +
> >> +        pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags,
> >> +                                          false);
> >> +        BUG_ON(pg_maddr < PAGE_SIZE);
> >> +
> >> +        page = map_vtd_domain_page(pg_maddr);
> >> +        pte = &page[address_level_offset(addr, level)];
> >> +        dma_clear_pte(*pte);
> >> +        iommu_sync_cache(pte, sizeof(*pte));
> >> +
> >> +        *flush_flags |= IOMMU_FLUSHF_all;
> >> +        iommu_queue_free_pgtable(hd, pg);
> >> +    }
> >
> > I think I'm setting myself up for trouble, but do we need to sync the
> > cache for the lower level entries if higher level ones are to be
> > changed?
> >
> > IOW, would it be fine to just flush the highest level modified PTE,
> > as the lower level ones won't be reachable anyway?
>
> I definitely want to err on the safe side here. If later we can
> prove that some cache flush is unneeded, I'd be happy to see it
> go away.
Hm, so it's not only about adding more cache flushes, but also about
moving them inside of the locked region: previously the only cache
flush was done outside of it.

I can't convince myself why we would need to flush the cache for
entries that are about to be removed, and that also point to pages
scheduled to be freed.
> >> @@ -2182,8 +2210,21 @@ static int __must_check cf_check intel_i
> >>      }
> >>
> >>      *pte = new;
> >> -
> >>      iommu_sync_cache(pte, sizeof(struct dma_pte));
> >> +
> >> +    /*
> >> +     * While the (ab)use of PTE_kind_table here allows to save some work in
> >> +     * the function, the main motivation for it is that it avoids a so far
> >> +     * unexplained hang during boot (while preparing Dom0) on a Westmere
> >> +     * based laptop.
> >> +     */
> >> +    pt_update_contig_markers(&page->val,
> >> +                             address_level_offset(dfn_to_daddr(dfn), level),
> >> +                             level,
> >> +                             (hd->platform_ops->page_sizes &
> >> +                              (1UL << level_to_offset_bits(level + 1))
> >> +                              ? PTE_kind_leaf : PTE_kind_table));
> >
> > So this works because on what we believe to be affected models the
> > only supported page sizes are 4K?
>
> Yes.
>
> > Do we want to do the same with AMD if we don't allow 512G super pages?
>
> Why? They don't have a similar flaw.
So the question was mostly whether we should also avoid calling
pt_update_contig_markers() for 1G entries, because we won't coalesce
them into a 512G super-page anyway. IOW avoid the overhead of updating
the contig markers when we know that the resulting super-page is not
supported by ->page_sizes.
Thanks, Roger.