RE: [PATCH 5/6] iommu: remove the share_p2m operation
> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 26 July 2020 09:50
> To: Paul Durrant <paul@xxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Durrant, Paul <pdurrant@xxxxxxxxxxxx>;
> Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; George Dunlap <george.dunlap@xxxxxxxxxx>;
> Wei Liu <wl@xxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>;
> Kevin Tian <kevin.tian@xxxxxxxxx>
> Subject: RE: [EXTERNAL] [PATCH 5/6] iommu: remove the share_p2m operation
>
> On 24.07.2020 18:46, Paul Durrant wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -313,6 +313,26 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
> >      return pte_maddr;
> >  }
> >
> > +static u64 domain_pgd_maddr(struct domain *d)
>
> uint64_t please.
>
Ok.
> > +{
> > +    struct domain_iommu *hd = dom_iommu(d);
> > +
> > +    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
> > +
> > +    if ( iommu_use_hap_pt(d) )
> > +    {
> > +        mfn_t pgd_mfn =
> > +            pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
> > +
> > +        return pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
> > +    }
> > +
> > +    if ( !hd->arch.vtd.pgd_maddr )
> > +        addr_to_dma_page_maddr(d, 0, 1);
> > +
> > +    return hd->arch.vtd.pgd_maddr;
> > +}
> > +
> >  static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
> >  {
> >      u32 val;
> > @@ -1347,22 +1367,17 @@ int domain_context_mapping_one(
> >      {
> >          spin_lock(&hd->arch.mapping_lock);
> >
> > -        /* Ensure we have pagetables allocated down to leaf PTE. */
> > -        if ( hd->arch.vtd.pgd_maddr == 0 )
> > +        pgd_maddr = domain_pgd_maddr(domain);
> > +        if ( !pgd_maddr )
> >          {
> > -            addr_to_dma_page_maddr(domain, 0, 1);
> > -            if ( hd->arch.vtd.pgd_maddr == 0 )
> > -            {
> > -            nomem:
> > -                spin_unlock(&hd->arch.mapping_lock);
> > -                spin_unlock(&iommu->lock);
> > -                unmap_vtd_domain_page(context_entries);
> > -                return -ENOMEM;
> > -            }
> > +        nomem:
> > +            spin_unlock(&hd->arch.mapping_lock);
> > +            spin_unlock(&iommu->lock);
> > +            unmap_vtd_domain_page(context_entries);
> > +            return -ENOMEM;
> >          }
>
> This renders all calls bogus in shared mode - the function, if
> it ended up getting called nevertheless, would then still alloc
> the root table. Therefore I'd like to suggest that at least all
> its callers get an explicit check. That's really just
> dma_pte_clear_one() as it looks.
>
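For illustration, the explicit guard Jan is asking for could look roughly like the
following in dma_pte_clear_one(). This is a sketch only: the parameter list is
abbreviated, and whether to return early, return an error, or ASSERT() is a choice
for the actual follow-up patch.

    /*
     * Sketch only - not the real patch. With a shared p2m the VT-d code has
     * no page tables of its own, so addr_to_dma_page_maddr() should never be
     * reached in that mode; make that assumption explicit in the caller.
     */
    static int dma_pte_clear_one(struct domain *domain, uint64_t addr)
    {
        struct domain_iommu *hd = dom_iommu(domain);

        if ( iommu_use_hap_pt(domain) )
            return 0;

        spin_lock(&hd->arch.mapping_lock);
        /* ... existing last-level PTE lookup and clearing ... */
        spin_unlock(&hd->arch.mapping_lock);

        return 0;
    }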
Ok, I think I may move this code out into a separate function too, since the
nomem label is slightly awkward, so I'll refactor it.
Paul
> Jan
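A possible shape for the refactor Paul mentions - purely illustrative, with a
placeholder helper name, and assuming the lock/unmap state at the failure sites
stays as in the patch above:

    /*
     * Illustrative only: hoist the common error path out of
     * domain_context_mapping_one() so the in-line nomem: label can go away
     * and each failure site simply returns the helper's result.
     */
    static int context_map_nomem(struct domain_iommu *hd,
                                 struct vtd_iommu *iommu,
                                 struct context_entry *context_entries)
    {
        spin_unlock(&hd->arch.mapping_lock);
        spin_unlock(&iommu->lock);
        unmap_vtd_domain_page(context_entries);

        return -ENOMEM;
    }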