Re: [Xen-devel] [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
> -----Original Message-----
> From: Roger Pau Monne
> Sent: 13 September 2018 09:50
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Kevin Tian <kevin.tian@xxxxxxxxx>; Wei
> Liu <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>; Jan
> Beulich <jbeulich@xxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
>
> On Wed, Sep 12, 2018 at 12:30:28PM +0100, Paul Durrant wrote:
> > This patch adds a new method to the VT-d IOMMU implementation to find
> > the MFN currently mapped by the specified DFN, along with a wrapper
> > function in generic IOMMU code to call the implementation if it exists.
> >
> > This patch also cleans up the initializers in intel_iommu_map_page() and
> > uses array-style dereference there, for consistency. A missing check for
> > shared EPT is also added to intel_iommu_unmap_page().
> >
> > NOTE: This patch only adds a Xen-internal interface. This will be used by
> > a subsequent patch.
> > Another subsequent patch will add similar functionality for AMD
> > IOMMUs.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > ---
> > Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> > Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > Cc: George Dunlap <george.dunlap@xxxxxxxxxx>
> >
> > v7:
> > - Re-base and re-name BFN -> DFN.
> > - Add missing checks for shared EPT and iommu_passthrough.
> > - Remove unnecessary initializers and use array-style dereference.
> > - Drop Wei's R-b because of code churn.
> >
> > v3:
> > - Addressed comments from George.
> >
> > v2:
> > - Addressed some comments from Jan.
> > ---
> > xen/drivers/passthrough/iommu.c | 11 ++++++++
> > xen/drivers/passthrough/vtd/iommu.c | 52 +++++++++++++++++++++++++++++++++++--
> > xen/drivers/passthrough/vtd/iommu.h | 3 +++
> > xen/include/xen/iommu.h | 4 +++
> > 4 files changed, 68 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> > index a16f1a0c66..52e3f500c7 100644
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -296,6 +296,17 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
> > return rc;
> > }
> >
> > +int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > + unsigned int *flags)
> > +{
> > + const struct domain_iommu *hd = dom_iommu(d);
> > +
> > + if ( !iommu_enabled || !hd->platform_ops )
> > + return -EOPNOTSUPP;
> > +
> > + return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
> > +}
> > +
> > static void iommu_free_pagetables(unsigned long unused)
> > {
> > do {
> > diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> > index 0163bb949b..6622c2dd4c 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -1770,7 +1770,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> > unsigned int flags)
> > {
> > struct domain_iommu *hd = dom_iommu(d);
> > - struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> > + struct dma_pte *page, *pte, old, new = {};
> > u64 pg_maddr;
> > int rc = 0;
> >
> > @@ -1790,9 +1790,11 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> > spin_unlock(&hd->arch.mapping_lock);
> > return -ENOMEM;
> > }
> > +
> > page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> > - pte = page + (dfn_x(dfn) & LEVEL_MASK);
> > + pte = &page[dfn_x(dfn) & LEVEL_MASK];
> > old = *pte;
> > +
> > dma_set_pte_addr(new, mfn_to_maddr(mfn));
> > dma_set_pte_prot(new,
> > ((flags & IOMMUF_readable) ? DMA_PTE_READ : 0) |
> > @@ -1808,6 +1810,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> > unmap_vtd_domain_page(page);
> > return 0;
> > }
> > +
> > *pte = new;
> >
> > iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> > @@ -1823,6 +1826,10 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> > static int __must_check intel_iommu_unmap_page(struct domain *d,
> > dfn_t dfn)
> > {
> > + /* Do nothing if VT-d shares EPT page table */
> > + if ( iommu_use_hap_pt(d) )
> > + return 0;
> > +
> > /* Do nothing if hardware domain and iommu supports pass thru. */
> > if ( iommu_passthrough && is_hardware_domain(d) )
> > return 0;
> > @@ -1830,6 +1837,46 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
> > return dma_pte_clear_one(d, dfn_to_daddr(dfn));
> > }
> >
> > +static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > +                                   unsigned int *flags)
>
> Would be nice to constify domain here, but I see that's not possible
> because of the lock.
>
> > +{
> > + struct domain_iommu *hd = dom_iommu(d);
> > + struct dma_pte *page, val;
> > + u64 pg_maddr;
>
> Use uint64_t.
>
> > +
> > + /* Fail if VT-d shares EPT page table */
> > + if ( iommu_use_hap_pt(d) )
> > + return -ENOENT;
> > +
> > + /* Fail if hardware domain and iommu supports pass thru. */
> > + if ( iommu_passthrough && is_hardware_domain(d) )
> > + return -ENOENT;
>
> I would join the two if conditions above, and consider returning
> something different from ENOENT, since that's also used to signal that
> iommu page tables are present but the entry is not present. Maybe
> EOPNOTSUPP or ENOSYS?
>
> I also dislike that other iommu functions simply return 0 without
> doing anything when the page tables are shared.
>
That's Kevin's preference, though, and he is the subsystem maintainer.
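
For illustration, Roger's suggestion above (one merged condition, and an
error code distinct from -ENOENT) might look like the toy below. The
predicates here are stand-in stub variables, not the real Xen symbols
(iommu_use_hap_pt(), iommu_passthrough, is_hardware_domain()); this is a
sketch of the control flow only, not the committed code.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-ins for the real Xen predicates -- purely illustrative. */
static bool use_hap_pt;   /* iommu_use_hap_pt(d) */
static bool passthrough;  /* iommu_passthrough */
static bool hwdom;        /* is_hardware_domain(d) */

/*
 * Merged early exit: when VT-d shares the EPT page tables, or the
 * hardware domain runs in passthrough mode, there are no separate IOMMU
 * page tables to walk. Returning -EOPNOTSUPP keeps -ENOENT free to mean
 * "page tables exist, but this DFN has no entry".
 */
static int lookup_early_exit(void)
{
    if ( use_hap_pt || (passthrough && hwdom) )
        return -EOPNOTSUPP;

    return 0; /* would proceed to walk the page tables */
}
```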
> > +
> > + spin_lock(&hd->arch.mapping_lock);
> > +
> > + pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
> > + if ( pg_maddr == 0 )
>
> It's more common to use !pg_maddr.
>
OK. I'll make some style adjustments, but I generally try to follow the
prevailing style of the source module.
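
As an aside, the split the patch introduces (the generic wrapper returns
-EOPNOTSUPP when no lookup implementation exists, while the backend
returns -ENOENT when the entry is simply absent) can be sketched with a
self-contained toy. All types and names below are stand-ins, not Xen's
actual definitions.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

typedef unsigned long dfn_t; /* toy stand-ins for Xen's typesafe handles */
typedef unsigned long mfn_t;

/* Minimal stand-in for iommu_ops / domain_iommu. */
struct ops {
    int (*lookup_page)(dfn_t dfn, mfn_t *mfn, unsigned int *flags);
};

/* Toy backend: pretend only DFN 42 is mapped. */
static int stub_lookup(dfn_t dfn, mfn_t *mfn, unsigned int *flags)
{
    if ( dfn != 42 )
        return -ENOENT;      /* page tables exist, entry absent */

    *mfn = 0x1234;
    *flags = 3;              /* readable | writable, notionally */
    return 0;
}

/* Mirrors the shape of the new generic iommu_lookup_page() wrapper. */
static int lookup(const struct ops *ops, dfn_t dfn, mfn_t *mfn,
                  unsigned int *flags)
{
    if ( !ops )              /* i.e. !iommu_enabled || !hd->platform_ops */
        return -EOPNOTSUPP;

    return ops->lookup_page(dfn, mfn, flags);
}
```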
Paul
> Thanks, Roger.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel