Re: [Xen-devel] [PATCH v2 08/11] IOMMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (leaf ones).
On April 19, 2016 2:58pm, Tian, Kevin <kevin.tian@xxxxxxxxx> wrote:
> > From: Xu, Quan
> > Sent: Monday, April 18, 2016 10:00 PM
> >
> > When an IOMMU Device-TLB flush times out, Xen currently calls panic().
> > However, the existing panic() is going to be eliminated, so we must
> > propagate the IOMMU Device-TLB flush error up to iommu_iotlb_flush{,_all}.
> >
> > If IOMMU mapping or unmapping fails, the domain (with the exception of
> > the hardware domain) is crashed, as this is treated as a fatal error.
> > Rollback can then be lighter weight.
> >
> > This patch fixes the leaf ones.
>
> what about other leaves like AMD/ARM callbacks?
>
Thank you for reminding me of these cases. I indeed missed them in this v2.
Quan
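
For illustration, converting the remaining leaves would follow the same pattern as the VT-d changes quoted below: the callback's return type changes from void to int and the result of the underlying flush helper is returned instead of being dropped. A minimal sketch only, with placeholder names rather than the actual AMD/ARM code:

/*
 * Minimal sketch only: the same void -> int conversion applied to one of
 * the remaining leaf callbacks, so the flush error reaches
 * iommu_iotlb_flush{,_all}().  arch_iommu_iotlb_flush() and
 * arch_flush_pages() are placeholder names, not real AMD/ARM functions.
 */
int arch_flush_pages(struct domain *d, unsigned long gfn,
                     unsigned int page_count);

static int arch_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
                                  unsigned int page_count)
{
    /* Was: arch_flush_pages(d, gfn, page_count); with the error dropped. */
    return arch_flush_pages(d, gfn, page_count);
}
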
> >
> > Signed-off-by: Quan Xu <quan.xu@xxxxxxxxx>
> >
> > CC: Jan Beulich <jbeulich@xxxxxxxx>
> > CC: Kevin Tian <kevin.tian@xxxxxxxxx>
> > CC: Feng Wu <feng.wu@xxxxxxxxx>
> > ---
> > xen/drivers/passthrough/iommu.c | 8 ++------
> > xen/drivers/passthrough/vtd/iommu.c | 8 ++++----
> > xen/include/xen/iommu.h | 4 ++--
> > 3 files changed, 8 insertions(+), 12 deletions(-)
> >
> > diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> > index 9254760..d44fc39 100644
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -308,9 +308,7 @@ int iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_cou
> >      if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
> >          return 0;
> >
> > -    hd->platform_ops->iotlb_flush(d, gfn, page_count);
> > -
> > -    return 0;
> > +    return hd->platform_ops->iotlb_flush(d, gfn, page_count);
> >  }
> >
> >  int iommu_iotlb_flush_all(struct domain *d)
> > @@ -320,9 +318,7 @@ int iommu_iotlb_flush_all(struct domain *d)
> >      if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
> >          return 0;
> >
> > -    hd->platform_ops->iotlb_flush_all(d);
> > -
> > -    return 0;
> > +    return hd->platform_ops->iotlb_flush_all(d);
> >  }
> >
> > int __init iommu_setup(void)
> > diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> > index 5cc70ca..930661a 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -606,14 +606,14 @@ static int iommu_flush_iotlb(struct domain *d, unsigned long gfn,
> >      return rc;
> >  }
> >
> > -static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
> > +static int intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
> >  {
> > -    iommu_flush_iotlb(d, gfn, 1, page_count);
> > +    return iommu_flush_iotlb(d, gfn, 1, page_count);
> >  }
> >
> > -static void intel_iommu_iotlb_flush_all(struct domain *d)
> > +static int intel_iommu_iotlb_flush_all(struct domain *d)
> >  {
> > -    iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
> > +    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
> >  }
> >
> > /* clear one page's page table */
> > diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> > index ff4608d..ca1c409 100644
> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -161,8 +161,8 @@ struct iommu_ops {
> >      void (*resume)(void);
> >      void (*share_p2m)(struct domain *d);
> >      void (*crash_shutdown)(void);
> > -    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
> > -    void (*iotlb_flush_all)(struct domain *d);
> > +    int (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
> > +    int (*iotlb_flush_all)(struct domain *d);
> >      int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
> >      void (*dump_p2m_table)(struct domain *d);
> >  };
> > --
> > 1.9.1
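
As a caller-side illustration of the commit message's policy (crash the domain on flush failure, except for the hardware domain), a minimal sketch follows. iommu_flush_or_crash() is a hypothetical wrapper name; iommu_iotlb_flush(), is_hardware_domain() and domain_crash() are existing Xen interfaces:

#include <xen/sched.h>
#include <xen/iommu.h>

/*
 * Hypothetical wrapper (not part of this patch): once iommu_iotlb_flush()
 * propagates the Device-TLB flush error instead of panicking, a caller can
 * treat the failure as fatal for ordinary domains while sparing the
 * hardware domain.
 */
static int iommu_flush_or_crash(struct domain *d, unsigned long gfn,
                                unsigned int page_count)
{
    int rc = iommu_iotlb_flush(d, gfn, page_count);

    if ( rc && !is_hardware_domain(d) )
        domain_crash(d);

    return rc;
}
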
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel