Re: [Xen-devel] [PATCH 2/2 V2] iommu/amd: Workaround for erratum 787
At 00:05 -0500 on 10 Jun (1370822751), suravee.suthikulpanit@xxxxxxx wrote:
> From: Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
>
> The IOMMU interrupt handling in the bottom half must clear the PPR log
> interrupt and event log interrupt bits to re-enable the interrupt. This
> is done by writing 1 to the memory-mapped register to clear the bit. Due
> to a hardware bug, if the driver tries to clear this bit while the IOMMU
> hardware is also setting it, the conflict leaves the bit set. If the
> interrupt handling code does not make sure the bit is cleared, subsequent
> changes in the event/PPR logs no longer generate interrupts, which would
> result in buffer overflow. After clearing the bits, the driver must
> therefore read the register back to verify they are clear.
Is there a risk of livelock here? That is, if some device is causing a
lot of IOMMU faults, a CPU could get stuck in this loop re-enabling
interrupts as fast as they are raised.
The solution suggested in the erratum seems better: if the bit is set
after clearing, process the interrupts again (i.e. run/schedule the
top-half handler). That way the bottom-half handler will definitely
terminate and the system can make some progress.
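
For illustration, a rough sketch of that approach applied to the event
log path quoted below (untested, and only a sketch: it assumes the
bottom half is the driver's usual tasklet, here called
amd_iommu_irq_tasklet, so that re-scheduling it re-runs the whole
handler):

    static void iommu_check_event_log(struct amd_iommu *iommu)
    {
        u32 entry;
        unsigned long flags;

        /* ... drain the event log as the driver already does ... */

        spin_lock_irqsave(&iommu->lock, flags);

        entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
        if ( iommu_get_bit(entry, IOMMU_STATUS_EVENT_OVERFLOW_SHIFT) )
            iommu_reset_log(iommu, &iommu->event_log,
                            set_iommu_event_log_control);

        /* RW1C interrupt status bit */
        writel(1 << IOMMU_STATUS_EVENT_LOG_INT_SHIFT,
               iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);

        /*
         * Erratum 787: the write-to-clear above can race with the
         * hardware setting the bit again.  If it is still set, ask
         * for another pass rather than looping here, so this handler
         * is guaranteed to terminate even under a fault storm.
         */
        entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
        if ( entry & (1 << IOMMU_STATUS_EVENT_LOG_INT_SHIFT) )
            tasklet_schedule(&amd_iommu_irq_tasklet);

        spin_unlock_irqrestore(&iommu->lock, flags);
    }

The bounded handler trades a little extra latency (one more tasklet
pass) for the guarantee that no CPU can be monopolised by a misbehaving
device.
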
Cheers,
Tim.
> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
> ---
> V2 changes:
> - Coding style fixes
>
>  xen/drivers/passthrough/amd/iommu_init.c | 36 ++++++++++++++++++++----------
>  1 file changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
> index a85c63f..048a2e6 100644
> --- a/xen/drivers/passthrough/amd/iommu_init.c
> +++ b/xen/drivers/passthrough/amd/iommu_init.c
> @@ -616,15 +616,21 @@ static void iommu_check_event_log(struct amd_iommu *iommu)
>
>      spin_lock_irqsave(&iommu->lock, flags);
>
> -    /*check event overflow */
>      entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
> +    while (entry & (1 << IOMMU_STATUS_EVENT_LOG_INT_SHIFT))
> +    {
> +        /* Check event overflow */
> +        if ( iommu_get_bit(entry, IOMMU_STATUS_EVENT_OVERFLOW_SHIFT) )
> +            iommu_reset_log(iommu, &iommu->event_log, set_iommu_event_log_control);
>
> -    if ( iommu_get_bit(entry, IOMMU_STATUS_EVENT_OVERFLOW_SHIFT) )
> -        iommu_reset_log(iommu, &iommu->event_log, set_iommu_event_log_control);
> +        /* RW1C interrupt status bit */
> +        writel((1 << IOMMU_STATUS_EVENT_LOG_INT_SHIFT),
> +               iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>
> -    /* RW1C interrupt status bit */
> -    writel((1 << IOMMU_STATUS_EVENT_LOG_INT_SHIFT),
> -           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
> +        /* Workaround for erratum787:
> +         * Re-check to make sure the bit has been cleared */
> +        entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
> +    }
>
>      spin_unlock_irqrestore(&iommu->lock, flags);
>  }
> @@ -684,15 +690,21 @@ static void iommu_check_ppr_log(struct amd_iommu *iommu)
>
>      spin_lock_irqsave(&iommu->lock, flags);
>
> -    /*check event overflow */
>      entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
> +    while ( entry & (1 << IOMMU_STATUS_PPR_LOG_INT_SHIFT ) )
> +    {
> +        /* Check event overflow */
> +        if ( iommu_get_bit(entry, IOMMU_STATUS_PPR_LOG_OVERFLOW_SHIFT) )
> +            iommu_reset_log(iommu, &iommu->ppr_log, set_iommu_ppr_log_control);
>
> -    if ( iommu_get_bit(entry, IOMMU_STATUS_PPR_LOG_OVERFLOW_SHIFT) )
> -        iommu_reset_log(iommu, &iommu->ppr_log, set_iommu_ppr_log_control);
> +        /* RW1C interrupt status bit */
> +        writel((1 << IOMMU_STATUS_PPR_LOG_INT_SHIFT),
> +               iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>
> -    /* RW1C interrupt status bit */
> -    writel((1 << IOMMU_STATUS_PPR_LOG_INT_SHIFT),
> -           iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
> +        /* Workaround for erratum787:
> +         * Re-check to make sure the bit has been cleared */
> +        entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
> +    }
>
>      spin_unlock_irqrestore(&iommu->lock, flags);
>  }
> --
> 1.7.10.4
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel