
[Xen-devel] [PATCH v2 0/2] VT-d flush issue



These patches are based on Kevin Tian's previous discussion 'Revisit VT-d 
asynchronous flush issue'.
They fix the current timeout concern and also allow limited ATS support in a 
lightweight way:

1. Reduce the spin timeout to 1ms, which can be changed at boot time with 
'iommu_qi_timeout_ms' (a code sketch follows the example below).
   For example:
           multiboot /boot/xen.gz ats=1 iommu_qi_timeout_ms=100
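
   For reference, here is a minimal sketch of how such a knob could be wired 
up in qinval.c, assuming Xen's integer_param() boot-parameter macro and the 
NOW()/MILLISECS() time primitives; variable and function names are 
illustrative and not necessarily those in the patch:

    /* Sketch only: names are illustrative, not the exact patch. */
    static unsigned int __read_mostly iommu_qi_timeout_ms = 1;
    integer_param("iommu_qi_timeout_ms", iommu_qi_timeout_ms);

    /* Spin until the queued invalidation completes or the timeout expires. */
    static int queue_invalidate_wait_sync(volatile u32 *poll_slot)
    {
        s_time_t deadline = NOW() + MILLISECS(iommu_qi_timeout_ms);

        while ( *poll_slot != QINVAL_STAT_DONE )
        {
            if ( NOW() > deadline )
                return -ETIMEDOUT; /* caller handles the failed flush */
            cpu_relax();
        }

        return 0;
    }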

2. Fix the VT-d flush timeout issue.

    If an IOTLB/Context/IEC flush times out, we assume that all devices under 
this IOMMU can no longer function correctly.
    So for each device under this IOMMU, we mark it as unassignable and kill 
the domain owning the device.

    If a Device-TLB flush times out, we mark the target ATS device as 
unassignable and kill the domain owning this device.

    If the impacted domain is the hardware domain, we just issue a warning. 
It is an open question whether we should instead kill the hardware domain 
(or directly panic the hypervisor). Comments are welcome.

    A device marked as unassignable is disallowed from being assigned to 
any domain thereafter.
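
    For concreteness, a rough sketch of the error path described above, 
assuming Xen's domain_kill() and hardware_domain; pdevs_under_iommu() and 
the pdev->broken flag are hypothetical stand-ins for the actual helper and 
for whatever flag the patch adds to struct pci_dev:

    /*
     * Sketch of the error path on a QI flush timeout: every device
     * behind this IOMMU is now suspect.
     */
    static void qinval_flush_timeout(struct iommu *iommu)
    {
        struct pci_dev *pdev;

        list_for_each_entry ( pdev, pdevs_under_iommu(iommu), alldevs_list )
        {
            pdev->broken = true;       /* refuse any future assignment */

            if ( !pdev->domain )
                continue;

            if ( pdev->domain == hardware_domain )
                /* Open question: warn only, kill, or panic? Warn for now. */
                printk(XENLOG_WARNING VTDPREFIX
                       " flush timeout on %04x:%02x:%02x.%u (hwdom)\n",
                       pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
                       PCI_FUNC(pdev->devfn));
            else
                domain_kill(pdev->domain);
        }
    }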

* Kevin Tian did a basic functional review.

Changes in v2:
    1. Checking hardware_domain should be enough.
    2. Do the timeout check within dev_invalidate_iotlb for each ATS device, 
to identify the bogus device accurately.
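
To illustrate change 2, here is a sketch of per-device syncing inside 
dev_invalidate_iotlb(); queue_flush_dev_iotlb(), dev_invalidate_sync() and 
mark_device_unassignable() are hypothetical stand-ins for the real qinval 
helpers, the signature is abridged, and the dispatch on did/type is elided:

    /*
     * Sketch: sync each ATS device's Device-TLB flush individually,
     * so a timeout can be pinned on the exact device.
     */
    int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
                             u64 addr, unsigned int size_order, u64 type)
    {
        struct pci_ats_dev *ats_dev;
        int ret = 0;

        list_for_each_entry ( ats_dev, &ats_devices, list )
        {
            /* Queue the Device-TLB invalidation for this device ... */
            int rc = queue_flush_dev_iotlb(iommu, ats_dev, addr, size_order);

            /* ... and wait per device rather than once for the batch. */
            if ( !rc )
                rc = dev_invalidate_sync(iommu, ats_dev);

            if ( rc )
            {
                /* Mark only this device unassignable; others may be fine. */
                mark_device_unassignable(ats_dev);
                ret = rc;
            }
        }

        return ret;
    }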

Quan Xu (2):
  VT-d: Reduce spin timeout to 1ms, which can be boot-time changed.
  VT-d: Fix vt-d flush timeout issue.

 xen/drivers/passthrough/vtd/extern.h  |  4 ++
 xen/drivers/passthrough/vtd/iommu.c   |  6 +++
 xen/drivers/passthrough/vtd/iommu.h   |  5 ++
 xen/drivers/passthrough/vtd/qinval.c  | 97 +++++++++++++++++++++++++++++++++--
 xen/drivers/passthrough/vtd/x86/ats.c | 16 ++++++
 xen/include/xen/pci.h                 |  7 +++
 6 files changed, 131 insertions(+), 4 deletions(-)

-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

