[Xen-devel] [PATCH v7 3/3] AMD/IOMMU: pre-fill all DTEs right after table allocation
Make sure we don't leave any DTEs through which unexpected requests would
be passed through untranslated. Set V and IV right away (with all other
fields left as zero), relying on the V and/or IV bits getting cleared only
by amd_iommu_set_root_page_table() and amd_iommu_set_intremap_table()
under special pass-through circumstances. Switch back to the initial
settings in amd_iommu_disable_domain_device().

Take the liberty and also make the latter function static, constifying its
first parameter at the same time.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
---
v7: Avoid writing the DT twice during initial allocation.
v6: New.

--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -1262,12 +1262,28 @@ static int __init amd_iommu_setup_device
     if ( !dt )
     {
+        unsigned int size = dt_alloc_size();
+
         /* allocate 'device table' on a 4K boundary */
         dt = IVRS_MAPPINGS_DEVTAB(ivrs_mappings) =
-            allocate_buffer(dt_alloc_size(), "Device Table", true);
+            allocate_buffer(size, "Device Table", false);
+        if ( !dt )
+            return -ENOMEM;
+
+        /*
+         * Prefill every DTE such that all kinds of requests will get aborted.
+         * Besides the two bits set to true below this builds upon
+         * IOMMU_DEV_TABLE_SYS_MGT_DMA_ABORTED,
+         * IOMMU_DEV_TABLE_IO_CONTROL_ABORTED, as well as
+         * IOMMU_DEV_TABLE_INT_CONTROL_ABORTED all being zero, and us also
+         * wanting at least TV, GV, I, and EX set to false.
+         */
+        for ( bdf = 0, size /= sizeof(*dt); bdf < size; ++bdf )
+            dt[bdf] = (struct amd_iommu_dte){
+                .v = true,
+                .iv = true,
+            };
     }
 
-    if ( !dt )
-        return -ENOMEM;
-
     /* Add device table entries */
     for ( bdf = 0; bdf < ivrs_bdf_entries; bdf++ )
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -267,9 +267,9 @@ static void __hwdom_init amd_iommu_hwdom
     setup_hwdom_pci_devices(d, amd_iommu_add_device);
 }
 
-void amd_iommu_disable_domain_device(struct domain *domain,
-                                     struct amd_iommu *iommu,
-                                     u8 devfn, struct pci_dev *pdev)
+static void amd_iommu_disable_domain_device(const struct domain *domain,
+                                            struct amd_iommu *iommu,
+                                            uint8_t devfn, struct pci_dev *pdev)
 {
     struct amd_iommu_dte *table, *dte;
     unsigned long flags;
@@ -284,9 +284,21 @@ void amd_iommu_disable_domain_device(str
     spin_lock_irqsave(&iommu->lock, flags);
     if ( dte->tv || dte->v )
     {
+        /* See the comment in amd_iommu_setup_device_table(). */
+        dte->int_ctl = IOMMU_DEV_TABLE_INT_CONTROL_ABORTED;
+        smp_wmb();
+        dte->iv = true;
+
         dte->tv = false;
-        dte->v = false;
+        dte->gv = false;
         dte->i = false;
+        dte->ex = false;
+        dte->sa = false;
+        dte->se = false;
+        dte->sd = false;
+        dte->sys_mgt = IOMMU_DEV_TABLE_SYS_MGT_DMA_ABORTED;
+        dte->ioctl = IOMMU_DEV_TABLE_IO_CONTROL_ABORTED;
+        smp_wmb();
+        dte->v = true;
 
         amd_iommu_flush_device(iommu, req_id);
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel