[PATCH v4 01/21] AMD/IOMMU: correct potentially-UB shifts
Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
the target level for page table walks"]) have made Coverity notice a
shift count in iommu_pde_from_dfn() which might in theory grow too
large. While this isn't a problem in practice, address the concern
nevertheless, so as not to leave dangling breakage should very large
superpages be enabled at some point.
Coverity ID: 1504264
While there also address a similar issue in set_iommu_ptes_present().
It's not clear to me why Coverity hasn't spotted that one.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v4: New.
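
[Editorial note, not part of the patch: a minimal standalone sketch of
the concern. Given PTE_PER_TABLE_SHIFT's value of 9 in the AMD IOMMU
definitions, a hypothetical page-table level of 4 yields a shift count
of 36, which is undefined behaviour for the plain int-typed constant 1
on targets where int is 32 bits wide. The values below (next_level, dfn)
are made up for illustration, and an LP64 target is assumed.]

#include <stdio.h>

#define PTE_PER_TABLE_SHIFT 9

int main(void)
{
    unsigned int next_level = 4;         /* hypothetical large-superpage level */
    unsigned long dfn = 0x123456789abUL; /* arbitrary example frame number */

    /*
     * Undefined behaviour when int is 32 bits wide: the shift count of
     * 36 meets or exceeds the width of the int-typed constant 1.
     *
     *   pfn = dfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
     */

    /* Well defined on LP64 targets: 1UL is 64 bits wide. */
    unsigned long pfn = dfn & ~((1UL << (PTE_PER_TABLE_SHIFT * next_level)) - 1);

    printf("masked pfn: %#lx\n", pfn);
    return 0;
}
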
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -89,11 +89,11 @@ static unsigned int set_iommu_ptes_prese
bool iw, bool ir)
{
union amd_iommu_pte *table, *pde;
- unsigned int page_sz, flush_flags = 0;
+ unsigned long page_sz = 1UL << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
+ unsigned int flush_flags = 0;
table = map_domain_page(_mfn(pt_mfn));
pde = &table[pfn_to_pde_idx(dfn, pde_level)];
- page_sz = 1U << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
if ( (void *)(pde + nr_ptes) > (void *)table + PAGE_SIZE )
{
@@ -281,7 +281,7 @@ static int iommu_pde_from_dfn(struct dom
{
unsigned long mfn, pfn;
- pfn = dfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
+ pfn = dfn & ~((1UL << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
mfn = next_table_mfn;
/* allocate lower level page table */