
Re: [Xen-devel] [PATCH] Dump IOMMU p2m table



On 08/17/2012 04:31 PM, Santosh Jodh wrote:


-----Original Message-----
From: Wei Wang [mailto:wei.wang2@xxxxxxx]
Sent: Friday, August 17, 2012 4:15 AM
To: Santosh Jodh
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; xiantao.zhang@xxxxxxxxx; Tim (Xen.org); JBeulich@xxxxxxxx
Subject: Re: [PATCH] Dump IOMMU p2m table

On 08/16/2012 06:36 PM, Santosh Jodh wrote:
New key handler 'o' to dump the IOMMU p2m table for each domain.
Skips dumping table for domain0.
Intel and AMD specific iommu_ops handler for dumping p2m table.

Incorporated feedback from Jan Beulich and Wei Wang.
Fixed indent printing with %*s.
Removed superfluous superpage and other attribute prints.
Made next_level use consistent for AMD IOMMU dumps; warn if an
inconsistent entry is found.

Signed-off-by: Santosh Jodh<santosh.jodh@xxxxxxxxxx>
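
For reference, the generic side of such a key handler would be wired up
roughly as sketched below. This is only an illustration based on the
Xen 4.2-era keyhandler and iommu_ops interfaces; the exact names
(iommu_dump_p2m_table, the dump_p2m_table hook, the keyhandler struct
initializer) are assumptions, not quoted from this patch:

/* Sketch only: shows how an 'o' key handler might call into a
 * per-vendor dump_p2m_table iommu_ops hook.  Names and details are
 * assumptions, not the posted patch. */
#include <xen/lib.h>
#include <xen/sched.h>
#include <xen/iommu.h>
#include <xen/keyhandler.h>

static void iommu_dump_p2m_table(unsigned char key)
{
    struct domain *d;
    const struct iommu_ops *ops;

    if ( !iommu_enabled )
    {
        printk("IOMMU not enabled!\n");
        return;
    }

    ops = iommu_get_ops();
    for_each_domain(d)
    {
        if ( d->domain_id == 0 )   /* skip dumping the table for domain0 */
            continue;

        printk("\ndomain%d IOMMU p2m table:\n", d->domain_id);
        ops->dump_p2m_table(d);    /* vendor-specific (Intel/AMD) handler */
    }
}

static struct keyhandler iommu_p2m_table_keyhandler = {
    .diagnostic = 0,
    .u.fn = iommu_dump_p2m_table,
    .desc = "dump iommu p2m table"
};

/* registered during IOMMU setup, e.g.: */
/* register_keyhandler('o', &iommu_p2m_table_keyhandler); */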

diff -r 6d56e31fe1e1 -r 575a53faf4e1
xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c       Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c       Thu Aug 16 09:28:24 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u

 #include <asm/io_apic.h>

+static void amd_dump_p2m_table_level(struct page_info* pg, int level,
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        if ( next_level != (level - 1) )
+        {
+            printk("IOMMU p2m table error. next_level = %d, expected
%d\n",
+                   next_level, level - 1);
+
+            continue;
+        }

Hi,

This check is not right for 2MB and 1GB pages. For example, if a guest
uses 4-level page tables, then for a 2MB mapping the next_level fields
along the walk are 3 (l4) -> 2 (l3) -> 0 (l2), because the l2 entries
become PTEs and PTE entries have next_level = 0. I saw the following
output for those pages:

(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1

Thanks,
Wei

How about changing the check to:
         if ( next_level && (next_level != (level - 1)) )

That should be good, since we don't have skip levels.
Thanks,
Wei
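
With that change, the relevant part of the dump loop would look roughly
as below. This is only a sketch of the check being discussed, reusing
the same field accessors as the hunk quoted above; the final wording is
of course up to the next revision of the patch:

        next_level = get_field_from_reg_u32(entry[0],
                                            IOMMU_PDE_NEXT_LEVEL_MASK,
                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);

        /* next_level == 0 means the entry is a PTE (a 4KB page, or a
         * 2MB/1GB superpage), so only complain when a non-zero
         * next_level does not match the expected level - 1. */
        if ( next_level && (next_level != (level - 1)) )
        {
            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
                   next_level, level - 1);
            continue;
        }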


Thanks,
Santosh




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

