
Re: [Xen-devel] [PATCH] Dump IOMMU p2m table



On 08/15/2012 10:54 AM, Jan Beulich wrote:
On 14.08.12 at 21:55, Santosh Jodh <santosh.jodh@xxxxxxxxxx> wrote:

Sorry to be picky; after this many rounds I would have
expected that no further comments would be needed.

+static void amd_dump_p2m_table_level(struct page_info* pg, int level,
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( next_level >= 1 )
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), level - 1,

Did you see Wei's cleanup patches to the code you cloned from?
You should follow that route (replacing the ASSERT() with
printing of the inconsistency and _not_ recursing or doing the
normal printing), and using either "level" or "next_level"
consistently here.
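
For illustration, the check being asked for might look roughly like this in
the loop body (a sketch only; the identifiers are taken from the quoted patch
and the exact message text is made up):

    /* Flag a PDE whose next-level field disagrees with the level being
     * walked, and skip it instead of recursing into it or printing it
     * as a leaf. */
    if ( next_level && (next_level != (level - 1)) )
    {
        printk("IOMMU p2m table error: next_level = %d, expected %d\n",
               next_level, level - 1);
        continue;
    }

With such a check in place, the branch that follows can then test and recurse
on the same variable (e.g. test next_level >= 1 and pass next_level down)
instead of mixing next_level with level - 1.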

Hi, I tested the patch and the output looks much better now, please see the attachment. One thing I notice: there is a 1GB mapping in the guest, but it is printed in the same format as the 2MB mappings:

(XEN)  gfn: 0003fa00  mfn: 0023b000
(XEN)  gfn: 0003fc00  mfn: 00136200
(XEN)  gfn: 0003fe00  mfn: 0023ae00
(XEN)  gfn: 00040000  mfn: 00040000 << 1GB here
(XEN)  gfn: 00080000  mfn: 0023ac00
(XEN)  gfn: 00080200  mfn: 00136000
(XEN)  gfn: 00080400  mfn: 0023aa00
(XEN)  gfn: 00080600  mfn: 00135e00
(XEN)  gfn: 00080800  mfn: 0023a800
(XEN)  gfn: 00080a00  mfn: 00135c00
(XEN)  gfn: 00080c00  mfn: 0023a600
(XEN)  gfn: 00080e00  mfn: 00135a00
(XEN)  gfn: 00081000  mfn: 0023a400
(XEN)  gfn: 00081200  mfn: 00135800
(XEN)  gfn: 00081400  mfn: 0023a200
(XEN)  gfn: 00081600  mfn: 00135600

Thanks,
Wei

+                address, indent + 1);
+        else
+            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
+                   indent, " ",

             printk("%*sgfn: %08lx  mfn: %08lx\n",
                    indent, "",

I can vaguely see the point in splitting the two strings in the
first argument, but the extra space in the third argument is
definitely wrong - it'll make level 1 and level 2 indistinguishable.
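
The effect of that third argument is easy to demonstrate with plain printf()
outside of Xen (a minimal stand-alone example, not part of the patch):

    #include <stdio.h>

    int main(void)
    {
        /* "%*s" prints its string argument right-justified in a field
         * whose width is the preceding int argument.  A one-character
         * string " " can never print fewer than one character, so
         * indent == 0 and indent == 1 come out identical; the empty
         * string "" yields exactly 'indent' spaces. */
        for (int indent = 0; indent <= 2; indent++)
        {
            printf("[%*s] with \" \", indent=%d\n", indent, " ", indent);
            printf("[%*s] with \"\",  indent=%d\n", indent, "", indent);
        }
        return 0;
    }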

I also don't see how you addressed Wei's reporting of this still
not printing correctly. I may be overlooking something, but
without you making clear in the description what you changed
over the previous version that's also relatively easy to happen.

+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( level < 1 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 )
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
+                                     address, indent + 1);
+        else
+            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
+                   indent, " ",

Same comment as above.

+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
+                   dma_pte_superpage(*pte)? 1 : 0,
+                   dma_pte_read(*pte)? 1 : 0,
+                   dma_pte_write(*pte)? 1 : 0);

Missing spaces. Even worse - given your definitions of these
macros there's no point in using the conditional operators here
at all.
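
Assuming those macros already evaluate to 0 or 1 (which is what the comment
implies), the tail of that printk() could simply be written as (sketch, also
folding in the "" indentation argument discussed above):

    printk("%*sgfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
           indent, "",
           (unsigned long)(address >> PAGE_SHIFT_4K),
           (unsigned long)(pte->val >> PAGE_SHIFT_4K),
           dma_pte_superpage(*pte),
           dma_pte_read(*pte),
           dma_pte_write(*pte));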

And, despite your claim in another response, this still isn't similar
to AMD's variant (which still doesn't print any of these three
attributes).

The printing of the superpage status is pretty pointless anyway,
given that there's no single use of dma_set_pte_superpage()
throughout the tree - validly so since superpages can be in use
currently only when the tables are shared with EPT, in which
case you don't print anything. Plus you'd need to detect the flag
_above_ level 1 (at leaf level the bit is ignored and hence just
confusing if printed) and print the entry instead of recursing. And
if you decide to indeed properly implement this (rather than just
dropping superpage support here), _I_ would expect you to
properly implement level skipping in the corresponding AMD code
too (which similarly isn't being used currently).
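
If superpage support were kept rather than dropped, the structure being
described could look roughly like this in the VT-d loop (a sketch only,
reusing identifiers from the quoted patch and omitting the attribute columns
for brevity):

    /* A large-page PDE above the leaf level maps memory directly, so
     * print it as a mapping instead of recursing into something that
     * is not a page table.  At the leaf level the bit is ignored. */
    if ( next_level >= 1 && !dma_pte_superpage(*pte) )
        vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
                                 address, indent + 1);
    else
        printk("%*sgfn: %08lx mfn: %08lx\n",
               indent, "",
               (unsigned long)(address >> PAGE_SHIFT_4K),
               (unsigned long)(pte->val >> PAGE_SHIFT_4K));

The corresponding AMD walker would presumably need to honour the PDE's
next-level field in a similar way (recursing with next_level rather than a
fixed level - 1) to implement the level skipping mentioned at the end.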

Jan


Attachment: io_pt.dump
Description: io_pt.dump

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

